Patterns of enrollment in postsecondary education show that students frequently enroll in more than one postsecondary institution. Education’s National Center for Education Statistics (NCES) found that 40 percent of students who entered college in the 1995-1996 academic year attended at least two institutions in the following six years. Many students enroll in community colleges with a plan for eventually transferring to a 4-year baccalaureate program. As a result, 4-year institutions face pressure to award transfer credit for coursework taken at another institution. Data show that students transfer in numerous directions. Traditional transfer is typically from a 2-year institution to a 4-year institution. However, students also transfer from 4-year institutions to 2-year institutions, known as reverse transfer, as well as laterally between similar institutions (e.g., 2-year to 2-year or 4-year to 4-year). As shown in figure 1, traditional transfer accounts for at least one-third of first transfer activity.

When students want to transfer their earned academic credits from one institution to another, they must submit a transcript showing their coursework and earned grades to the receiving institution. The receiving institution may then evaluate the transcript and assess the educational quality of the student’s learning experience, compare its level and content with those of the learning experiences offered by the receiving institution, and determine the applicability of the student’s coursework to the degree or programs offered at the receiving institution. To help streamline the evaluation process, sending and receiving institutions enter into voluntary transfer agreements, which contain criteria for credits to transfer.

Today, many students who begin their studies at private, for-profit institutions transfer to public or private nonprofit 4-year institutions. To meet this demand, many private, for-profit institutions have revamped their curricula, transforming what had chiefly been vocational training aimed at job placement into a core educational curriculum that prepares students to pursue associate’s, bachelor’s, and even graduate degrees.

The Department of Education administers federal postsecondary education programs, including the Title IV federal financial aid programs under the Higher Education Act of 1965, as amended. To be eligible for federal financial aid, a postsecondary institution must be accredited by an accrediting agency recognized by the Secretary of Education. Accrediting agencies are private educational associations of regional or national scope that develop evaluation standards and conduct site visits to evaluate postsecondary institutions. To become recognized, an accrediting agency must submit a written application to Education that lays out its standards for accrediting institutions as well as its procedures for ensuring that institutions follow those standards. Education requires accrediting agencies to set standards that instruct institutions to have the resources and policies in place to provide a quality education. Education applies the same requirements to both regional and national accrediting agencies. 
Education has recognized eight regional accrediting agencies that generally accredit academic degree-granting institutions in their respective regions of the country, and about 50 national accrediting agencies that accredit various kinds of specialized postsecondary institutions, such as technological or religious institutions, and programs such as nursing and engineering.

The most current national data on students show that in September 2003, an estimated 15.2 million students were enrolled in postsecondary institutions; 77 percent of these students were enrolled in public institutions, 17 percent in private nonprofit institutions, and 6 percent in private for-profit institutions. Additionally, about 6,900 degree- and non-degree-granting postsecondary education institutions had students who were receiving federal financial aid. Figure 2 shows the percentage of students attending public, private nonprofit, and private for-profit institutions, and figure 3 shows the types of institutions receiving these funds. To qualify for federal financial aid, students are required, among other things, to demonstrate financial need, demonstrate qualifications to enroll in postsecondary education, be working toward an eligible degree or certificate, be a U.S. citizen or eligible noncitizen, and maintain satisfactory academic progress while in school. Education uses a formula to determine the amount of a student’s financial need and his or her expected family contribution toward tuition, taking into account a number of factors including the student’s or family’s resources and the costs of attending an institution. In their financial aid packages, students may receive federal grants or loans, with the neediest students receiving about $4,000 per year in a Pell grant and up to $4,000 in loans under the Perkins loan program. Additionally, all students qualify to receive Stafford loans, for which the government may subsidize or defer the loan interest while students remain enrolled in school.

Prior to granting credit for courses taken at another institution, institutions may consider a variety of criteria, such as accreditation, transfer agreements, and course equivalency. Many institutions consider the accreditation of the sending institution, including the type of accreditation—national or regional—when determining which transfer credits to accept. Institutions may also assess the equivalency of coursework taken at other institutions, either through establishing transfer agreements covering a number of courses or on a course-by-course basis. Though reviewing courses can be time-consuming and maintaining transfer agreements requires an ongoing commitment, officials said that transfer agreements do facilitate the transfer process. Institutions also vary in who makes the final decision on which credits to accept—an administrative official or departmental faculty—and when they inform a student of their decision.

We found that when making decisions about whether to accept transfer credits, institutions often used the sending institution’s accreditation as the initial measure of the quality of the institution and its coursework. We found that about 84 percent of postsecondary institutions had policies to consider the accreditation of the sending institution when assessing transfer credits. About 63 percent of these institutions specified that accreditation from any regional accrediting agency was acceptable, and about 14 percent specified that they accepted national accreditation. 
Institutions indicating that they accepted regional accreditation told us that they also provide students with other options for getting their credits transferred, such as passing a competency examination before their credits would be granted. Many also said that they would allow any student to appeal a decision, and an appeal would result in a more thorough review of the student’s transcript. Several officials from postsecondary institutions with regional accreditation told us that as a rule, they did not accept credits earned at institutions with national accreditation. For example, an official at one institution told us that the institution did not accept credits from nationally accredited institutions because the coursework was technical and not academic. Similarly, an official at a regionally accredited institution told us that the institution could not accept credits from nationally accredited institutions unless the accrediting standards of the sending institution paralleled its own standards. One reason given by a regional accrediting agency official for the incomparability of credits earned at nationally accredited institutions was that these institutions follow less stringent standards regarding such factors as faculty qualifications and library resources. However, our review of the standards from the regional accrediting agencies found that no regional accrediting agency explicitly stated in its written policy that credits from nationally accredited institutions should be denied.

We found that about 11 percent of institutions have policies that explicitly state that they will accept credits from both regionally and nationally accredited institutions. For example, one institution’s credit transfer policy states that it will accept credits from “universities and colleges with accreditations by one of the regional accrediting associations,… community and technical colleges with accreditation by one of the regional accrediting associations,… and technical colleges, business colleges and other schools lacking regional accreditation but having accreditation by another agency recognized by the Council for Higher Education Accreditation.” Officials from a regionally accredited institution told us that they would accept credits regardless of accreditation and would review all credits the same way. However, this process was more time-consuming than relying solely on accreditation. To save time, some institutions had developed databases to track previously approved courses in order to eliminate the need to reevaluate them.

Officials at a nationally accredited institution told us that their students often have difficulty transferring credits and that they are taking actions to assist their prospective transfer students. They told us that regionally accredited institutions did not always accept courses taken at the nationally accredited institution. They advised students to assume that credits would not transfer to regionally accredited institutions. Two nationally accredited institutions we visited have responded to the credit transfer difficulties by attaining, or seeking to attain, regional accreditation in order to improve their students’ ability to transfer credits. One of the three nationally accredited institutions we visited—the institution with dual national-regional accreditation—reported having no problems with transferring its students to 4-year institutions. 
In lieu of seeking dual accreditation, another nationally accredited institution we visited is reaching out to regionally accredited institutions to develop transfer agreements to facilitate the transfer process.

While many institutions use accreditation as a factor to assess transfer credits, about 69 percent of postsecondary institutions have entered into voluntary transfer agreements with other institutions. Typically, institutions we visited establish transfer agreements with institutions that send large numbers of transfer students. For example, Columbia College in Missouri—a college with campuses in 11 states—has transfer agreements with 18 community colleges throughout the country. In these agreements, receiving institutions review a number of courses from sending institutions and agree to accept comparable credits from those institutions. For example, the State University of New York system has a transfer agreement among all of its institutions specifying that all 4-year universities will accept associate degrees from community colleges within its system, thus guaranteeing a baccalaureate degree with the completion of 60 additional credits. Agreements can also cover individual courses, such as mathematics and science courses that are required prerequisites for upper-level courses.

Institution officials told us that although maintaining transfer agreements requires considerable commitment, these agreements are useful because they make the transfer process more transparent and allow it to operate more smoothly. The agreements require receiving institutions to review the course content of each partner institution to determine its comparability and applicability to degree program requirements. Maintaining these agreements requires regular, ongoing communication between participating institutions to keep them apprised of all new course offerings or any changes to current courses or degree requirements. According to officials from several of the schools we visited, the process of establishing the agreements and keeping them current requires considerable commitment because institutions frequently revise their courses and degree requirements. For example, it took one private institution in New Jersey a full year to review courses for every community college with which it had established new transfer agreements. At another institution we visited, the official responsible for credit evaluation told us that the time required for maintaining transfer agreements had led the institution to reduce the number of its transfer agreements by about 25 percent.

While transfer agreements can be time-consuming, they help make the transfer process more transparent. For example, in New Jersey, many 4-year institutions have established transfer agreements with community colleges in the state. Community college students may also access a Web page listing courses at their institution that will transfer to participating 4-year institutions in New Jersey, allowing students to know which credits will transfer before they apply to a new institution. One official told us that the transfer agreements, once established, allow the credit transfer process to operate smoothly between the partnering institutions, because it becomes a matter of checking a list to determine which credits to accept or deny. Officials offered a variety of reasons for pursuing transfer agreements. 
In some instances, transfer agreements were mandated in state law or facilitated by state agencies, but these types of agreements were usually between public institutions only. In other instances, institutions sought to establish transfer agreements out of convenience because of the significant number of students who moved between their institutions. In addition to states and institutions, another organization we visited is involved in facilitating the establishment of transfer agreements. To improve access to baccalaureate programs for certain populations of minority students, the National Articulation and Transfer Network has facilitated transfer agreements between community colleges and minority-serving institutions across the country.

Some institutions review students’ transcripts to determine the comparability of the students’ coursework. Specifically, institutions consider the characteristics of individual courses, such as the similarity of courses on a student’s transcript to courses offered at the receiving institution and the applicability of the courses to the student’s intended major. Institutions may ask for a course description or a class syllabus to support their assessment. To expedite this review, some institutions maintain a historical list of transfer courses that they have accepted in the past. While not always a guarantee of transferability, listed courses have a greater likelihood of acceptance than unlisted courses. At the institutions we visited, two groups of reviewing officials are generally responsible for determining which courses to accept for transfer: (1) an admissions or other administrative officer, who determines which courses meet general requirements, and (2) academic department faculty members, who determine which courses meet degree requirements within their departments. When reviewing officials consider the student’s official transcript, they may, among other things, review transfer agreements and historical lists of accepted courses; request the syllabus or a list of books used in the course; discuss the course with a representative from the sending institution; or use an Internet service, such as the one maintained by the American Association of Collegiate Registrars and Admissions Officers, to obtain a syllabus and description of the course. This process is shown in figure 4.

Some 4-year institutions, citing time constraints and a significant backlog, have taken steps to limit the number of courses they review. Some institutions have established criteria for transferable courses, such as the minimum grade or course level for which credits will be accepted. Several 4-year institutions told us that they did not accept for transfer any remedial (developmental) courses, technical courses, or upper-level courses taken at a 2-year institution. Because of the backlog created by the number of transcripts to review, not all institutions succeed in providing students with an official report of transfer credits accepted before classes begin. Officials at one institution told us that they provide the report within 1 year of the student’s matriculation and encourage students to take upper-level general education courses until the report is received.

To facilitate the transfer of academic credits, states enact a variety of legislation and implement statewide initiatives covering primarily public postsecondary institutions, and accrediting agencies set accreditation standards. 
Many states have passed legislation that requires public community colleges and 4-year public institutions to establish transfer agreements and authorizes common curricula to ease the transfer of credits. Some states have established a common course numbering system for public institutions within the state and created statewide committees to oversee the transfer of credit process within the state. In other states, state law requires university systems to initiate and form transfer agreements with institutions within the system to enhance the transferability of credits. Some states have also launched statewide initiatives to encourage transfer between 2-year and 4-year public institutions, including offering guarantees that credit will transfer. For their part, accrediting agencies facilitate the transfer process through the standards they set for affiliated institutions. Accrediting agencies that we reviewed have set standards for accreditation that require institutions to make their credit transfer policy publicly available. The six regional accrediting agencies that we reviewed generally encourage their member institutions not to accept or deny transfer credit exclusively on the basis of the accreditation of the sending institution. Some accrediting agencies have incorporated this criterion into their standards; others have issued policy or position statements. States facilitate the transfer of credits among public institutions through various statewide legislation and initiatives that, among other things, support the establishment of statewide transfer agreements, common core curricula, and common course numbering systems, and encourage institutions and others to make transfer information available to the public. We identified 39 states that had legislation pertaining to the transfer of credit between postsecondary public institutions. In general, most of the legislation focuses on facilitating the transfer of credit for students transferring from community colleges to 4-year public institutions. Some states require or encourage the establishment of statewide transfer agreements. For example, a Massachusetts statute empowers its board of higher education to develop and implement a statewide transfer agreement to facilitate the transfer of students without the loss of academic credit or standing from one public institution to another. Arizona law requires institutions to cooperate in operating a statewide transfer network to facilitate the transfer of community college students to Arizona public universities without a loss of credit toward a baccalaureate degree. An Indiana statute requires the state’s Commission for Higher Education to develop statewide transfer of credit agreements for courses that are most frequently taken by undergraduates. Colorado’s statewide transfer policy guarantees that as many as 37 credits of approved general education courses taken at a Colorado public college or university will transfer among all 2-year and 4-year institutions in the state. Some states require or encourage the establishment of common core curricula. A California statute directed the governing boards of the University of California, the California State University, and the California community colleges to jointly develop and adopt a common core curriculum in general education for the purpose of transfer. 
These efforts led to California’s general education transfer curriculum, which identifies courses that community college students may complete to satisfy general education requirements at campuses of both the University of California and California State University systems. An Arkansas statute requires the Arkansas Higher Education Coordinating Board to consult with colleges and universities to establish a minimum core of courses that applies toward the general education core curriculum requirements and is fully transferable between state institutions. Some states require or encourage the establishment of a common course numbering system. Florida has developed a statewide course numbering system that provides a database of equivalent postsecondary courses at public vocational technical centers, community colleges, universities, and participating nonpublic institutions. More than 100 institutions in Texas participate in the state’s voluntary course numbering program, which provides a shared, uniform set of course designations for students and their advisers to use in determining both course equivalency and degree applicability of transfer credits on a statewide basis. Some state statutes identify the types of courses or blocks of courses that are transferable. For example, Missouri officials told us that they interpret their state law as requiring all institutions to accept associate degrees from any source as evidence that general education courses have been completed. Additionally, to facilitate student transfer among Missouri institutions and to increase institutions’ accountability for student performance in general education, the Coordinating Board for Higher Education designed a 42-semester-hour block of general education. Similarly, a Texas statute states that if a student successfully completes a field-of-study curriculum developed by the state’s board of higher education, that block of courses may be transferred and must be substituted for the receiving institution’s lower division requirements for the comparable degree program, and the student must receive full academic credit. Likewise a Kentucky statute mandates that all lower division academic courses offered by community colleges be transferable for academic credit to any and all 4-year public colleges and universities in the state. Some state higher education agencies make information on transfer agreements and course equivalency guides available to the public. For example, some states, such as California, Maryland, and Florida, have placed course equivalencies online for easy access and reference. California maintains an online student transfer system called ASSIST that serves as the official repository of transfer agreements for all public postsecondary institutions in California and facilitates transfer from a California community college to a University of California or California State University campus. Maryland’s interactive online transfer information source called ARTSYS allows students to find course equivalencies between institutions, evaluate their transcripts, search for majors, and explore recommended transfer programs. In addition, it provides faculty access to update courses and provide course evaluations. The Florida Academic Counseling and Tracking for Students (FACTS) system offers a comprehensive range of transfer services, including a transfer student bill of rights, links to statewide transfer agreements, and an interactive transfer evaluation tool. 
A Pennsylvania statute supports the implementation of a Web-based application that makes all transfer agreements among higher education institutions available on the Internet. Similarly, Virginia requires its state council of higher education to publicize all general education courses offered at public 2-year institutions, designating the courses accepted for transfer credit at 4-year public and private postsecondary institutions in Virginia. Ohio implemented a framework that guarantees statewide transfer for students and published a transfer assurance guide that advises students of the 38 baccalaureate degree pathways available for them to pursue anywhere within the public higher education system and in Ohio’s participating private institutions, and that identifies which courses are guaranteed to transfer and apply to requirements within the system.

While state legislation regarding credit transfer is generally intended to facilitate the transfer of credits among public institutions, a few state statutes require or encourage the involvement of private institutions. For example, the Louisiana Board of Supervisors of Community and Technical Colleges is required to continue development of articulation agreements between institutions under the management of the board and institutions managed by other postsecondary management boards, both public and private. A Minnesota statute requests the governing boards of private institutions that grant associate and baccalaureate degrees and have a high frequency of transfer students to participate in the development of required course equivalency guides. A West Virginia statute requires the state’s Council for Community and Technical College Education to establish and implement policies and procedures that ensure that students may transfer and apply the credits earned at any regionally accredited in-state or out-of-state higher education institution.

Accrediting agencies’ standards for evaluating transfer credit generally reflect the three criteria specified in a 1978 joint national statement on the transfer and award of credit: the educational quality of the sending institution, the comparability of credit to be transferred to the receiving institution, and the applicability of the credit in relation to the programs being offered at the receiving institution. These agencies’ accrediting standards generally require receiving institutions to consider whether courses are consistent with their own curricula and standards. In 2000, the Council for Higher Education Accreditation (CHEA) issued an updated statement that offered four additional criteria that accrediting agencies and institutions should consider when making decisions about transfer of credit and academic quality. Specifically, these criteria emphasized the need for institutions and accrediting agencies to (1) ensure that transfer decisions are not based solely on the source of accreditation of a sending program or institution, (2) reaffirm that the considerations that inform transfer decisions are applied consistently in the context of changing student attendance patterns and emerging new providers of higher education, (3) ensure that students and the public are fully and accurately informed about their respective transfer policies and practices, and (4) be flexible and open in considering alternative approaches to managing transfer when these approaches will benefit students. 
The accrediting standards and transfer policies of the 6 regional and 10 national accrediting agencies that we reviewed generally reflect the original criteria included in the 1978 joint statement. In addition, some accrediting agencies incorporated into their standards the CHEA criteria added in 2000 that the institutions’ process for accepting transfer credit be fair, consistently applied, and publicly communicated. The 6 regional accrediting agencies that we reviewed all support CHEA’s statement on the role of accreditation in the credit transfer decision-making process. As shown in table 1, some accrediting agencies have incorporated this criterion into their standards; others have issued policy or position statements. Regional accrediting agencies recognize that the institutions are responsible for determining their own policies and practices with regard to the transfer and award of credit. Accrediting agencies will not know whether an institution is following the standards and general guidelines until the institution is reviewed. Officials at one accrediting agency told us that because of the nature of the review cycle, it could take several years to review all of the institutions and thereby ensure that they had implemented the standards.

The inability to transfer credits may result in longer enrollment, more tuition payments, and additional federal financial aid awards, but the full extent to which such results occur cannot be determined because institutions told us they do not collect specific data on students who are unable to transfer credit. For example, a 1996 study of Arizona’s public university transfer practices found that community college transfer students may be required to take additional courses in order to complete their degrees because academic departments do not always accept community college courses as prerequisites. The study found that the accumulation of excess college credit hours could lead to additional years in school, added taxpayer expense such as financial aid awards, or a failure to complete a degree. Officials at selected nationally accredited institutions also told us that denials based on accreditation can result in students taking additional coursework in order to graduate. For example, one nationally accredited institution told us that one of its recent graduates had been required to repeat 2 years of coursework at a regionally accredited institution before he could be admitted to a graduate program.

While credit transfer denials likely affect transfer students in a number of ways, the effect that these denials have on students’ enrollment duration, success in completing a baccalaureate program, or the affordability of postsecondary education cannot be determined with available data. Institution officials told us that they did not maintain data on the number of credits they have denied for transfer because it would be too cumbersome to maintain these files. Our analysis of Education’s postsecondary education data found that transfer students fare differently from nontransfer students. The national data indicate that, on average, transfer graduates take about 10 more credits and 3 more months to complete their baccalaureate degree than nontransfer graduates. However, transfer students could take longer to graduate for a variety of reasons that may or may not be related to their decision to transfer. For example, a student who changes majors may need to take additional courses in order to graduate. 
We could not determine the extent to which transfer students differ from nontransfer students in these areas. Nonetheless, students taking additional credits as a result of being unable to transfer credits will likely have to pay additional tuition. Based on national averages, these tuition payments could range from about $150 per credit hour for students attending public institutions to about $520 for those attending private schools. At those rates, for example, 10 additional credits would translate into roughly $1,500 to $5,200 in additional tuition. The extent to which these costs are borne by the student or the federal government would vary depending on the student’s eligibility for financial aid.

Postsecondary institutions differ in how they assess transfer credits, and as a result, the current credit transfer process does not ensure the consistent consideration of student coursework. To facilitate the credit transfer process, many states have enacted legislation and implemented statewide initiatives covering primarily public postsecondary institutions within their respective states. However, state efforts have limited influence over students transferring to and from the nation’s private institutions or institutions located outside state boundaries. Also, all regional accrediting agencies subscribe to the principle that credits should not be accepted or denied on the basis of the type of accreditation, but not all of them have set standards requiring their member institutions to do so. When such standards have been set, it can take accrediting agencies years to review their member institutions’ policies to confirm their compliance. To preserve their institutional reputations and maintain quality, postsecondary institutions want their graduates to meet certain academic standards. The federal government sets the same standards for regional and national accrediting agencies to ensure that postsecondary institutions provide a quality education. At the same time, it is in the federal government’s interest to ensure that students receiving assistance through federal aid programs, who have earned credits at an approved accredited institution, do not have to repeat coursework when transferring to another institution meeting the same standards. However, some institutions continue to deny credits from institutions with national accreditation without reviewing student coursework, even though these institutions are accredited by federally recognized national accrediting bodies. Consequently, qualified students could be denied credit for comparable coursework, leading them to incur further educational costs that they may need to offset with additional federal financial aid.

In order to ensure consistent consideration of students’ previous coursework, Congress should consider further amending the Higher Education Act of 1965 to require postsecondary institutions eligible for Title IV funding not to deny transfer credits on the basis of the type of accreditation.

We provided a draft of this report to the Department of Education for review and comment. In its written response, included as appendix III, Education said our report was useful and informative. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the issue date. At that time, we will send copies of this report to the Secretary of Education, interested congressional committees, and other interested parties. We will also make copies of this report available to others on request. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7215 or ashbyc@gao.gov. Staff acknowledgments are listed in appendix IV.

To describe how the transfer of credit operates among postsecondary institutions, we examined transfer of credit policies for a nationally representative sample of institutions and interviewed officials responsible for credit transfer evaluations from public, private nonprofit, and private for-profit institutions. At each institution, we interviewed officials and asked them questions related to their policies and practices on transfer of credit, such as their criteria for accepting transfer credits, their process for evaluating transcripts, and whether students had appeal rights once a decision was made. We also interviewed officials from the Council for Higher Education Accreditation (CHEA), the American Association of Collegiate Registrars and Admissions Officers (AACRAO), and the Institute for Higher Education Policy (IHEP). We reviewed publications and studies conducted by these organizations, the American Association of Community Colleges (AACC), and the Career College Association (CCA).

To learn about how states and accrediting agencies facilitate the transfer of credit process, we searched legal databases for state statutes in all 50 states to determine whether the states had legislation related to transfer of credit. We also interviewed officials responsible for higher education from five states, officials from national and regional accrediting agencies, and the Department of Education (Education). We reviewed standards for accreditation from 10 national accrediting agencies that accredit institutions that grant degrees and the 6 regional accrediting agencies that accredit senior or 4-year institutions. The 5 states we visited were California, Florida, Missouri, New Jersey, and New York. In order to get a broad perspective on the challenges that students face when transferring credit, we selected states with varying levels of involvement in the credit transfer process and large transfer student populations.

To understand the implications for students and the federal government of students’ inability to transfer credit, we reviewed some of Education’s national databases to describe the typical transfer student. We reviewed the Integrated Postsecondary Education Data System (IPEDS) database to analyze the average cost of attendance at various types of institutions and the Beginning Postsecondary Students (BPS) database to learn about transfer trends. We also used data from the National Education Longitudinal Study of 1988 (NELS). In addition, we spoke with national experts and reviewed national studies related to the implications for students and the federal government of students’ inability to transfer credits.

In order to collect information about the ways in which institutions of higher education treated transfer credits, we undertook a data collection effort from a random sample of 270 institutions of higher education. The sample was obtained from the IPEDS database. The IPEDS data were from the 2000-2001 time period. IPEDS is the Department of Education’s core postsecondary education data collection program. It is a single, comprehensive system that encompasses all identified institutions with the primary purpose of providing postsecondary education. 
IPEDS is designed to produce national-, state-, and institution-level data for most postsecondary institutions. We drew a stratified random sample from the IPEDS database. The sample consisted of 270 institutions, with 90 institutions from each of three categories of postsecondary institutions. The three categories we sampled were 4-year public, 4-year private nonprofit, and 2-year public institutions. These three types of institutions represent 3,096 institutions and over 95 percent of students attending higher education institutions. GAO did not sample 4-year private, for-profit institutions and 2-year private institutions. These types of institutions represented 1,264 institutions but less than 5 percent of students attending higher education institutions. Of the 270 institutions that were randomly selected, 6 were found to be out of scope because our research indicated that they did not grant degrees or granted only graduate degrees. These 6 institutions were not included in the final results. Table 2 describes our source and response rates for our sample of institutions.

Survey results based on probability samples are subject to sampling error. Our sample of 264 institutions is only one of a large number of samples we might have drawn from the total population of postsecondary institutions. Since each sample could have provided different estimates, we express our confidence in the precision of our results as 95 percent confidence intervals. These are intervals that would contain the actual population values for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values of the study population. All percentage estimates from this survey of 4-year public institutions, 4-year private nonprofit institutions, and 2-year public institutions have sampling errors not exceeding plus or minus 7 percentage points.

We collected data from the 264 schools primarily through a data collection instrument that we filled out after examining the Web sites of the sampled schools. Before deploying the Web site data collection instrument, we conducted pretests with Web sites from 5 randomly sampled schools. We followed up these Web site examinations with telephone calls to ensure that the information we were obtaining from the Web sites accurately reflected the transfer credit policies of the respective schools. The extent of institutions’ policies on transferring credit from sending institutions varied widely, and the policies were found under different categories on the institutions’ Web sites. For example, some institutions listed their policies under links to transfer student information or admissions information, while others listed their policies only in the college catalog/bulletin that was available at the Web site. Most college catalogs/bulletins listed the transfer credit policy. In almost all cases, we printed proof of answers and highlighted, underlined, or numbered the answers to match the question number. All results obtained from the Web site data collection instrument were verified by a second GAO reviewer who independently examined documentation from the Web site or the information on the Web site itself. All but 8 of the 264 institutions had Web sites that we were able to examine. For those institutions that did not have Web sites, we spoke with officials from the institutions and asked questions from a telephone data collection instrument. 
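The plus-or-minus 7 percentage point bound cited above can be roughly sanity-checked with the standard margin-of-error formula for an estimated proportion. The sketch below is illustrative only: it assumes simple random sampling with a worst-case proportion of 0.5, uses the overall counts reported above (264 in-scope institutions drawn from a study population of 3,096), and ignores the stratification and weighting reflected in the actual survey estimates.

```python
import math

def margin_of_error(p, n, population, z=1.96):
    """Approximate 95 percent margin of error for an estimated proportion.

    p          -- estimated proportion (0.5 is the worst case)
    n          -- number of institutions in the sample
    population -- number of institutions in the study population
    z          -- critical value for a 95 percent confidence level
    """
    # Standard error of a proportion under simple random sampling,
    # with a finite population correction for sampling without replacement.
    fpc = math.sqrt((population - n) / (population - 1))
    se = math.sqrt(p * (1 - p) / n) * fpc
    return z * se

# Worst-case margin for the 264 sampled institutions drawn from the
# 3,096 institutions in the three sampled categories.
moe = margin_of_error(p=0.5, n=264, population=3096)
print(f"Approximate margin of error: +/- {moe * 100:.1f} percentage points")
```

Under these simplifying assumptions, the worst-case margin works out to about 6 percentage points, which is consistent with the plus-or-minus 7 percentage point bound reported for the survey estimates.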
The results of these telephone interviews were recorded by GAO interviewers. For this report, we used data from the Integrated Postsecondary Education Data System database, the National Education Longitudinal Study of 1988, and the Beginning Postsecondary Students longitudinal study database. We reviewed technical and methodological documentation for all three databases, and in the case of NELS also spoke with a research methodologist who had worked on the study. We found the data from the databases to be sufficiently reliable for the purposes of this engagement.

Ariz. Rev. Stat. § 15-1824. Requires that community college districts and universities cooperate in operating a statewide articulation and transfer system, including the process of transfer of lower division general education credits, general elective credits, and curriculum requirements for approved majors, to facilitate the transfer of community college students to Arizona public universities without a loss of credit. Ark. Code Ann. § 6-53-205. Requires that the Arkansas Higher Education Coordinating Board develop a plan to maximize transfer credits of students from institutions within the system, including the development of a core transfer program for students desiring to obtain a baccalaureate degree after transferring from an institution within the 2-year system to the 4-year system. Ark. Code Ann. § 6-61-218. Requires the Arkansas Higher Education Coordinating Board to establish in consultation with the colleges and universities a minimum core of courses that shall apply toward the general education core curriculum requirements and that shall be fully transferable between state institutions. Ark. Code Ann. § 6-61-505. Gives the State Community College Board the duty and power to work with senior institutions of the state to develop the criteria for transfer of credits of students entering senior institutions from community colleges. Cal. Ed. Code § 66720. Requires the Board of Governors of the California Community Colleges, the Regents of the University of California, and the Trustees of the California State University to jointly develop, maintain, and disseminate a common core curriculum in general education courses for the purposes of transfer. Cal. Ed. Code § 66730 and note. Directs the Regents of the University of California (UC), the Trustees of the California State University (CSU), and the Board of Governors of the California Community Colleges to have as a fundamental policy the maintenance of a healthy and expanded student transfer system. Community college students must have access to a viable and efficient transfer agreement program to the California State University and the University of California for upper division work toward a baccalaureate degree. Cal. Ed. Code § 66738. Holds the governing board of each public postsecondary education segment accountable for the development and implementation of formal systemwide articulation agreements and transfer agreement programs. Cal. Ed. Code § 66739.5. States the intent of the legislature as ensuring that community college students who wish to earn the baccalaureate degree at California State University are provided with a clear and effective path to this degree. Cal. Ed. Code § 66740. 
Requires each department, school, and major in UC and CSU to develop, in conjunction with community college faculty in appropriate and associated departments, discipline-specific articulation agreements and transfer agreements for those majors that have lower-division prerequisites. Colo. Rev. Stat. § 23-1-108. Requires the Colorado Commission on Higher Education to establish, after consultation with the governing boards of institutions, and enforce student transfer agreements between 2-year and 4-year institutions and among 4-year institutions. Such transfer agreements shall include provisions under which institutions shall accept all credit hours of acceptable coursework for automatic transfer to another state-supported institution of higher education in Colorado. The commission shall also establish and enforce student transfer agreements between degree programs offered on the same campus or within the same institutional system. Colo. Rev. Stat. § 23-1-125. Directs the Colorado Commission on Higher Education, in consultation with each Colorado public institution of higher education, to outline a plan to implement a core course concept that defines the general education course guidelines for all public institutions of higher education. Colo. Rev. Stat. § 23-5-122. Requires the governing board of every state-supported institution of higher education to have in place and enforce policies regarding transfers by students between undergraduate degree programs that are offered within the same institution or within the same system. Colo. Rev. Stat. § 23-13-104. Lists statewide expectations and goals for higher education, including ensuring that no student’s graduation is delayed due to lack of access to or availability of required and core courses and ensuring that students who change degree programs lose only those credit hours that clearly and justifiably cannot apply in the degree programs to which the student transfers. Conn. Gen. Stat. § 10a-19a. Directs the Commissioner of Higher Education, in consultation with the Higher Education Coordinating Council, to establish a statewide Advisory Council on Student Transfer and Articulation to maximize the transferability of course credits. Fla. Stat. Ann. § 1007.01. Requires the State Board of Education, in order to improve and facilitate articulation systemwide, to develop policies and guidelines with input from statewide K-20 advisory groups established by the Commissioner of Education relating to a number of issues, including articulation agreements, admissions requirements, and the transferability of credits among institutions. Fla. Stat. Ann. § 1007.22. Authorizes university boards of trustees and community college boards of trustees to establish intrainstitutional and interinstitutional programs to maximize articulation. These may include transfer agreements that facilitate the transfer of credits between public and nonpublic postsecondary institutions and the concurrent enrollment of students at a community college and a state university. Fla. Stat. Ann. § 1007.23. Requires the State Board of Education to establish in rule a statewide articulation agreement, which must among other things specifically provide that every associate in arts graduate of a community college shall have met all general education requirements and must be granted admission to the upper division of a state university, except for certain listed programs. 
The articulation agreement must also guarantee the statewide articulation of appropriate courses within associate in science degree programs to baccalaureate degree programs. 110 Ill. Comp. Stat. 805/2-11. Empowers the State Board in cooperation with the 4-year colleges to develop articulation procedures to the end that maximum freedom of transfer among community colleges and between community colleges and degree-granting institutions be available. Ind. Code Ann. § 20-12-0.5-8. Requires the Commission for Higher Education to, among other things, develop through the committee statewide transfer of credit agreements for courses that are most frequently taken by undergraduates; develop through the committee statewide agreements under which associate degrees articulate fully with related baccalaureate degree programs; and publicize by all appropriate means, including an Internet Web site, a master list of course transfer of credit agreements and program articulation agreements. Ind. Code Ann. § 20-12-17-2. Requires all state-supported universities to accept the transfer credit of all appropriate courses successfully completed by any student at any other state-supported postsecondary educational institution having the same level of accreditation. Kans. Stat. Ann. § 72-4454. Requires the state board of regents to adopt a policy requiring articulation agreements among area vocational schools, area vocational-technical schools, community colleges, technical colleges, and state educational institutions. Ky. Rev. Stat. Ann. § 164.580. Requires the Kentucky Community and Technical College System to be responsive to the needs of students and employers to support the lifelong learning needs of Kentucky citizens in order to, among other things, facilitate transfers of credit among certificate, diploma, technical, and associate degree programs. Ky. Rev. Stat. Ann. § 164.583. Requires all lower-division academic courses offered by the community colleges to be transferable for academic credit to any and all 4-year public colleges and universities. La. Rev. Stat. Ann. § 17:3129.1. Requires postsecondary management boards to adopt and implement in the institutions under their jurisdiction common core courses that articulate from any institution of public higher education to any other such institution, taking into consideration the accreditation criteria of the institution receiving the credit. La. Rev. Stat. Ann. § 17:1871. Requires the Board of Supervisors of Community and Technical Colleges to continue development of articulation agreements between institutions under the management of the board and institutions managed by other postsecondary management boards, both public and private. Me. Rev. Stat. Ann. tit. 20-A, § 10902. States that one of the fundamental policies in the state’s public higher educational planning is to provide for a uniform system of transferring credits for equivalent courses among the various units of the University of Maine system. Me. Rev. Stat. Ann. tit. 20-A, § 10907. Requires the Chancellor of the University of Maine system to form a committee that shall, among other things, establish a uniform system to facilitate the transfer of credits for equivalent courses among the various units of the University of Maine system. Md. Code Ann., Education § 11-207. 
Lists among the duties of the Maryland Higher Education Commission the establishment of procedures for transfer of students between the public segments of postsecondary education and the establishment, in conjunction with the governing boards, of standards for articulation agreements. Mass. Gen. Laws ch. 15A, § 9. Gives the board of higher education the duty and power to, among other things, develop and implement a transfer compact for the purpose of facilitating and fostering the transfer of students without the loss of academic credit or standing from one public institution to another. Minn. Stat. Ann. § 135A.052. Recognizes as one of the missions of postsecondary institutions that community colleges shall offer lower-division instruction in occupational programs in which all credits earned will be accepted for transfer to a baccalaureate degree in the same field of study. Minn. Stat. Ann. § 136F.05. Requires the Minnesota State Colleges and Universities Board of Trustees to develop administrative arrangements that make possible the efficient use of the facilities and staff of the technical colleges, community colleges, and state universities so that students may have the benefit of improved and broader course offerings, ease of transfer among schools and programs, integrated course credit, coordinated degree programs, and coordinated financial aid. Minn. Stat. Ann. § 135A.08. Requires the regents of the University of Minnesota and the trustees of the Minnesota State Colleges and Universities to develop and maintain course equivalency guides for use by institutions that have a high frequency of transfer. The governing boards of private institutions that grant associate and baccalaureate degrees and that have a high frequency of transfer students are requested to participate in developing these guides. Mo. Rev. Stat. § 173.005. Requires the coordinating board for higher education to establish guidelines to promote and facilitate the transfer of students between institutions of higher education within the state. Neb. Rev. Stat. Ann. § 85-1413. Requires the Coordinating Commission for Postsecondary Education to incorporate into the comprehensive statewide plan for postsecondary education, among other things, the facilitation of statewide transfer-of-credit guidelines to be considered by institutional governing boards. Neb. Rev. Stat. Ann. § 85-963. Encourages the community college areas to work in cooperation with the University of Nebraska and the state colleges for the articulation of general academic transfer programs of the six community college areas. Nev. Rev. Stat. Ann. § 396.568. Requires that all credits earned by a student in a course at a community college within the system must be accepted and applied toward the coursework required of the student in his major or minor for the award of a baccalaureate degree upon graduation from any university or state college within the system if certain criteria are met. N.H. Rev. Stat. Ann. § 188-F:6. Requires the department of regional community-technical colleges and the university system of New Hampshire to develop mutually agreed upon transfer articulation agreements. N.J. Stat. Ann. § 18A:3B-8. 
Gives responsibility to the New Jersey Presidents’ Council to encourage the formation of regional or other alliances among institutions, including interinstitutional transfers, program articulation, cooperative programs and shared resources and the development of criteria for full faith and credit transfer agreements between county colleges and other institutions of higher education. N.M. Stat. Ann. § 21-1B-3. Requires the commission on higher education to establish and maintain a comprehensive statewide plan to provide for the articulation of educational programs and facilitate the transfer of students between institutions. The commission shall define, publish, and maintain modules of lower-division courses accepted for transfer at all institutions. N.M. Stat. Ann. § 21-1B-4. Requires each institution to accept for transfer course credits earned by a student at any other institution that are included in a transfer module. N.M. Stat. Ann. § 21-1B-5. Requires the commission on higher education to establish and maintain a process to monitor and improve articulation through frequent and systematic consultation with institutions. The commission shall establish a complaint procedure for transfer students who fail to receive credit and investigate all articulation complaints and render decisions as to the appropriateness of the actions of the participants. N.Y. Educ. Law § 351. Lists as one of the missions of the state university system to exercise care to develop and maintain a balance of its human and physical resources that promotes appropriate program articulation between its state-operated institutions and its community colleges as well as encourages regional networks and cooperative relationships with other educational and cultural institutions. 1995 Sess. Laws, c. 287, §§ 1-3. Provides for the development, by the Board of Governors of the University of North Carolina and the State Board of Community Colleges, of a plan for the transfer of credits among the institutions of the North Carolina Community College System, and between those institutions and the constituent institutions of the University of North Carolina, the intention of the General Assembly to adopt a plan for the transfer of credits, and the implementation, by the State Board of Community Colleges, of a common course numbering system. 1995 Sess. Laws, c. 625. Provides that the Board of Governors of the University of North Carolina and the State Board of Community Colleges shall develop a plan to provide students with information regarding the transfer of credits among community colleges and between community colleges and the University of North Carolina and shall develop a timetable for development of guidelines. Ohio Rev. Code Ann. § 3333.16. Requires the Ohio board of regents to establish policies and procedures applicable to all state institutions of higher education that ensure that students can begin higher education at any state institution of higher education and transfer coursework and degrees to any other state institution of higher education without unnecessary duplication; the board must also develop and implement a universal course equivalency classification system for state institutions so that the transfer of students and the transfer and articulation of equivalent courses are not inhibited by inconsistent judgment about the application of transfer credits. 
Coursework completed within such a system at one state institution of higher education and transferred to another institution shall be applied to the student’s degree objective in the same manner as equivalent coursework completed at the receiving institution. The board of regents shall develop a system of transfer policies that ensure that graduates with associate degrees shall be admitted to a state institution of higher education. The board of regents shall study the feasibility of credit recognition and transferability to state institutions of higher education for graduates who have received associate degrees from a career college. Okla. Stat. Ann. tit. 70, § 3207.1. States that the intent of the legislature is that credits earned by students in any institution of higher education within the Oklahoma State System of Higher Education be fully accepted at any other institution of higher education within the system. Or. Rev. Stat. § 348.470. Declares that it is the policy of the state to encourage cooperation between the Oregon University System and community colleges on issues affecting students who transfer between the two segments and that all unnecessary obstacles that restrict student transfer opportunities between the two segments shall be eliminated. 1997 Or. Laws ch. 653, § 1. Requires the State Board of Higher Education to continue to work with the State Board of Education to develop policies and procedures to ensure maximum transfer of academic credits between community colleges and state institutions of higher education. 24 Pa. Cons. Stat. Ann. § 15-1504-A. Requires the Department of Education and the Office of Administration to establish management teams to distribute funds appropriated for the researching, planning, and development of the Pennsylvania Education Network, which can include, when appropriate, implementing a Web-based application that makes all articulation agreements among higher education institutions available on the Internet. R.I. Gen. Laws § 16-45-1.1. Requires vocational programs to be organized for maximum articulation between educational levels. S.C. Code Ann. § 59-52-100. Requires the State Board of Technical and Comprehensive Education and the Council of College Presidents, through the Commission on Higher Education, to clarify and strengthen articulation agreements between associate degree programs and baccalaureate degree programs. S.D. Codified Laws § 13-53-43. Requires that all general education credit hours fulfilling graduation requirements in institutions accredited by the North Central Association of Colleges and Secondary Schools be transferable between the universities under the control of the South Dakota Board of Regents and the technical institutes governed by the South Dakota Board of Education. General education course credit hours are transferable between the technical institutes and universities only for credit for general education courses. Tenn. Code Ann. § 49-7-202. Requires the Tennessee Higher Education Commission to establish and ensure that all postsecondary institutions in Tennessee cooperatively provide for an integrated system of postsecondary education. The commission shall guard against inappropriate and unnecessary conflict and duplication by promoting transferability of credits and easy access of information among institutions. Tex. Educ. Code Ann. § 61.822. 
States that if a student successfully completes the core curriculum at an institution of higher education, that block of courses may be transferred to any other institution of higher education and must be substituted for the receiving institution’s core curriculum. A student shall receive academic credit for each of the courses transferred and generally may not be required to take additional core curriculum courses at the receiving institution. Tex. Educ. Code Ann. § 61.823. States that if a student successfully completes a field of study curriculum developed by the board, that block of courses may be transferred to a general academic teaching institution and must be substituted for that institution’s lower-division requirements for the degree program for the field of study into which the student transfers, and the student shall receive full academic credit toward the degree program for the block of courses transferred. Tex. Educ. Code Ann. § 61.831. States that it is the purpose of the statutory subchapter on transfer of credit to develop a seamless system of higher education with respect to student transfers between institutions of higher education, including student transfers from public junior colleges to general academic teaching institutions. Utah Code Ann. § 53B-6-105.5. Requires the Technology Initiative Advisory Board to provide the State Board of Regents with an assessment and reporting plan that includes an analysis of program articulation among higher education institutions in engineering, computer science, and related technology. Utah Code Ann. § 53B-16-105. Requires the Board of Regents to facilitate articulation and the seamless transfer of courses within the state system of higher education; develop, coordinate, and maintain a transfer and articulation system within the state system of higher education that allows students to transfer courses among institutions of higher education to meet requirements for general education and lower-division courses that transfer to baccalaureate majors and facilitates student acceleration and the transfer of students and credits between institutions; and identify common prerequisite courses and course substitutions for degree programs across all institutions of higher education. Va. Code Ann. § 23-9.6:1. Gives the State Council of Higher Education the duty, responsibility, and authority to facilitate the development of dual admissions and articulation agreements between 2- and 4-year public and private institutions of higher education in Virginia. Such agreements shall be subject to the admissions requirements of the 4-year institutions. Va. Code Ann. § 23-9.14:2. Requires the State Council of Higher Education to develop, in cooperation with the governing boards of the public 2- and 4-year institutions of higher education, a State Transfer Module that designates those general education courses that are offered within various associate degree programs at the public 2-year institutions that are transferable for credit or admission with standing as a junior to the public 4-year institutions. In developing such module, the council shall also seek the participation of private institutions of higher education. The council shall also facilitate the development of dual admissions and articulation agreements between the state’s public and private 2- and 4-year institutions of higher education, which are subject to the admissions requirements of the 4-year institutions. 
The council shall make public all general education courses offered at public 2-year institutions and designate those that are accepted for purposes of transfer for course credit at 4-year public and private institutions of higher education in Virginia. Wash. Rev. Code Ann. § 28B.45.014. Requires higher education branch campuses to collaborate with the community and technical colleges in their region to develop articulation agreements to ensure that branch campuses serve as innovative models of a two plus two educational system. Areas of collaboration include joint development of curricula and degree programs. Wash. Rev. Code Ann. § 28B.76.240. Requires the higher education coordinating board to adopt statewide transfer and articulation policies that ensure efficient transfer of credits and courses across public 2- and 4-year institutions of higher education. The intent of the policies is to create a statewide system of articulation and alignment between 2- and 4-year institutions. Wash. Rev. Code Ann. § 28B.76.2401. States that the statewide transfer of credit policy and agreement must not require or encourage the standardization of course content or prescribe course content or the credit value assigned by any institution to the course. Policies adopted by public 4-year institutions concerning the transfer of lower-division credit must treat students transferring from public community colleges the same as students transferring from public 4-year institutions. Wash. Rev. Code Ann. § 28B.76.250. Requires the higher education coordinating board to convene work groups to develop transfer associate degrees that will satisfy lower-division requirements at public 4-year institutions of higher education for specific academic majors. Each transfer associate degree developed under this section must enable a student to complete the lower-division courses or competencies for general education requirements and preparation for the major that a direct-entry student would typically complete in the freshman and sophomore years for that academic major. Completion of a transfer associate degree does not guarantee a student admission into an institution of higher education. Wash. Rev. Code Ann. § 28B.720. Requires the higher education coordinating board, in consultation with the state board for community and technical colleges and the council of presidents, to recruit and select institutions of higher education to participate in a pilot project to define transfer standards in selected academic disciplines on the basis of student competencies. Under the pilot project, participants shall develop standards, definitions, and procedures for quality assurance for a transfer system based on student competencies. W. Va. Code Ann. § 18B-2B-6. 
Lists among the powers and duties of the West Virginia Council for Community and Technical College Education to establish and implement policies and procedures to ensure that students may transfer and apply toward the requirements for a degree the maximum number of credits earned at any regionally accredited in-state or out-of-state higher education institution; to cooperate with the governor’s P-20 council of West Virginia to remove barriers relating to transfer and articulation between and among community and technical colleges, state colleges and universities, and public education, and to implement a policy jointly with the commission whereby any course credit earned at a community and technical college transfers for program credit at any other state institution of higher education and is not limited to fulfilling a general education requirement. Wis. Stat. Ann. § 36.11. Lists among the powers and duties of the board of regents to establish policies for the appropriate transfer of credits between institutions within the system, to establish policies for the appropriate transfer of credits with other educational institutions outside the system, and to establish and maintain a computer-based credit transfer system that shall include all transfers of credit between institutions within the system and other courses for which the transfer of credits is accepted. Wyo. Stat. Ann. § 21-16-602. Requires the Wyoming Education Planning and Coordination Council to facilitate cooperative arrangements among state education institutions in the sharing of facilities, personnel, and technology or otherwise assist in articulation between the institutions.

Cornelia M. Ashby, (202) 512-7215 or ashbyc@gao.gov.

Bryon Gordon, Assistant Director. In addition to those mentioned above, Elizabeth Bax, Richard Burkard, Sara Edmondson, Jonathan S. McMurray, John Mingus, James Rebbe, Walter Vance, and Ann T. Walker made significant contributions to this report.
Each year thousands of students transfer from one postsecondary institution to another. The credit transfer process, to the extent that it delays students' progress, can affect the affordability of postsecondary education and the time it takes students to graduate. Seeking information on the processes and requirements that postsecondary institutions have in place to assess requests to transfer academic credits, Congress asked GAO to examine (1) how postsecondary education institutions decide which credits to accept for transfer, (2) how states and accrediting agencies facilitate the credit transfer process, and (3) the implications for students and the federal government of students' inability to transfer credits. When deciding which credits to accept from transfer students, receiving institutions consider the sending institution's type of accreditation, whether academic transfer agreements with the institution exist, and the comparability of coursework. However, institutions vary in how they evaluate and apply a student's transferable credits. Many officials from postsecondary institutions with regional accreditation told GAO that they would not accept credits earned from nationally accredited institutions. To streamline the transfer process, most institutions have transfer agreements with other institutions that generally provide for the acceptance of credits from the other institution without further evaluation. In some instances, institutions review student credits--not rejected for other reasons, such as accreditation--to determine comparability to their academic offerings. State legislation, statewide initiatives, and the accreditation standards that accrediting agencies set help facilitate the transfer of academic credits from one postsecondary institution to another. Among other things, states support the establishment of statewide transfer agreements, common core curricula, and common course numbering systems. Accrediting agencies facilitate the transfer process through the standards they set. The accrediting agencies that GAO reviewed generally adhere to the principle that institutions should not accept or deny transfer credit exclusively on the basis of a sending institution's type of accreditation. A student's inability to transfer credit may result in longer enrollment, more tuition payments, and additional federal financial aid, but current data do not allow GAO to quantify its effects on the students or the federal government. Data are not available on the number of credits that do not transfer, making it difficult to assess the actual costs associated with nontransferable credits.
NRC is an independent agency of the federal government. Its five commissioners are nominated by the president and confirmed by the Senate, and its chairman is appointed by the president from among the commissioners. The current Chairman was sworn in as a commissioner in May 1995 and became Chairman that July. NRC’s mission includes ensuring that civilian use of nuclear materials in the United States—in the operation of nuclear power plants and in medical, industrial, and research applications—is done with adequate protection of public health and safety. NRC carries out its mission through licensing and regulatory oversight of nuclear reactor operations and other activities involving the possession and use of nuclear materials and wastes. Because it is impossible for NRC’s inspections to detect all potential hazards, NRC must also rely on nuclear licensee employees to help identify such problems. Actions taken to respond to employee concerns raised in the past have significantly contributed to improving safety in the nuclear industry. Although most employee concerns are raised directly to licensee managers and are resolved internally by licensees, employees may choose to bring allegations directly to NRC. An employee generally raises a concern with NRC if he or she is not satisfied with the licensee’s resolution of the concern or is not comfortable raising the concern internally. Employees may be discouraged from raising these issues internally if they believe their employer discriminates against those who do so. This phenomenon in the working environment is termed the “chilling effect.” Some observers believe that certain developments in the nuclear power industry increase the vulnerability of power plants to hazards, which would increase the importance of employee vigilance in noting and reporting hazards. For example, the electrical power industry may soon face deregulation, which would allow customers to choose a supplier and create competition in the industry that did not exist before. This has led to increased concern by NRC about safety because of the potential pressure on utilities to minimize operating costs. Preparation for deregulation has already resulted in downsizing at some nuclear plants and the closing of others because of their comparatively high operating costs. Furthermore, the nation’s over 100 nuclear power plants are aging (most were built before 1980), which puts them increasingly at risk for certain kinds of hazards. Labor administers a variety of laws affecting conditions in the nation’s workplaces, including laws to protect employees who report workplace hazards. OSHA’s responsibilities include investigating employee discrimination complaints under these laws, including the ERA. Investigations of employee discrimination cases are performed by a cadre of about 60 investigators. ERA cases make up a small percentage of the investigators’ workload. In response to complaints by employees who raised health and safety concerns that they were not being protected from discrimination, NRC has studied and reported on the employee protection system. In 1992, NRC’s OIG initiated a review to examine and better understand the nature of the complaints and the magnitude of this problem. In a July 1993 report, the OIG noted that employees who had raised concerns believed NRC did little to protect them from retaliation or to investigate in a timely manner their allegations of retaliation. 
In response to hearings before what was then the Subcommittee on Clean Air and Nuclear Regulation of the Senate Committee on Environment and Public Works, the NRC OIG issued a report in December 1993 that found NRC was primarily reactive to harassment and intimidation allegations and did not have a program to assess the work environment at licensees’ facilities except when serious problems occurred. On July 6, 1993, NRC’s Executive Director for Operations formed a review team to reassess NRC’s process for protecting against retaliation those employees who raise health and safety concerns. The review team solicited input from employees who had alleged discrimination, licensees, and the public and, in a January 1994 report, concluded that the existing NRC and Labor processes, as then implemented, did not provide sufficient protection to these employees. In addition, in a May 1993 report, the Labor OIG referred to the office responsible for preparing the Secretary of Labor’s final decisions as a “burial ground” for cases on which the Secretary and other Labor officials did not issue a final decision. The oldest 26 cases had been pending at this final stage for an average of 7.5 years, and there was a backlog of 178 cases—129 of them involving complaints under the several laws Labor enforces pertaining to discrimination of workers who raise health and safety concerns—that had been in that office for an average of 2.5 years. NRC has the overall responsibility for ensuring that the nuclear plants it licenses are operated safely. This entails informing licensees and individual employees about the discrimination prohibitions of the law and of the steps an employee can take if he or she feels unjustly treated, and ensuring that employees are comfortable raising health and safety concerns. Once an employee raises an allegation of discrimination or harassment, however, both NRC and Labor have roles in processing the allegation. Under the Atomic Energy Act, as amended, NRC may take action against the employers it licenses when they are found to have discriminated against individual employees for raising health and safety concerns. Accordingly, NRC has established a process for investigating discrimination complaints and, if appropriate, taking enforcement action against licensees. The ERA, as amended, authorizes the Secretary of Labor to order employers to make restitution to the victims of such discrimination, and Labor has instituted a process for investigating and adjudicating discrimination complaints. In 1982, NRC and Labor entered into a Memorandum of Understanding that recognized that the two agencies have complementary responsibilities in the area of employee protection. Under the Atomic Energy Act, NRC has implied authority to investigate cases in which an individual may have been discriminated against for raising health or safety concerns, and to take appropriate enforcement action against licensees for such discrimination. The act does not, however, specifically authorize NRC to order restitution, such as reinstatement or back pay, for an employee who has been subjected to discrimination. It was not until 1978, when the Congress enacted section 211 of the ERA, that statutory remedies were provided for individuals when discrimination occurs. Section 211 prohibits employers from discriminating against employees who raise health or safety issues to NRC or its licensees and authorizes the Secretary of Labor, after an investigation and an opportunity for a public hearing, to order restitution. 
According to Labor, restitution can include reinstatement of the complainant to his or her former position with back pay, if warranted; award of compensatory damages; payment of attorney fees; and purging personnel files of any adverse references to the complaint. The Secretary is required to complete an initial investigation within 30 days and issue a final order within 90 days of the filing of the complaint. Federal regulations allow for extensions, which, in effect, waive the 90-day time frame. In 1982, NRC issued regulations implementing section 211. These regulations notify licensees that discrimination of the type described in the law is prohibited and incorporate NRC’s implied authority to investigate alleged unlawful discrimination and take enforcement action, such as the assessment of civil penalties. The regulations also require licensees to post notices provided by NRC describing the rights of employees. As part of the Energy Policy Act of 1992, section 211 was amended to give employees more time to file a complaint, modify the burden of proof in Labor administrative hearings by requiring the complainant to show that raising a health and safety concern was a contributing factor in an unfavorable personnel practice, specifically protect employees who raise health or safety issues with their employers, and allow the Secretary of Labor to order relief before completion of the review process that follows an ALJ finding of discrimination. NRC and Labor recognized that in view of Labor’s complementary responsibilities, coordination was warranted. Consequently, Labor and NRC entered into a Memorandum of Understanding in 1982. Under the memorandum, NRC and Labor agreed to carry out their responsibilities independently, but to cooperate and exchange timely information in areas of mutual interest. In particular, Labor agreed to promptly provide NRC copies of ERA complaints, decisions, and orders associated with investigations and hearings on such complaints. NRC agreed to assist Labor in obtaining access to licensee facilities. Working arrangements formulated to implement the memorandum specified that NRC will not normally initiate an investigation of a complaint if Labor is already investigating it or has completed an investigation and found no violations. If Labor finds that a violation has occurred, however, NRC may take enforcement action. Normally, NRC considers Labor’s actions before deciding what enforcement action, if any, to take. The joint process for investigating discrimination allegations is shown in figure 1. A series of steps involving three components in Labor can lead to restitution for an employee discriminated against for raising health and safety concerns. A separate set of steps in NRC can lead to enforcement action against a licensee who discriminates. The three components in Labor’s allegation process perform the following activities. Settlements between the parties may occur at any point in this process and are often made to minimize the expense and time involved for both the employee and the licensee in continuing a case. (The actual times for these steps are discussed in the next section under timeliness standards.) OSHA: To receive restitution for being discriminated against by a licensee, an employee must file a complaint with OSHA within 180 days of the alleged discriminatory act. OSHA must complete the initial investigation within 30 days, under the law. 
However, under Labor procedures, when necessary and preferably with the agreement of both parties, the 30-day limit may be exceeded. If either party does not agree with the OSHA decision, it may be appealed to Labor’s Office of Administrative Law Judges (OALJ) within 5 calendar days. OALJ: Within 7 days of the appeal, the ALJ assigned to the case is to schedule a hearing. All parties must be given at least 5 days notice of the scheduled hearing. Federal regulations state that requests for postponement of the ALJ hearing may be granted for compelling reasons. The ALJ is required to submit a recommended decision within 20 days of the hearing. Office of the Secretary: The ALJ’s recommended decision is automatically reviewed by the ARB within the Secretary of Labor’s office. Either party may appeal the final Labor decision to the appropriate federal court of appeals within 60 days. Pursuant to the ERA, a final decision is not subject to judicial review in any criminal or other civil proceeding. For discrimination allegations filed directly with NRC or Labor, an NRC review panel, located in each regional office and headquarters, decides whether to request an investigation by NRC’s Office of Investigations. The Investigations staff, in coordination with the regional administrator, decides the case’s priority and whether they will do a full investigation. If Investigations determines that a violation occurred, or if a final determination of discrimination is received from Labor, NRC assesses the violation in accordance with its enforcement policy, which defines the level of severity and the appropriate sanction. Severity levels range from severity level I for the most significant violations to severity level IV for those of lesser concern. Minor violations are not subject to formal enforcement actions. One factor that determines the severity of a discrimination violation is the organizational level of the offender. For example, discrimination violations by senior corporate management would be severity level I, whereas violations by plant management above the first-line supervisor and by the first-line supervisor would be severity levels II and III, respectively. Another factor that might determine severity level is whether a hostile work environment existed. There are three primary enforcement actions available to NRC: Notice of Violation, civil penalty, and order. The Notice of Violation is a written notice used to formalize the identification of one or more violations of a legally binding requirement. The civil penalty is a monetary fine. Orders modify, suspend, or revoke licenses or require specific actions of the licensee. Complaints by current and former nuclear licensee employees about, among other things, the allegations process led NRC and Labor to study the system for protecting employees who raise health or safety concerns. In response to recommendations and concerns raised in NRC’s January 1994 review team report and NRC and Labor OIG reports, many changes have been made in an effort to improve the employee protection system. Employees we spoke with who had made allegations of discrimination for raising safety issues generally supported these changes to improve protection. However, several recommendations that could significantly improve protection, and the perception of protection, for employees have not been implemented. 
Many of the implemented recommendations from these studies led to actions at NRC to improve monitoring of cases, expand communication with employees about their cases, and increase the agency’s involvement in allegation investigations; they also led to changes at Labor to improve its timeliness in processing allegation cases. These recommendations addressed concerns expressed by many of the allegers we interviewed. Regarding case monitoring, NRC has designated a full-time, senior official to centrally coordinate allegation information from NRC and Labor, and oversee the management of and periodically audit the allegation process at NRC. NRC established the position of Agency Allegation Advisor in February 1995, and since then, two rounds of audits of the allegation process have been completed. In September 1996, the Agency Allegation Advisor issued the first annual report on the status of the allegation system, which addressed issues previously identified through audits and data gathered on allegations. These actions give NRC a focal point for gathering and publishing information on how its allegation process is working and enable it to recognize problems. Some recommendations implemented by NRC should improve communication. One of these recommended improving feedback to employees on the status of their cases. As of May 1996, new procedures established time frames for NRC to periodically report case status to employees. The procedures required NRC to inform the alleger in writing of the status of his or her case within 30 days of NRC’s receipt of the allegation, every 6 months thereafter, and again within 30 days of completing the investigation. NRC has also established a hotline through which employees can report problems and issued a policy statement emphasizing the importance of licensees maintaining an environment in which employees are comfortable raising health and safety concerns. These new procedures address issues allegers raised with us about not being informed on the status of their cases. However, some allegers told us that because the policy statement is directed only at the licensees’ responsibilities for maintaining a good work environment and does not include specific responsibilities for NRC, it is not adequate. To increase NRC’s involvement in the allegation process, the January 1994 study recommended that NRC revise the criteria for selecting complaints to be investigated in order to expand the number of investigations. Before October 1993, NRC had investigated few discrimination complaints and usually waited for the Labor Secretary’s final decision, which generally took longer than an NRC investigation, before taking enforcement action. In October 1993, NRC Investigations’ policy was changed to require that field offices open a case and conduct an evaluation of all matters involving discrimination complaints, regardless of Labor’s involvement. In April 1996, NRC issued a policy statement directing its Office of Investigations to investigate all high-priority allegations of discrimination, whether the Labor Secretary’s final decision has been made or not, and to devote the resources necessary to complete these investigations. As a result, the number of high-priority investigations NRC opened has increased significantly. By applying the new criteria, the percentage of cases opened that were high priority increased from 37 percent in May 1996 to 81 percent in July 1996. 
These actions should address the dissatisfaction employees expressed to both NRC’s OIG and us about NRC’s lack of involvement in the investigation of cases. However, NRC has identified a need for more resources at the Office of Investigations to handle the greater number of investigations, and as of December 1996, this need had not been addressed. Therefore, it is unclear whether the investigations can be completed as quickly as hoped. Labor has also improved its timeliness in processing cases, as recommended in the Labor OIG’s May 1993 report. Labor has eliminated a backlog of cases awaiting decision in the Office of the Secretary and has developed and implemented a management information system to monitor case activity. Since these changes were implemented, the average time for the Secretary’s office to decide cases has been reduced from about 3 years in fiscal year 1994 to about 1.3 years in fiscal year 1996. A Labor official told us that as of December 1996, the average case took only about 4 months to clear the Office of the Secretary, due partially to the elimination of the backlog. In addition, to better use program expertise, Labor has transferred responsibility for investigation of allegation cases from the Wage and Hour Division to OSHA, which has a staff with experience investigating allegations of discrimination against employees who raise health and safety concerns. The Assistant Secretary of Labor for Employee Standards commented that the reassignment of initial investigations from Wage and Hour to OSHA was primarily part of an exchange of responsibilities. Prior to the reassignment, OSHA had responsibility for the employee protection, or “whistleblower,” provisions of certain laws and the staff devoted to the enforcement of these provisions. The Wage and Hour Division was responsible for certain employee protections affecting farm workers and would be able to make field sanitation inspections as part of its regular investigations. These responsibilities were exchanged in order to better use program expertise and promote effective and efficient use of resources. This transfer was effective February 3, 1997. In spite of NRC’s and Labor’s overall responsiveness to the reports’ recommendations, some recommendations that address concerns raised not only by the NRC review team but also by other NRC staff, the OIG, and allegers we interviewed have not yet been implemented. Some recommendations, which could be implemented through administrative procedural changes, could significantly improve the system; these address timeliness standards, case monitoring, and NRC’s knowledge of the employee environment in licensees’ facilities. Other recommendations, which require statutory changes or are controversial as to their effectiveness, have also not been implemented. When allegation cases take several years to complete, significant negative effects accrue. Lengthy cases increase attorney fees, prolong the time an employee may be out of work, and have a chilling effect on other employees. Under past policies, which provided for few NRC investigations, long cases delayed NRC’s ability to impose enforcement actions as NRC waited for Labor decisions. Some cases that allegers have filed have continued for over 5 years, and during that time the employee may be out of work, paying attorney fees, and exhausting his or her financial resources. 
Furthermore, the January 1994 NRC report noted that delays in processing cases at the Office of the Secretary of Labor had, in some cases, prevented NRC from taking enforcement action against licensees because the time limits under the statute of limitations had run out. The Labor OIG report recommended that Labor establish a timeliness standard for the issuance of Secretary of Labor decisions and conduct an analysis to determine operational changes and resources necessary to meet the new standard. Establishing a standard was intended to provide a means to objectively measure Labor’s performance during the final step of its process and help meet legal requirements and customer service expectations. In September 1995, in its closing comments on this review, the OIG stated that Labor would need time to develop data on which to base a realistic timeliness standard and that the standard would be developed in the future when the data are available. A Labor official told us the standard is now being developed and that Labor expects to have a standard soon, although no date for implementation has been established. According to the Chairman of the ARB, the ARB is continuing to work on putting procedures in place to collect data that could be used to establish a standard. In addition, the NRC review team report recommended that Labor develop legislation to amend the law to establish a realistic timeliness standard for the entire Labor process. As of December 1996, NRC was drafting legislation for Labor’s approval that would establish a new timeliness standard of 480 days to complete the Labor process. This would allow 120 days for the administrative investigation, 30 days to appeal the decision to the OALJ, 240 days for the OALJ to recommend a decision, and 90 days for a final decision from the Secretary. According to NRC, the rationale for proposing more realistic timeliness standards is that achievable standards create a stronger incentive to comply than standards that normally cannot be met. These proposals were based on comparisons with baseline data from investigations done under other related statutes and proposed legislation considered in the 101st Congress. For example, the review team reported that OSHA investigations under other employee protection statutes took, on average, 120 days. Labor officials have indicated that they would support this legislation. Our review of processing times in each of Labor’s three offices showed that meeting the new standards would require a significant change in how these cases are processed. For cases processed in fiscal year 1994 through the first 9 months of fiscal year 1996, none of the three offices met the proposed time frames for all of its cases. For 164 cases investigated by the Wage and Hour Division during this period, only 16 percent of the investigations were completed within the 30 days currently mandated by law and an additional 46 percent would have met the proposed time frame of 120 days. (See fig. 2.) These investigations took an average of 128 days, with a range of 1 day to over 2 years, to complete. OSHA officials said that during the pilot study for transferring the initial investigative responsibility to their office from Wage and Hour, they found it very difficult to meet the 30-day mandate and had to ask for extensions in several cases. During this same period, 56 percent of OALJ’s recommended decisions and orders would have met the proposed time frame of 240 days. 
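To make the proposed standard concrete, the following minimal sketch (in Python, with hypothetical stage names and case data; only the per-stage day limits come from the proposal described above) shows how a case's stage durations could be checked against the proposed 480-day standard and its components.

```python
# Minimal sketch (hypothetical): checking a case's stage durations against the
# proposed Labor timeliness standard described above. The per-stage limits
# (120, 30, 240, and 90 days) sum to the overall 480-day standard.

PROPOSED_LIMITS_DAYS = {
    "administrative_investigation": 120,
    "appeal_to_oalj": 30,
    "oalj_recommended_decision": 240,
    "final_decision_by_secretary": 90,
}

assert sum(PROPOSED_LIMITS_DAYS.values()) == 480  # overall proposed standard


def check_case(stage_durations_days: dict) -> dict:
    """Return, for each stage, whether the observed duration met the proposed limit."""
    return {
        stage: stage_durations_days.get(stage, 0) <= limit
        for stage, limit in PROPOSED_LIMITS_DAYS.items()
    }


# Example: a hypothetical case whose OALJ step ran 271 days (the average
# reported for the period reviewed) would miss the 240-day component.
example = {
    "administrative_investigation": 128,
    "appeal_to_oalj": 5,
    "oalj_recommended_decision": 271,
    "final_decision_by_secretary": 90,
}
print(check_case(example))
```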
OALJ took an average of 271 days (9 months) to issue 118 recommended decisions and orders. The time for these decisions ranged from less than 30 days to over 3 years. Currently, there is no time frame specifically for the OALJ step of the process. Even though the act provides for a 90-day time frame for moving from initial investigation to a final decision, extensions were requested by the parties in virtually all cases we reviewed. One reason for this is that the OALJ hearing is de novo—it essentially starts the process over again because it does not consider the results of the Wage and Hour investigation. In addition, Labor officials told us that these extensions were necessary to allow additional time for discovery and review of evidence by legal counsels of both parties in preparation for the hearing. In commenting on a draft of this report, Labor’s Chief Administrative Law Judge stated that 240 days is an achievable goal if the following factors are addressed: establishment of a mechanism to extend the time frame in appropriate circumstances, recognition that existing case law conflicts with a strict time limit on discovery and hearing, and availability of adequate staff. For the final step in the process, our data showed significant improvement in the time it took to obtain decisions from the Secretary of Labor, but even in the most recent year we analyzed, only 37 percent would have met the proposed 90-day time frame. (See fig. 4.) The average time to decide 217 cases in the Secretary’s office decreased from about 3.3 years in fiscal year 1994 to about 1.3 years (16 months) in fiscal year 1996. In commenting on a draft of this report, the Chairman of the ARB noted that the current policy gives the parties 75 days to file all the briefs. In most cases, an extension is requested by at least one of the parties. Therefore, in his opinion, a 90-day timeliness standard is unrealistic unless ARB severely restricts the parties’ ability to properly brief the issues presented. Both monitoring of individual cases and monitoring trends in allegations are important oversight activities. Monitoring the individual cases as they progress is a way to determine whether cases are being resolved in a timely way. Monitoring trends in allegations would help NRC’s Agency Allegation Advisor in overseeing the system’s effectiveness. The NRC report recommended that NRC improve its Allegation Management System to be able both to monitor allegations from receipt to the completion of agency action and to analyze trends. It could also help improve agency responsiveness, such as when monitoring reveals sudden increases in the time for cases to be resolved, and help identify licensees who may warrant closer scrutiny, such as a licensee that shows a sharp increase in the number of cases against it or settled by it. NRC agrees with the recommendation and has implemented a new system in its regional offices and in the two headquarters offices with direct regulatory oversight, which officials say will have the capability to track cases through each step of the process. However, at the time of our review, the system did not yet include data from the Offices of Investigations and Enforcement, nor did it include on-line Labor investigation data. Our findings highlight the need for the data tracking system to include the period of time that a case is at Labor. For example, Labor has separate databases and case identifiers at Wage and Hour and OALJ, and the cases cannot easily be matched. 
As a result, neither Labor nor we can describe the total time it takes cases to be resolved at Labor. In addition, of the 217 cases for which the Secretary of Labor had made a final determination, 22 had no such decision recorded in NRC files. While only one of these cases resulted in a decision of discrimination, this is significant because NRC’s policy is to hold open its enforcement action on complaints until notified that the Secretary has made a final determination. However, without an NRC investigation or an ALJ finding of discrimination, the 5-year limit on civil penalties could be exceeded. NRC officials told us that they have contacted Labor and requested copies of the 22 decisions to update their files. The number of settlements found in our analysis also underscores the significance of the NRC review team report’s recommendation that NRC should track trends in cases closed with a settlement without a finding of discrimination. NRC currently has no systematic way of knowing the extent to which settlements are made by individual licensees or when in the process they occur. Yet, our data showed that numerous settlements occurred at all steps in the process: Wage and Hour settled 22 of its 164 cases; the OALJ recommended settlement approval for 49 of the 118 cases on which it issued recommended decisions; and the Secretary of Labor approved settlements in 74 of the 217 allegations on which final decisions were issued. Labor’s policy is to attempt to conciliate allegations in every case; only if conciliation fails does it proceed with a fact-finding investigation. NRC acknowledges that employee identification of problems is an important part of its system to ensure nuclear power plant safety. NRC also recognizes that the perception of discrimination may be even more important than actual findings in terms of affecting employees’ willingness to report health and safety concerns. Therefore, NRC needs not only factual findings of discrimination but also a way to measure employee perception of discrimination. NRC’s December 1994 OIG report, however, noted that although NRC’s management of discrimination issues focuses on encouraging licensees to foster a retaliation-free work environment, NRC has no program to assess licensees’ work environments except when a serious problem such as a discrimination suit has already occurred. At about the same time, NRC’s review team also concluded that NRC did not have a quantitative understanding of the number of employees who were hesitant to raise these kinds of concerns. Consequently, the review team commissioned Battelle Human Affairs Research Center to study methods for credibly assessing employee feelings about raising health and safety concerns. The Battelle study recommended a three-part strategy for development, implementation, and follow-up validation of the results of a mail-out workforce survey of a sample of nuclear power plants. This approach was then reflected in the NRC review team report’s recommendation that NRC develop a survey to assess a licensee’s work environment. The review team report’s recommendation was prompted, in part, by its recognition of the limitations of some of the assessments NRC had done in the past, such as one-on-one interviews of licensee employees conducted by NRC inspectors. The problem with having NRC inspectors conduct such interviews was illustrated by a September 1996 NRC-chartered study of how employee concerns and allegations are handled at the Millstone power plant. 
This study concluded that NRC inspectors, in general, understated the extent of the chilling effect at plants and therefore are not qualified to independently detect or assess the work environment at licensee facilities. The Millstone report concluded that NRC’s efforts to gain information on the work environment had not been effective and furthermore cited NRC’s failure to develop a credible survey instrument as one example of the lack of progress toward this end that has lowered public confidence in NRC’s commitment to improve its performance in addressing employee concerns. Nevertheless, NRC’s September 1996 annual report on the status of the allegation system stated that NRC had decided not to implement the recommendation to develop a survey instrument. The report cited a staff recommendation made in November 1994 to not develop a survey because of the cost to develop and process it and the expectation that other actions implemented as a result of the review team report would yield the needed information on work environment. Because employees’ feelings about how NRC handles its allegations process would also affect their willingness to raise health or safety concerns, the review team report recommended that NRC develop a standard form and include it with alleger close-out correspondence to solicit feedback from employees on the way NRC handled their allegations. NRC developed the form and conducted a pilot in December 1995 in which it sent the form to 145 employees; it received feedback from 44. It analyzed comments and acted to address concerns raised. An NRC official said the agency plans to again send the form in 1997 to another sample of employees. After analyzing the 1997 responses, NRC will decide whether to routinely include the form in all close-out correspondence and thereby fully implement the recommendation. In addition, when a finding of discrimination results from an administrative investigation at Labor, NRC issues a “chilling effect” letter asking the licensee to describe actions it has taken or plans to take to remove any chilling effect that may have occurred. The review team and OIG reports both noted that NRC does little follow-up on the actions reported by licensees in response to these letters. This follow-up is necessary not only to verify a licensee’s actions but also to enable NRC to learn the effect of the discrimination finding on the plant’s work environment. Both reports also noted that guidance is needed on when additional NRC action may be necessary if a licensee receives more than one chilling effect letter over a relatively short period of time because this may indicate a serious problem at the plant. NRC has issued guidance that each chilling effect letter should carry an enforcement number so that it can be tracked, but systematic tracking is not currently done. NRC has not developed guidance on how it will follow up on licensee actions or on what actions it should take when a licensee receives multiple chilling effect letters. NRC officials told us they intend to fully implement the recommendation to establish follow-up procedures for chilling effect letters, but they have no schedule for doing so. Allegers and agency officials expressed strong concern about the financial burden on employees in the current protection process. They attributed this burden to the extensive time it took to obtain a final decision, during which the alleger must pay attorney fees and, in some cases, go without pay. 
One NRC review team report recommendation would provide relief through a statutory change requiring Labor to defend its findings of discrimination from the initial investigation at the ALJ hearing if Labor’s decision is appealed by the employer. The review team noted that this would avoid the perception that the government is leaving the employees to defend themselves after being retaliated against for raising health and safety concerns. After soliciting comments in the Federal Register in March 1994 on a proposal to do by regulation what the recommendation proposed be done by statute, Labor stated again in a March 26, 1996, letter to NRC that it supports having this authority. But Labor also stated that because of the resources needed to meet this added responsibility, if it is granted, Labor expects to exercise this authority selectively and cautiously. The NRC review team report also recommended that the law be amended to allow employees to be reinstated to their previous positions after the initial investigation finds discrimination, even if the case is appealed to the OALJ. Currently, section 211 provides that Labor may order reinstatement following a public hearing. As of January 1997, NRC was drafting legislation that would implement this recommendation. In addition, the review team report recommended that, in certain cases, NRC should ask the licensee to provide the employee with a holding period that would maintain or restore pay and benefits until a finding is issued. A holding period would basically maintain current pay and benefits for the period between the filing of a discrimination complaint and an initial administrative finding by Labor. NRC ultimately decided not to require licensees to establish holding periods. However, a May 1, 1996, policy statement on licensees’ responsibilities for maintaining a safety-conscious work environment stated that if a licensee does provide a holding period, NRC would consider such action as a mitigating factor in any enforcement decisions if discrimination is found to have occurred. Allegers we interviewed generally had mixed responses to the holding period recommendation. Although they generally supported the financial relief that would be provided, some expressed concern that licensees could misuse the holding period to remove an employee from operational duties when this is not warranted. Both the report and allegers believed safeguards should be established for the proper implementation of this recommendation. Licensees likewise had reservations about being required to retain an employee who could later be found to be justifiably dismissed. While NRC officials told us the agency is considering requesting the holding period under some conditions, the original position not to implement the recommendation has not changed. The NRC review team report recommended that NRC seek an amendment to the Atomic Energy Act to increase the civil penalty from $100,000 to $500,000 a day for each discrimination violation. The maximum penalty in effect at the time of the report was $100,000, established in 1980. This recommendation was meant to make the civil penalty a more effective deterrent to licensee discrimination. In May 1994, NRC ordered a review of the agency’s enforcement process, part of which focused on civil penalty increases in the context of enforcement. 
This review concluded that increasing incentives for strong self-monitoring and corrective action programs would be better accomplished by revising the overall civil penalty assessment process than by raising the penalty amounts and that therefore no increase was needed. Recommendations made by the review team report to revise the assessment process were accepted and implemented through agency directives. NRC agreed with the report’s conclusion and decided not to seek an increase in civil penalties. Allegers and some others we interviewed agreed with the review team report that a $100,000 penalty was not an effective deterrent. They had mixed opinions, however, as to whether even an increase to $500,000 would be a sufficient deterrent. Some said the only sanction that really had an impact on licensees was shutting down a plant. Others said that negative publicity had a stronger impact than a civil penalty. The review team report also recommended that NRC make the penalty for all willful violations equal to the penalty currently reserved for the most severe violations. For example, under current procedures, discriminatory actions by a first-line supervisor are considered lesser violations, and receive lesser penalties, than violations that involve a higher level manager, even if they are found to be willful violations. For the same reasons cited for not requesting an increase in civil penalties, NRC decided not to implement this recommendation. The joint NRC and Labor process for resolving allegations of discrimination by nuclear licensees against employees who raise health and safety concerns is intended to discourage discrimination, thereby fostering an atmosphere in which employees feel free to report hazards. But it is unrealistic to expect employees to raise such issues if they believe they may be retaliated against for doing so, the process for seeking restitution will be expensive and lengthy, and they will receive minimal attention and support from the federal government. In response to these concerns, both NRC and Labor have acted on OIG and agency recommendations to enhance their management of nuclear employee discrimination cases. The resulting changes should improve monitoring of the process, increase NRC involvement, and augment licensees’ responsiveness to employee concerns. However, recommendations that would establish standards for timely decisions, permit monitoring of individual cases from start to finish and assessment of overall trends, and enable NRC to measure the work environment at nuclear plants for raising concerns have not been implemented. Improvements in the timeliness of decisions would not only help ensure that employees feel more comfortable in reporting hazards and expedite information to NRC for enforcement actions, but also decrease the financial burden on allegers. At this point, it is unclear whether the time standard recommended by NRC would decrease that burden sufficiently or whether other recommendations for decreasing the financial burden would also need to be implemented to address allegers’ concerns. Nevertheless, establishing and meeting some standard that prevents cases from languishing for many years would greatly improve the present system. Many changes made by NRC were intended to increase its involvement in the protection system and to make the agency proactive in its role. In order to do this, NRC needs more knowledge of the process than it has had in the past. 
For example, the Agency Allegation Advisor needs a revised tracking system that will monitor trends so that the agency can address problems suggested by those trends. Although this revised tracking system was recommended over 3 years ago and NRC has begun its implementation, the system still does not incorporate vital elements. These elements include current data on cases in the Labor process, data on all settled cases, and information on NRC headquarters inspection and enforcement. It is crucial that NRC management follow through to full implementation of this system so that it can develop trend data for better monitoring and make better-informed decisions on investigations and enforcement actions. Including the Labor data, however, will also require commitment from Labor as well as NRC, and effective coordination between the two agencies. Because information from employees on health and safety problems is critical for NRC to ensure public safety, NRC must know whether employees at nuclear plants are comfortable raising such concerns. Determining the existence of a perception is not an easy task and may require the use of more than one method of gathering information to obtain such knowledge. Several methods, including surveying, developing indicators to flag possible problems, tracking cases and settlements in individual plants, using feedback forms to find out how employees believe their allegations have been handled, and following up on chilling effect letters have been recommended to NRC, but none of these methods have been implemented to date. To improve the timeliness of Labor’s allegations processing, we recommend that the Secretary of Labor establish and meet realistic timeliness standards for all three steps in its process for investigating discrimination complaints by employees in the nuclear power industry. To improve NRC’s ability to monitor the allegation process, we recommend that the Chairman, NRC, complete implementation of the NRC review team’s recommendation to establish and operate the revised Allegation Management System in all organizational components within NRC. We also recommend that the Chairman, NRC, and the Secretary of Labor coordinate efforts to ensure that NRC’s Allegation Management System includes information on the status of cases at Labor. To improve NRC’s knowledge of the work environment at nuclear power plants, we recommend that the Chairman, NRC, ensure the implementation of recommendations to provide information on the extent to which the environment in nuclear plants is favorable for employees to report health or safety hazards without fear of discrimination. This would include recommendations on tracking and monitoring allegation cases and settlements, routinely providing feedback forms in allegation case close-out correspondence, systematically following up on chilling effect letters, and using a survey or other systematic method of obtaining information from employees. In commenting on a draft of this report, NRC’s Executive Director for Operations stated that the report presents an accurate description of the process for handling discrimination complaints and of NRC’s efforts to improve in this area. He also provided some specific concerns and observations and clarified several technical matters in the draft report. NRC’s comments did not address the recommendations included in the report. NRC’s comments appear in appendix IV. We did not receive comments from the Secretary of Labor on our draft report. 
The Chairman of the ARB, Labor's Chief Administrative Law Judge, the Assistant Secretary for Employment Standards, and a senior program official in OSHA did, however, provide comments. Comments by these officials addressed the report's recommendations about Labor's timeliness standards only from the perspective of their individual offices.

The Chairman of the ARB stated that the ARB, as a first step in establishing performance standards, is currently working with union officials to overcome the concern that tracking the date an attorney begins work on a case may constitute an attorney time-keeping requirement. He expects to resolve this concern soon. The Chairman added that the suggested timeliness standard of 90 days for ARB to review ERA cases is not realistic unless the Board severely restricts the parties' ability to properly brief the issues presented. ARB's comments appear in appendix V.

Labor's Chief Administrative Law Judge stated that our draft report appeared to provide a fair assessment of NRC's and Labor's handling of ERA cases. He agreed that the suggested timeliness standard of 240 days for ALJs to hear a case and issue a recommended decision is a reasonable benchmark, but stated that, in designing any legislation or regulation to implement the benchmark, several factors should be addressed: (1) in appropriate circumstances, there must be provisions to extend the time limit, (2) existing case law conflicts with a strict time limit on discovery and hearing, and (3) timeliness standards are only reasonable if the responsible agency has adequate staff. He also pointed out that ALJs are currently directed to provide NRC information on ERA discrimination cases, that information on all ALJ decisions is available on the OALJ Home Page on the World Wide Web, and that, if requested, OALJ will work with NRC to improve its monitoring program. OALJ's comments on our draft report appear in appendix VI.

The Assistant Secretary of Labor for Employment Standards commented that the reassignment of initial investigations from the Wage and Hour Division to OSHA was primarily part of an exchange of responsibilities. Before the reassignment, OSHA had responsibility for the employee protection, or "whistleblower," provisions of certain laws and the staff devoted to the enforcement of these provisions. Wage and Hour was responsible for certain employee protections affecting farm workers and made field sanitation inspections as part of its regular investigations. These responsibilities were exchanged in order to better use program expertise and promote effective and efficient use of resources. The Assistant Secretary also clarified several technical matters in the draft report. The Employment Standards Administration's comments on our draft report appear in appendix VII.

A senior OSHA headquarters official responsible for overseeing OSHA investigations of employment discrimination commented that, since OSHA had only recently been assigned responsibility for conducting these investigations, our report should state that almost all the initial Labor investigations discussed were conducted by the Wage and Hour Division. We have considered these comments and revised our report as necessary.

As agreed with your office, we will make no further distribution of this report until 15 days from the date of this letter. At that time, we will send copies to interested congressional committees, the Secretary of Labor, and the Chairman of NRC. We will make copies available to others on request.
If you have questions about this report, please call me on (202) 512-7014. Other GAO contacts and staff acknowledgments are listed in appendix VIII. To determine the legal protection afforded employees in the nuclear power industry who claim they have been discriminated against for raising health or safety concerns, we reviewed the employee protection provisions of the Energy Reorganization Act (ERA), as amended, and the Atomic Energy Act of 1954. We also examined the legislative history of these provisions. We examined federal regulations relating to Labor’s handling of employee complaints under the ERA, and to NRC’s protection of employees from discrimination by licensees. We also examined the appropriate sections of NRC’s and Labor’s procedure manuals and management directives. We discussed the provisions of these laws and regulations with NRC officials in headquarters and NRC regions I, II, and IV and with Labor officials in headquarters and in the Philadelphia, Atlanta, and Dallas regions. Finally, we obtained and examined regional directives for the management of allegation cases from the three NRC regional offices we visited. We asked NRC and Labor officials, as well as employees who had filed discrimination complaints, licensees, and attorneys who represented them, to identify studies of the process for resolving cases of alleged discrimination. We reviewed those generally acknowledged to be the major studies related to the process. We discussed the status of the recommendations included in these reports with cognizant officials in Labor and NRC and examined available documentary support. We did not independently assess the merit of specific recommendations made in these reports nor audit actual agency implementation of the recommendations. In order to measure the effects of the recommendations on the timeliness of the system, we gathered information on cases closed at each stage of Labor’s process between October 1993 and June 1996. We chose to begin our analysis with October 1, 1993, since that would cover the impact of changes made to the process as a result of the studies we reviewed. Furthermore, NRC’s OIG had already reported on cases through April 1993. Specifically, we selected and analyzed the cases as follows: We obtained automated records from the Wage and Hour Division in Washington, D.C., on all “whistleblower” cases closed between October 1, 1993, and February 28, 1996. We did not independently validate the accuracy or completeness of these records. Since we could not always determine the whistleblower laws under which discrimination complaints were filed, we asked Labor to contact field personnel to identify the cases filed under the ERA. We later obtained data covering a more recent period—March 1, 1996, through June 30, 1996. We also obtained data on 11 ERA cases investigated by OSHA investigators in a pilot project during this period. We obtained a listing of all ERA cases that had received a recommended order between October 1, 1993, and June 30, 1996. We reviewed the timeliness and outcomes of these cases using information posted by the Office of Administrative Law Judges on the World Wide Web. We compiled a listing of all cases that had received a Secretary of Labor decision by using information provided by Labor and NRC for the same period. In addition, we discussed with numerous knowledgeable individuals issues concerning protection of nuclear power industry employees who have raised safety concerns. 
We spoke with Labor and NRC officials, both in headquarters and in the field, who had responsibilities relevant to the discrimination complaint process. To obtain the perspective of employees and licensees, we visited two nuclear power plants and, at those facilities and elsewhere, spoke with (1) 10 nuclear industry employees who had filed discrimination complaints with Labor, NRC, or both, including members of the National Nuclear Safety Network; (2) 8 attorneys who have represented employees and licensees in the process; (3) officials of 3 nuclear licensees that have been the subject of numerous discrimination complaints; and (4) officials of the Nuclear Energy Institute, a nuclear power industry association. We performed our work between January and December 1996 in accordance with generally accepted government auditing standards.

This appendix lists the recommendations from NRC's January 7, 1994, report, Report of the Review Team for Reassessment of the NRC's Program for Protecting Allegers Against Retaliation, and the agency action taken on each. The recommendations have been divided into three categories: implemented, partially implemented, and not implemented. The recommendations are identified with the same number used in the NRC report, to allow for cross-referencing. Recommendation II.A-1 proposed that the Commission issue a policy statement on employees' freedom to raise concerns without fear of retaliation; a final policy statement implementing this recommendation was published in the Federal Register on May 1, 1996. The Commission policy statement proposed in recommendation II.A-1 should include the following: licensees should have a means for employees to raise issues internally outside the normal process, and employees (including contractor employees) should be informed how to raise concerns through the normal processes, alternative internal processes, and directly to NRC. The final policy statement implementing this recommendation was published in the Federal Register on May 1, 1996. Regulations in 10 C.F.R. part 19 should be reviewed for clarity to ensure consistency with the Commission's employee protection regulations. A final rule revising 10 C.F.R. part 19 was issued in February 1996. The policy statement proposed in recommendation II.A-1 should emphasize that licensees (1) are responsible for having their contractors maintain an environment in which contractor employees are free to raise concerns without fear of retaliation and (2) should incorporate this responsibility into applicable contract language. The final policy statement implementing this recommendation was published in the Federal Register on May 1, 1996. NRC should incorporate consideration of the licensee environment for problem identification and resolution, including raising concerns, into the Systematic Assessment of Licensee Performance process. The final revised Management Directive 8.6, which was issued on January 27, 1995, includes consideration of the work environment in the Systematic Assessment of Licensee Performance process. However, an independent agency team that reviewed NRC actions at the Millstone plant looked at the results of NRC inspections on work environment and reported that NRC inspectors generally are not qualified to assess the work environment and that, therefore, the results of these assessments were not reliable. NRC should develop inspection guidance for identifying problem areas in the workplace where employees may be reluctant to raise concerns or provide information to NRC. This guidance should also address how such information should be developed and channeled to NRC management.
NRC Inspection Procedure 40500 was revised accordingly in October 1994. Allegation follow-up sensitivity and responsiveness should be included in performance appraisals for appropriate NRC staff and managers. The elements and standards in NRC’s employee performance appraisals were revised to implement this recommendation as of October 1995. NRC should place additional emphasis on periodic training for appropriate NRC staff on the role of allegations in the regulatory process, and on the processes for handling allegations. Refresher training has been required annually since May 1996. NRC should develop a readable, attractive brochure for industry employees. The brochure should clearly present a summary of the concepts, NRC policies, and legal processes associated with raising technical and harassment and intimidation concerns. It should also discuss the practical meaning of employee protection, including the limitations on NRC and Labor actions. In addition, NRC should consider developing more active methods of presenting this information to industry employees. The brochure was issued in November 1996. Management Directive 8.8 should include specific criteria and time frames for initial and periodic feedback to allegers, in order to measure consistent agency practice. The criteria and time frames were incorporated in Management Directive 8.8 as of May 1, 1996, and audits have been conducted to ensure compliance. NRC should designate a full-time senior individual for centralized coordination and oversight of all phases of allegation management as the Agency Allegation Manager, with direct access to the Executive Director for Operations, program office directors, and regional administrators. The position of Agency Allegation Advisor was filled on February 6, 1995, and the Advisor issued the first annual report on the allegation program to the Executive Director for Operations in September 1996. All program office and regional office allegation coordinators should participate in periodic counterpart meetings. Three meetings have taken place, and continued annual meetings are planned. The Agency Allegation Manager should conduct periodic audits of the quality and consistency of review panel decisions, allegation referrals, inspection report documentation, and allegation case files. Two rounds of audits have been completed, and audits will be conducted annually to implement this recommendation. Criteria for referring allegations to licensees should be clarified to ensure consistent application among review panels, program offices, and the regions. The criteria were clarified in Management Directive 8.8, issued May 1, 1996. NRC should periodically publish raw data on the number of technical and harassment and intimidation allegations (for power reactor licensees, this should be per site, per year). A report containing these data, Office for Analysis and Evaluation of Operational Data, Annual Report, FY 1994-95: Reactors, was issued in July 1996. NRC should resolve any remaining policy differences between the Office of Investigations and the Office of Nuclear Reactor Regulation on protecting the identity of allegers (including confidentiality agreements) in inspection and investigation activities. Alleger protection was defined in the revised Management Directive 8.8 and in the revised NRC policy statement of May 1996, which implemented the recommendation. Regional offices should provide toll-free 800 numbers for individuals to use in making allegations. 
A toll-free number was activated on October 1, 1995. The Commission should support current consideration within Labor to transfer section 211 implementation from the Wage and Hour Division to OSHA. The order to transfer section 211 cases to OSHA was signed by the Secretary of Labor in December 1996 for implementation on February 3, 1997; NRC supported this change. NRC should recommend to the Secretary of Labor that adjudicatory decisions under section 211 be published in a national reporting or computer-based system. Office of Administrative Law Judges (OALJ) and Secretary of Labor decisions are now available on the World Wide Web. NRC should take a more active role in the Labor process. Consistent with relevant statutes, Commission regulations, and agency resources and priorities, NRC should normally make available information, agency positions, and agency witnesses that may assist in completing the adjudication record on discrimination issues. Such disclosures should be made as part of the public record. NRC should consider filing amicus curiae briefs, where warranted, in Labor adjudicatory proceedings. NRC’s Executive Director for Operations issued the revised criteria for use by the staff in October 1995. Management Directive 8.8, issued in May 1996, contains revised guidance on this issue. NRC should designate the Agency Allegation Manager as the focal point to assist people in requesting NRC information, positions, or witnesses relevant to Labor litigation under section 211 (or state court litigation concerning wrongful discharge issues). Information on this process, and on how to contact the NRC focal point, should be included in the brochure for industry employees (see recommendation II.B-6). This responsibility was given to the Agency Allegation Advisor through Management Directive 8.8 as of May 1996. NRC should revise the criteria for prioritizing NRC investigations involving discrimination. The following criteria should be considered for assigning a high investigation priority: (1) allegations of discrimination as a result of providing information directly to the NRC; (2) allegations of discrimination caused by a manager above first-line supervisor (consistent with current Enforcement Policy classification of severity level I or II violations); (3) allegations of discrimination where a history of findings of discrimination (by Labor or NRC) or settlements suggests a programmatic rather than an isolated issue; and (4) allegations of discrimination that appear particularly blatant or egregious. Management Directive 8.8, issued in May 1996, implemented this recommendation. NRC investigators should continue to interface with Labor to minimize duplication of effort on parallel investigations. Where NRC is conducting parallel investigations with Labor, Office of Investigations procedures should provide that its investigators contact Labor on a case-by-case basis to share information and minimize duplication of effort. Labor’s process should be monitored to determine if NRC investigations should be conducted or continued, or priorities changed. In that regard, settlements should be given special consideration. This recommendation was implemented through the Investigation Procedure Manual, section 3.2.2.10.1. 
When an individual who has not yet filed with Labor brings a harassment and intimidation allegation to NRC, NRC should inform the person (1) that a full-scale investigation will not necessarily be conducted; (2) that Labor and not NRC provides the process for obtaining restitution; and (3) of the method for filing a complaint with Labor. If, after the Allegation Review Board review, the Office of Investigations determines that an investigation will not be conducted, the individual should be so informed. Guidance in Management Directive 8.8, as of May 1996, implemented this recommendation. The Office of Investigations should discuss cases involving section 211 issues with the Department of Justice as early as appropriate so that a prompt Justice declination, if warranted, can allow information acquired by the Office of Investigations to be used in the Labor process. The Investigation Procedure Manual, section 8.2.3, implemented this recommendation. The implementation of the Memorandum of Understanding with the Tennessee Valley Authority (TVA) Inspector General should be reconsidered following the completion of the ongoing review. The Memorandum of Understanding with TVA was terminated on August 30, 1994. For cases that are appealed and result in Labor administrative law judge (ALJ) adjudication, NRC should continue the current practice of initiating the enforcement process following a finding of discrimination by the ALJ. However, the licensee should be required to provide the normal response required by 10 C.F.R. 2.201. This recommendation was implemented through a revision to the Enforcement Policy on December 31, 1994. Additional severity level II examples should be added to the Enforcement Policy to address hostile work environments and discrimination in cases where the protected activity involved providing information of high safety significance. The policy should recognize restrictive agreements and threats of discrimination as examples of violations at least at a severity level III. It should also provide that less significant violations involving discrimination issues be categorized at a severity level IV. This recommendation was implemented through a revision to the Enforcement Policy on December 31, 1994. The Enforcement Policy should be changed, for civil penalty cases involving discrimination violations, to normally allow mitigation only for corrective action. Mitigation for corrective action should be warranted only when it includes both broad remedial action and restitution to address the potential chilling effect. Mitigation or escalation for corrective action should consider the timing of that action. A final revision of the Enforcement Policy in November 1994 implemented this recommendation.
For violations involving discrimination issues not within the criteria for a high priority investigation (see recommendation II.C-7), citations should not normally be issued nor NRC investigations conducted if (1) discrimination, without a complaint being filed with Labor or an allegation made to NRC, is identified by the licensee and corrective action is taken to remedy the situation or (2) after a complaint is filed with Labor, the matter is settled before an evidentiary hearing begins, provided the licensee posts a notice that (a) a discrimination complaint was made, (b) a settlement occurred, and (c) if Labor's investigation found discrimination, remedial action has been taken to reemphasize the importance of the need to be able to raise concerns without fear of retaliation. The Enforcement Policy was revised on November 28, 1994, to implement this recommendation. In taking enforcement actions involving discrimination, use of the deliberate misconduct rule for enforcement action against the responsible individual should be considered. This recommendation was implemented through a revision to the Enforcement Policy on December 31, 1994. Regional administrators and office directors should respond to credible reports of reasonable fears of retaliation, when the individual is willing to be identified, by holding documented meetings or issuing letters to notify senior licensee management that NRC (1) has received information that an individual is concerned that retaliation may occur for engaging in protected activities; (2) will monitor actions taken against this individual; and (3) will consider enforcement action if discrimination occurs, including applying the wrongdoer rule. This recommendation was implemented through guidance in Management Directive 8.8 issued in May 1996. Before contacting a licensee as proposed in recommendation II.E-1, NRC should (1) contact the individual to determine whether he or she objects to disclosure of his or her identity and (2) explain to the individual the provisions of section 211 and the Labor process (e.g., that it is Labor and not NRC that provides restitution). This recommendation was implemented through guidance in Management Directive 8.8 issued in May 1996. The Commission should include in its policy statement (as proposed in recommendation II.A-1) expectations for licensees' handling of complaints of discrimination as follows: (1) Senior management of licensees should become directly involved in allegations of discrimination. (2) Power reactor licensees and large fuel cycle facilities should be encouraged to adopt internal policies providing a holding period for their employees and contractors' employees that would maintain or restore pay and benefits when the licensee has been notified by an employee that, in the employee's view, discrimination has occurred. This voluntary holding period would allow the licensee to investigate the matter, reconsider the facts, negotiate with the employee, and inform the employee of the final decision. After the employee has been notified of the licensee's final decision, the holding period should continue for an additional 2 weeks to allow a reasonable time for the employee to file a complaint with Labor. If the employee files within that time, the licensee should continue the holding period until the Labor finding is made on the basis of an investigation. If the employee does not file with Labor within this 2-week period, then the holding period would terminate.
(Notwithstanding this limitation on the filing of a complaint with Labor to preserve the holding period, the employee clearly would retain the legal right to file a complaint with Labor within 180 days of the alleged discrimination). The holding period should continue should the licensee appeal an adverse Labor investigative finding. NRC would not consider the licensee’s use of a holding period to be discrimination even if the person is not restored to his or her former position, provided that the employee agrees to the conditions of the holding period and that pay and benefits are maintained. (3) Should it be determined that discrimination did occur, the licensee’s handling of the matter (including the extent of its investigation, its effort to minimize the chilling effect, and the promptness of providing restitution to the individual) would be considered in any associated enforcement action. While not adopting a holding period would not be considered an escalation factor, use of a holding period would be considered a mitigating factor in any sanction. An NRC policy statement published in May 1996 implemented this recommendation. In appropriate cases, the Executive Director for Operations (or other senior NRC management) should notify the licensee’s senior management by letter, noting that NRC has not taken a position on the merits of the allegation but emphasizing the importance NRC places on a quality-conscious environment where people believe they are free to raise concerns, and the potential for adverse impact on this environment if the allegation is not appropriately resolved; requesting the personal involvement of senior licensee management in the matter to ensure that the employment action taken was not prompted by the employee’s involvement in protected activity, and to consider whether action is needed to address the potential for a chilling effect; requiring a full report of the actions that senior licensee management took on this request within 45 days; and noting that the licensee’s decision to adopt a holding period will be considered as a mitigating factor in any enforcement decision should discrimination be determined to have occurred. In such cases, prior to issuing the letter the employee should be notified that (a) Labor and not NRC provides restitution and (b) NRC will be sending a letter revealing the person’s identity to the licensee, requiring an explanation from the company and requesting a holding period in accordance with the Commission’s policy statement. NRC’s policy statement and the revision of Management Directive 8.8 in May 1996 implemented this recommendation. Regarding the 45-day time limit of this recommendation, although NRC has not established this requirement in the Management Directive, an official told us the agency does, in fact, give licensees a time limit within which they must reply. A second investigative finding of discrimination within an 18-month period should normally result in a meeting between the licensee’s senior management and the NRC Regional Administrator. The Enforcement Manual was revised on December 31, 1994, to include this wording. If more than two investigative findings of discrimination occur within an 18-month period, NRC should consider stronger action, including issuing a Demand for Information. The Enforcement Manual was revised on December 31, 1994, to include this wording. NRC should develop a standard form to be included with alleger close-out correspondence to solicit feedback on NRC’s handling of a given concern. 
NRC developed a feedback form that it sent to a sample of allegers in December 1995, and it plans to send the form again to another sample in 1997. After that survey, the agency will decide whether to provide feedback forms routinely with close-out correspondence. NRC should revise the Allegation Management System to be able to trend and monitor an allegation from receipt to the completion of agency action. On November 1, 1996, NRC installed a revised Allegation Management System in the regional offices. The system is not yet linked to the Office of Investigations and Office of Enforcement information systems, but NRC plans to do this. Because the system was so recently installed and is not fully linked, monitoring trends through the new system has not yet begun. Using the Allegation Management System, NRC should monitor both harassment and intimidation and technical allegations to discern trends or sudden increases that might justify its questioning the licensee as to the root causes of such changes and trends. This effort should include monitoring contractor allegations—both those arising at a specific licensee and those against a particular contractor across the country. As described for recommendation II.B-13, the system was just recently installed, and more time needs to pass before trends can be tracked using the new system. The Commission should support legislation to amend section 211 as follows: (1) revising the statute to provide 120 days from the filing of the complaint to conduct the Labor investigation, 30 days from the investigation finding to request a hearing, 240 additional days to issue an ALJ decision, and 90 days for the Secretary of Labor to issue a final decision, thus allowing a total of 480 days from when the complaint is filed to complete the process; (2) revising the statute to provide that reinstatement decisions be immediately effective following a Labor finding based on an administrative investigation; (3) revising the statute to provide that Labor defend its findings of discrimination and ordered relief in the adjudicatory process if its orders are contested by the employer (this would not preclude the complainant from also being a party in the proceeding). Legislation has been drafted by NRC and submitted for Labor’s review and approval before submission to the Congress for (1) and (2). The recommendation on Labor’s defense of allegers at the ALJ hearing (3) is awaiting the Secretary’s signature, but implementation would be selective, depending on resource availability. NRC should work with Labor to establish a shared database to track Labor cases. This action was delayed pending the transfer of section 211 duties from the Wage and Hour Division to OSHA. The transfer took place on February 3, 1997, and NRC and OSHA are currently discussing how to implement this recommendation. NRC should usually issue a chilling effect letter if a licensee contests a Labor area office finding of discrimination and a holding period is not adopted. A letter would not be needed if section 211 is amended to provide for reinstatement following a Labor administrative finding of discrimination. When a chilling effect letter is issued, appropriate follow-up action should be taken. (See recommendations II.E-3 and II.C-2.) A revision to the Enforcement Manual on December 31, 1994, requires that NRC assign an enforcement number to each chilling effect letter sent. 
Systematic tracking by NRC has been started, but guidance for follow-up actions and monitoring of trends in plants has not been issued. NRC should consider action when there is a trend in settlements without findings of discrimination. The Enforcement Manual was revised on December 31, 1994, to implement this recommendation. NRC should develop a survey instrument to independently and credibly assess a licensee's environment for raising concerns. This recommendation will not be implemented, according to NRC's Annual Report on the Allegations Program, September 1996, because of disagreement among NRC staff about its effectiveness. A current staff proposal, however, contains actions to partially implement the recommendation. The Commission should seek an amendment to section 234 of the Atomic Energy Act of 1954 to provide for a civil penalty of up to $500,000 per day for each violation. If this provision is enacted, the Enforcement Policy should be amended to provide that this increased authority should usually be used only for willful violations, including those involving discrimination. This recommendation will not be implemented because NRC believes that increasing incentives for strong self-monitoring and corrective action programs would be better accomplished by revising the overall civil penalty assessment process than by raising civil penalty amounts. Pending an amendment to section 234 of the Atomic Energy Act, the flexibility in the enforcement policy should be changed to provide that the base penalty for willful violations involving discrimination, regardless of severity level, should be the amount currently specified for a severity level I violation. This recommendation will not be implemented because NRC believes that increasing incentives for strong self-monitoring and corrective action programs would be better accomplished by revising the overall civil penalty assessment process than by raising civil penalty amounts. The Executive Director for Operations or another senior official at NRC should request, in appropriate cases, that the licensee place an employee in a holding period as described in the Commission's policy statement (see recommendation II.E-3). This part of recommendation II.E-4 will not be implemented, according to NRC's Annual Report on the Allegations Program, September 1996; however, a staff proposal is being considered that would implement it.

This appendix contains the recommendations and their implementation status from the Labor OIG's May 1993 report, Audit of the Office of Administrative Appeals. The Director of the Office of Administrative Appeals (OAA) should conduct an immediate review of cases pending in OAA to resolve the issues that have prevented these cases from being completed and bring these cases to completion as quickly as possible. OAA has cleared the backlog of cases, thus implementing this recommendation. The Director of OAA should establish timeliness standards for OAA's case processing and the issuance of decisions, which will meet the requirements of due process, the intent of the Administrative Procedure Act, and customer service expectations of the Secretary. Action on this recommendation is pending. The Director is currently involved in discussions to obtain agreement on timeliness standards. The Director of OAA should develop and implement management information systems to include case management and time distribution data. The agency has developed and implemented a management information system for cases.
The Director of OAA should conduct an analysis to identify operational changes and resource requirements necessary to achieve and maintain compliance with the newly established case processing standards and present that information in OAA's planning and budgeting documents. Action is pending. Because timeliness standards have not been established, resource needs cannot be evaluated.

The following are GAO's comments on the Nuclear Regulatory Commission's letter dated February 21, 1997.
1. Wording revised.
2. Figure revised as suggested.
3. Discussion of when civil penalties are imposed was deleted from this section.
4. Comment not incorporated. According to Labor procedures, NRC is supposed to receive copies of settlement agreements. We did not obtain evidence on whether these procedures were followed.
5. Incorporated as footnote 14.
6. Corrections made.

The following are GAO's comments on the Assistant Secretary of Labor for Employment Standards' letter dated February 27, 1997.
1. Wording revised.
2. Wording unchanged. We believe that the description of the process in the preceding paragraph adequately conveys that there may be several actions involved at Labor.
3. Wording unchanged. Although the regulation does not specifically state that the 90-day time frame can be waived, current procedures have the same effect as waiving the time frame: Cases are not completed in 90 days. We do not disagree with the Assistant Secretary's comment that the Wage and Hour Division completed the investigative phase as quickly as possible.

In addition to those named above, the following individuals made important contributions to this report: Joan Denomme and Mary Roy gathered and analyzed essential information and drafted the report; Elizabeth Morrison contributed extensively to development and presentation of the report's message; and Gary Boss and Philip Olson provided technical advice concerning Nuclear Regulatory Commission activities.
Pursuant to a congressional request, GAO provided information on the Nuclear Regulatory Commission's (NRC) and Department of Labor's implementation of legislation pertaining to the protection of nuclear power industry workers who raise health and safety issues, focusing on: (1) how federal laws and regulations protect nuclear power industry employees from discrimination for raising health and safety concerns; (2) the implementation status of recommendations made in recent NRC and Labor internal reviews and audits of the system for protecting workers; and (3) the resulting changes to the system. GAO noted that: (1) NRC has overall responsibility for ensuring that the nuclear plants it licenses are operated safely, and the Department of Labor also plays a role in the system that protects industry employees against discrimination for raising health and safety concerns; (2) the Atomic Energy Act, as amended, gives NRC responsibility for taking action against the employers it licenses when they are found to have discriminated against individual employees; (3) NRC can investigate when a harassment and intimidation allegation is filed with NRC or when it receives a copy of a discrimination complaint filed with Labor; (4) NRC's Office of Enforcement may use the results of the NRC investigation or a decision from Labor to support enforcement action; (5) in addition, the Energy Reorganization Act, as amended, authorizes the Secretary of Labor to order employers to make restitution to the victims of such discrimination; (6) restitution can include such actions as reinstatement to a former position, reimbursement of all expenses related to the complaint, and removal from personnel files of any adverse references to complaint activities; (7) concerns raised by employees about a lack of protection under the existing process led to studies begun by NRC and Labor in 1992; (8) these concerns included the inordinate amount of time it took Labor to act on some discrimination complaints and NRC's lack of involvement in cases during Labor's decision process; (9) in response to recommendations in reports from these groups, both NRC and Labor have taken actions intended to improve the system for protecting employees; (10) for example, NRC has established a senior position to centrally coordinate and oversee all phases of allegation management, and it has taken other actions to improve overall management of the system, such as establishing procedures to improve communication and feedback among employees, NRC, and licensees; (11) it has also increased its involvement in allegation cases through several actions, including investigating a greater number of allegations; (12) while NRC and Labor have been responsive to these recommendations, other recommendations, which could be implemented through administrative procedural changes and would further improve the system, still need to be addressed; (13) in addition, NRC and Labor have yet to complete action on recommendations requiring statutory and regulatory changes; and (14) these include recommendations to reduce the financial burden on workers with cases pending and to increase the dollar amount of civil penalties.
The Department of Housing and Urban Development (HUD), through the Federal Housing Administration (FHA), insures mortgages on both single-family homes and multifamily rental housing properties for low- and moderate-income households. In addition to mortgage insurance, many FHA-insured multifamily properties receive some form of direct assistance or subsidy from HUD, such as below-market interest rates or Section 8 rental subsidies tied to some or all units (Section 8 project-based assistance). In an effort to resolve long-standing problems with the segment of the insured multifamily portfolio that both has mortgages insured by FHA and receives project-based Section 8 rental subsidies (the insured Section 8 portfolio), HUD during 1995 proposed a major restructuring process that it called "mark-to-market." In early 1996, HUD made several key changes to its proposal in response to concerns raised by various stakeholders and changed its name for the process from mark-to-market to "portfolio reengineering." HUD left most of the basic thrust of the original mark-to-market proposal intact, however. FHA insurance protects private lenders from financial losses stemming from borrowers' defaults on mortgage loans for both single-family homes and multifamily rental housing properties. When a default occurs on an insured loan, a lender may "assign" the mortgage to HUD and receive payment from FHA for an insurance claim. According to the latest data available from HUD, FHA insures mortgage loans for about 15,800 multifamily properties. These properties contain just under 2 million units and have a combined unpaid mortgage principal balance of $46.9 billion. These properties include multifamily apartments and other specialized properties, such as nursing homes, hospitals, student housing, and condominiums. HUD's Section 8 program provides rental subsidies for low-income families. These subsidies are linked either to multifamily apartment units (project-based) or to individuals (tenant-based). According to HUD's latest available data, about 1.4 million units at about 20,400 multifamily properties receive Section 8 project-based subsidies. Under the Section 8 program, residents in subsidized units generally pay 30 percent of their income for rent and HUD pays the balance. According to HUD's data, monthly Section 8 payments to HUD-insured properties average about $300 to $500 per unit. According to HUD, its restructuring proposals apply to 8,636 properties that both have mortgages insured by FHA and receive project-based Section 8 rental subsidies for some or all of their units. In this report, we refer to these properties as HUD's insured Section 8 portfolio. Data provided by HUD show that, together, these properties contain 859,000 units and have unpaid principal balances totaling $17.8 billion. For various reasons, HUD chose to exclude from its restructuring proposals properties with project-based Section 8 assistance that are insured under its "moderate rehabilitation" program. HUD estimates that about 167 properties containing about 16,800 units are insured under this program. Figure 1.1 shows how the insured Section 8 portfolio fits into HUD's overall multifamily housing portfolio. (The figure excludes properties with HUD-held mortgages, as well as the 167 properties, containing about 16,800 units, with project-based assistance provided under the Section 8 "moderate rehabilitation" program.) According to HUD's data, about 45 percent of the insured Section 8 portfolio (3,859 properties) consists of "older assisted" properties.
These were constructed beginning in the late 1960s under a variety of mortgage subsidy programs, to which project-based Section 8 assistance (Loan Management Set-Aside) was added later, beginning in the 1970s, to replace other subsidies and to help troubled properties sustain operations. About 55 percent of the insured Section 8 portfolio (4,777 properties) consists of “newer assisted” properties. These were built after 1974 under HUD’s Section 8 New Construction and Substantial Rehabilitation programs and received project-based Section 8 subsidies calculated on the basis of formulas with automatic annual adjustments, which, according to HUD, tended to be relatively generous to encourage the production of affordable housing. Figure 1.2 provides additional data on the insured Section 8 portfolio. The project-based Section 8 assistance for properties in the insured Section 8 portfolio is covered by contracts, many of which are for long terms. Under these contracts, property owners agreed to house lower-income tenants for specified periods in exchange for guaranteed rental subsidies for specified units. In the next few years, many of these contracts will expire. According to the available data from HUD, contracts covering about 69 percent of the project-based Section 8 units in the insured Section 8 portfolio will expire by the end of the year 2000 and contracts covering about 98 percent of the units will expire by the end of the year 2006. (See fig. 1.3.) In the early 1990s, most expiring contracts were renewed for 5-year periods, but the terms of Section 8 contracts have been gradually shortened since then. To improve its budgeting for contract renewals, HUD proposes to renew all contracts for 1-year terms, beginning in fiscal year 1997. The insured Section 8 portfolio suffers from three basic problems—high subsidy costs; high exposure to insurance loss; and, in the case of some properties, poor physical condition. A substantial number of properties in the insured Section 8 portfolio now receive subsidized rents above market levels. Many of these rents substantially exceed the rents charged for comparable unsubsidized units. This problem is most prevalent in (but not confined to) the newer assisted segment of the portfolio, where it stems from the design of the Section 8 New Construction and Substantial Rehabilitation programs. The government originally paid to develop these properties under the two Section 8 programs by establishing rents above market levels and then raising them regularly through the application of set formulas that, according to HUD, tended to be generous to encourage the production of new affordable housing. The high cost of Section 8 subsidies is reflected in the cost of renewing the existing project-based contracts for the properties in the insured Section 8 portfolio as they expire. HUD is requesting $863 million in budget authority for fiscal year 1997 to renew expiring contracts covering almost 293,000 insured Section 8 units. As its long-term Section 8 contracts expire and its 1-year contracts are renewed annually, HUD estimates that its annual renewal costs will increase steadily in each of the following 9 fiscal years, resulting in an estimated annual renewal cost of about $6.7 billion by the year 2006 and a 10-year cumulative renewal cost approaching $45 billion. A second key problem affecting the insured Section 8 portfolio is the high risk of insurance loss. Under FHA’s insurance program, HUD bears virtually all the risk in the event of a loan default. 
According to a recent HUD-contracted study of the Department’s capacity to manage the assisted multifamily portfolio’s financial risk, HUD’s multifamily insurance program depends upon the actions of private parties whose share in the risk and stake in the properties’ financial success may be limited. The study points out that instead of bearing the financial risk of default, private lenders may have a more limited stake in the continuation of mortgages through their servicing rights. Rather than having substantial equity invested in the properties, the owners may possess indirect interests that are hard for HUD to evaluate. Borrowers are often structured into partnerships in which the general partners, who are responsible for the properties’ day-to-day management, may have interests in property management fees through affiliated firms. HUD’s fiscal year 1994 loan loss reserve analysis evaluated the risk of default and insurance loss for a sample of multifamily properties on the basis of a set of financial, physical, and management data. The properties were categorized as excellent, good, standard, substandard, or doubtful, and degrees of risk were assigned on the basis of these categories. According to the analysis, 48 percent of the older assisted properties and 20 percent of the newer assisted properties had a medium to high risk of default. This risk could increase substantially if the properties’ Section 8 contracts are not renewed or are renewed at substantially lower levels. Poor physical condition is a third key problem affecting many properties in the insured Section 8 portfolio. A 1993 study of multifamily rental properties with FHA-insured or HUD-held mortgages found that almost one-fourth of the properties were “distressed.” The properties were considered to be distressed if they failed to provide sound housing and lacked the resources to correct their deficiencies or if they were likely to fail financially. The problems affecting HUD’s insured Section 8 portfolio have several causes. These include (1) program design flaws that have contributed to high subsidies in the Section 8 program and have put virtually all the risk on HUD in the insurance program; (2) HUD’s dual role as both the mortgage insurer and the rental subsidy provider, which has resulted in the federal government’s averting claims against FHA’s insurance fund by supporting a subsidy and regulatory structure that has masked the true market value of the properties; and (3) weaknesses in HUD’s oversight and management of the insured portfolio, which have allowed physical and financial problems at a number of HUD-insured multifamily properties to go undetected or uncorrected. According to a September 1995 paper prepared by the Affordable Housing Preservation Tax Policy Group, a related problem is that the limited-partner investors in many of the properties no longer have an economic incentive to invest, or an interest in investing, additional capital to pay for improvements, such as new roofs, boilers, and updated appliances, which many properties are now starting to need. In May 1995, HUD proposed to address the key problems affecting the insured Section 8 portfolio through a process that it called “mark-to-market.” The principal steps in this process were to reset rents to market levels and reduce mortgage debt if necessary to permit a positive cash flow, terminate FHA’s mortgage insurance, and replace project-based Section 8 subsidies with portable tenant-based subsidies. 
The basic idea behind HUD’s mark-to-market proposal was to address the three key problems and their causes by decoupling HUD’s mortgage insurance and project-based rental subsidies and subjecting the properties to the forces and disciplines of the commercial market. HUD originally proposed to do this by (1) eliminating project-based Section 8 subsidies as existing contracts expired (or sooner if the owners agreed), (2) allowing owners to rent their apartments for whatever amounts the marketplace would bear, (3) facilitating the refinancing of FHA-insured mortgages with smaller mortgages if needed for the properties to operate at the new rents, (4) terminating FHA’s insurance on the mortgages, and (5) providing the residents of assisted units with portable Section 8 rental subsidies that they could use to either stay in their current apartment or move to another one in accordance with their wishes or financial needs. HUD recognized that many owners could not cover their expenses and might eventually default on their mortgages if their properties were forced to compete in the commercial marketplace without their project-based Section 8 subsidies. The mark-to-market proposal therefore included several alternatives for restructuring the program’s FHA-insured mortgages to bring properties’ income and expenses into line. These alternatives included selling the mortgages, engaging third parties to work out restructuring arrangements, and paying full or partial FHA insurance claims to lenders to reduce the mortgage debt and monthly payments. Each of these alternatives would likely expose HUD to claims against FHA’s insurance fund, but HUD estimated that over the long term this approach would cost the government less than maintaining the status quo. The proposed mark-to-market process would likely affect properties differently, depending on whether their existing rents were higher or lower than market rents and whether they needed funding for capital items, such as deferred maintenance. If the existing rents exceeded market value, the process would lower the mortgage debt, thereby allowing the property to operate and compete effectively at lower market rents. If the existing rents were below market value, the process would allow the owner to increase the rents, potentially providing more money to improve and maintain the property. HUD recognized, however, that some properties would not be able to generate enough income to cover their expenses even if their mortgage payments were reduced to zero. In these cases, HUD proposed using alternative strategies, including demolishing the property and subsequently selling the land to a third party, such as a nonprofit organization or government entity. Although both the Senate and the House held hearings on the mark-to-market proposal, no consensus was reached as to whether it or some other approach should be adopted. No action was taken, in part because reliable information was not available on the properties and their surrounding commercial rental markets. Potential stakeholders raised questions about the proposal that could not be answered, including the following: (1) What are the physical and financial conditions of the properties that make up the insured Section 8 portfolio? (2) What different effects would the proposal have at different types of properties? (3) Would the government realize net savings or incur additional costs in the long run? (4) To what extent would low-income residents be displaced or have to pay higher rents? 
(5) To what extent could such residents find suitable and affordable alternative housing if they chose to or had to? (6) To what extent would possible income tax consequences and other negative effects on owners cause them to oppose the proposal and hamper HUD’s efforts to implement it? and (7) To what extent would owners with substantial time left on their Section 8 contracts disinvest and let their properties deteriorate? Without this information, it was difficult to predict the overall effects of HUD’s mark-to-market proposal on the properties, their owners, the residents, and the federal government. HUD contracted with Ernst & Young LLP in 1995 to obtain up-to-date information on market rents and the physical condition of properties in the insured Section 8 portfolio, develop a financial model to show how HUD’s proposal would affect the properties, and estimate the subsidy and insurance claims costs associated with the proposal. (See ch. 2 for our analysis of Ernst & Young’s study.) In April 1996, before Ernst & Young completed its study, HUD modified the original mark-to-market proposal in several ways in response to concerns raised by industry officials and resident groups about various issues, such as the elimination of project-based subsidies and the termination of FHA insurance, and changed the name of the process from mark-to-market to portfolio reengineering. HUD left the basic thrust of the original proposal intact but made several key changes. These included (1) giving priority attention for at least the first 2 years to properties with subsidized above-market rents while continuing to discuss approaches with stakeholders for solving capital needs at properties with expiring contracts and subsidized below-market rents; (2) allowing state and local governments to decide whether to continue Section 8 project-based rental subsidies at individual properties after their mortgages are restructured or switch to tenant-based assistance; and (3) allowing owners to apply for FHA insurance on the newly restructured mortgage loans. HUD’s portfolio reengineering proposal further differed from the original mark-to-market proposal in that it (1) put more emphasis on proactively using third parties to restructure and resolve problems with mortgages before properties’ project-based Section 8 contracts expire; (2) better protected current residents from displacement by providing those in assisted apartment units with “enhanced vouchers” that would pay the difference between 30 percent of their income and the market rent for their building (even if that rent exceeded the normal Section 8 limits) and by providing rental assistance to currently unassisted residents if restructuring increased their rent to more than 30 percent of their income; and (3) reflected HUD’s willingness to work with the Congress on developing mechanisms to take into account the tax consequences to the owners of properties whose mortgage debt would be forgiven as part of the restructuring process. More recently, HUD has also proposed deferring action on properties that would not be able to generate enough income to cover their operating expenses after reengineering until strategies have been developed to address the needs of their residents and of the communities in which the properties are located. 
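To make the arithmetic of the enhanced voucher concrete, the following worked example uses hypothetical figures; the household income and market rent are chosen only for illustration and are not drawn from HUD's or Ernst & Young's data. Under the 30-percent rule described earlier, a household with an annual income of $12,000 living in a unit whose restructured market rent is $700 per month would pay

$$0.30 \times \frac{\$12{,}000}{12} = \$300 \text{ per month},$$

and the enhanced voucher would cover the remaining

$$\$700 - \$300 = \$400 \text{ per month},$$

an amount that falls within the $300 to $500 range of average monthly Section 8 payments per unit cited earlier.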
To assist the Congress in evaluating HUD’s proposal for reengineering the insured Section 8 multifamily housing portfolio, we examined the (1) problems affecting the properties in the portfolio and HUD’s proposals for addressing them; (2) results and reasonableness of a HUD-contracted study carried out by Ernst & Young LLP that assesses, on the basis of a national sample of 558 randomly selected properties, the effects of HUD’s proposal on the portfolio; and (3) key issues facing the Congress in assessing HUD’s proposal. In addition, as discussed in appendix I, we examined the characteristics of 10 properties included in Ernst & Young’s study and the impact of HUD’s proposal on them. To obtain information on the problems affecting the properties in HUD’s insured Section 8 portfolio and HUD’s proposals for dealing with them, we reviewed relevant reports issued by GAO, HUD’s Office of Inspector General (OIG), and HUD. We also reviewed HUD documents discussing the Department’s mark-to-market and portfolio reengineering proposals, as well as comments on the proposals provided by groups representing the multifamily housing industry and residents. We also discussed the proposals with HUD and industry officials and participated in four forums that HUD held in early 1996 to discuss problems pertaining to the insured Section 8 properties and options for addressing them. To evaluate the results and reasonableness of Ernst & Young’s study, we were briefed by staff from Ernst & Young and HUD on the approaches that Ernst & Young planned to use to carry out its study and on the actual methods used. The briefings included discussions about Ernst & Young’s sampling and statistical methods, market surveys for estimating the market rents for the insured Section 8 properties, site inspections for estimating the properties’ deferred maintenance and capital needs, and the financial model for determining the effect of portfolio reengineering on the properties and estimating the costs and savings associated with reengineering. We reviewed selected aspects of Ernst & Young’s sampling and statistical methodology. For example, we reviewed the computer programs that Ernst & Young used to select sample projects and reviewed the statistical methods that the firm planned to use to estimate population totals from the sample. We also reviewed the presentation of information derived from the sample in Ernst & Young’s May 2, 1996, report. When data are missing for sampled projects, a potential exists for the results of the sample to create a biased representation of the entire population of projects. In addition, assigning values on the basis of the observed sample mean can cause the sampling errors to be somewhat understated. We checked the completeness of the data collected for Ernst & Young’s sampled projects. Only one project subject to portfolio reengineering was excluded from the study. For the 558 projects included in the study, the data collection was generally complete. About 85 percent of the projects in the final sample had complete data. For the remaining projects, one or more of the following were missing: (1) data from financial statements, (2) data on tenants’ payments, and (3) data on deferred maintenance. Tenant payment data were missing most frequently—about 12 percent of the time. Financial statement data and deferred maintenance data were missing no more than 3 percent of the time. 
When data were missing for a project, Ernst & Young assigned a value to it based on the average of the known sample properties or industry standards. (The overall reasonableness of Ernst & Young’s study is discussed in ch. 2.) To evaluate Ernst & Young’s estimates of market rents, we reviewed the firm’s methodology for performing market surveys and, as discussed in greater detail below, contracted with three licensed real estate appraisal firms to estimate the market rents for 10 properties in Ernst & Young’s sample. To assess Ernst & Young’s estimates of deferred maintenance needs and capital costs, we met with Ernst & Young officials to understand the firm’s methodology and underlying assumptions. We also obtained and analyzed related data collection documents used in the firm’s study, including the instructions to those conducting on-site property inspections and the completed inspection forms and supporting documentation for the 10 properties independently assessed by the contract appraisers. We also discussed Ernst & Young’s methodology with industry representatives and provided Ernst & Young’s estimates for the 10 properties to the respective owners and managers and to the contract appraisers. We asked those who reviewed Ernst & Young’s estimates to comment on the reasonableness and accuracy of the estimates; to state whether they generally agreed or disagreed with the estimates; and if they disagreed with an estimate, to provide specific information on the adjustments needed and the reasons for the adjustments. To review Ernst & Young’s financial model for assessing the effects of portfolio reengineering on the sample properties, we obtained a copy of the model and discussed the assumptions used in it with Ernst & Young staff. Because the model contains hundreds of data fields, formulas, and assumptions, we did not attempt to examine every data element or verify every formula or calculation. Rather, we focused on assessing the structure of the model and reviewed its key data elements and the logic of what we considered to be its major assumptions. We also discussed the financing and operating assumptions used in the model with officials of various organizations that have expertise in underwriting and/or servicing mortgages on multifamily housing properties (including Fannie Mae, Freddie Mac, the Reilly Mortgage Group, and GMAC Commercial Mortgage Corporation). Our assessment of the model is discussed in chapter 2. As discussed in chapter 2, we used information obtained from these experts to perform sensitivity analyses that assess the effects of changes in the assumptions on the model’s results. We also used data from Ernst & Young’s sample to estimate certain costs. These estimates apply to the 8,363 projects from which the sample was drawn. Had we made estimates for the number of properties that Ernst & Young assumed to be affected by portfolio reengineering (8,563 properties), our estimates of the totals would have been about 2 percent higher. As discussed earlier, HUD now believes that 8,636 properties would be affected by its proposal. We did not verify the accuracy of the data that Ernst & Young derived from HUD’s data systems for use in its study except for certain data pertaining to the 10 case study properties. We found that the final data used in Ernst & Young’s study for these properties were generally consistent with the data we obtained. HUD’s OIG conducted a more detailed assessment of the data that Ernst & Young derived from HUD’s information systems. 
The OIG tested 69 of the 189 data elements that Ernst & Young used for 56 projects. The OIG found differences between the data it obtained and the data Ernst & Young used for 423 of the 3,864 data elements it reviewed, 114 of which the OIG determined to be significant. The OIG shared the results of its analysis with Ernst & Young and HUD. Ernst & Young officials informed us that they had used the OIG’s results to improve the study’s data. We provided comments to HUD and Ernst & Young about issues that arose throughout the study’s design and implementation. HUD and Ernst & Young officials were generally responsive to our concerns, replacing their original sample, for example, with one that they could analyze using appropriate statistical methods. We obtained data on the characteristics of 10 properties included in Ernst & Young’s sample and assessed the effects of HUD’s proposal on the properties. We selected these properties judgmentally from a list of properties in Ernst & Young’s sample. The 10 properties are not statistically representative of the properties in either HUD’s insured Section 8 housing portfolio or Ernst & Young’s sample. We selected the 10 properties to reflect differences in geographical location (they are located in six states and the District of Columbia), assisted rent levels, and physical condition (as indicated in physical inspection reports from HUD). We did not have information on many characteristics of the properties—such as how their assisted rents compared with the market rents they could command, who resided in them, and what types of housing markets they were located in—when we selected the properties. To obtain data on the properties’ characteristics, we visited each property and interviewed its manager and/or owner. We also obtained data on the properties’ characteristics from HUD’s field office and property records. We provided the basic data we obtained on each property to the property owner or manager for review and verification. To develop estimates of the market rents that the properties could command and assessments of the effects that portfolio reengineering would have on the properties, we contracted for the services of three licensed real estate appraisal firms with experience in assessing properties insured or assisted by HUD: Goyette Roark Appraisal Services; Maiden, Haase & Smith, Ltd.; and Miller Appraisal Review. The firms provided us with a report on each of the properties they reviewed. We also obtained comments on each appraisal report from the property’s owner or manager. The results of the reports are summarized in appendix V. We identified and formulated our observations on the key issues facing the Congress through our review of (1) HUD’s proposals, (2) comments on HUD’s proposals and alternative proposals prepared by various parties representing the views of those who would be affected, and (3) testimony provided at several congressional hearings, as well as our discussions with housing and lending industry officials and with the owners, managers, and selected tenant representatives at the 10 case study properties. We provided a draft copy of this report to HUD for its review and comment. HUD provided written comments on the draft, and these comments are presented and evaluated in chapter 2 and appendix VI. We conducted our review from August 1995 through September 1996 in accordance with generally accepted government auditing standards.
In May 1996 Ernst & Young reported on the results of its study analyzing the effects of HUD’s original mark-to-market proposal on insured Section 8 properties. Ernst & Young’s study indicates that for most of the properties subject to portfolio reengineering, the assisted rents are greater than the estimated market rents. In addition, according to the study, the properties have significant amounts of immediate deferred maintenance and short-term and long-term capital needs. The study further indicates that about 80 percent of the properties would need to have their debt reduced in order to continue operations after reengineering. For approximately 22 to 29 percent of the properties, writing down the existing debt to zero would not reduce their costs enough for them to cover their operating expenses and/or address their deferred maintenance and capital needs. Ernst & Young’s report does not present information gathered during the study on the costs of portfolio reengineering to the government—that is, on how the costs of providing Section 8 assistance would change and what the likely claims against FHA’s insurance fund would be. Our analysis of these data indicates that although the costs of Section 8 assistance would eventually be lower under portfolio reengineering than under the current renewal policies, little or no Section 8 savings would be achieved over the next 10 years if all Section 8 properties were reengineered when their current Section 8 contracts expire. Furthermore, Ernst & Young’s data indicate that the cost of insurance claims associated with the reengineering proposal during the 10-year period would amount to between $6 billion and $7 billion. Ernst & Young’s financial model provides a reasonable framework for projecting the overall results of portfolio reengineering, such as the number of properties that would need to have their debt reduced. Furthermore, we did not identify any substantive problems with the model’s sampling and statistical methodology. However, some assumptions used in the financial model may not reflect the way in which insured Section 8 properties would actually be affected by portfolio reengineering. In addition, our comparison of Ernst & Young’s data with the information we gathered on our 10 case-study properties raises questions about one key data element—the estimated costs of deferred maintenance and capital needs. Specifically, the owners or managers of the 10 properties and the independent appraisers we retained questioned the model’s cost estimates for deferred maintenance at the properties, generally indicating that the estimates were too high. To assess the extent to which the use of different assumptions would affect the results of Ernst & Young’s study, we performed sensitivity analyses of Ernst & Young’s model using two sets of revised assumptions that we developed through our discussions with multifamily housing industry officials. One scenario reflects assumptions that are more optimistic in terms of the cost to the government of portfolio reengineering. The other uses assumptions that are more conservative or pessimistic. Under all scenarios—Ernst & Young’s results and the optimistic and pessimistic variations—a substantial number of properties would likely do well and others would have difficulty sustaining operations. 
In early 1995, when HUD proposed the mark-to-market initiative, the Department did not have current or complete information on the insured Section 8 portfolio to use as a basis for developing assumptions about, and estimates of, the costs and effects of the proposal. For example, HUD lacked reliable, up-to-date information on both the market rents that the properties could be expected to command and the properties’ physical condition—two variables that strongly influence the effects on properties of the mark-to-market proposal. Information on market rents and physical condition is also needed to estimate (1) the change in Section 8 subsidy costs if assisted rents are replaced with market rents and (2) the claims against FHA’s insurance fund if mortgage debt is reduced to allow the properties to operate at market rents. Because HUD did not have current data on the market rents and physical condition of the properties in the insured Section 8 portfolio, the Department had to rely on data collected for HUD’s 1990 multifamily stock study. An update to this study assessing changes in the stock since 1990 was scheduled to begin in the fall of 1995, but the results were not expected to be available for some time. To obtain interim data to better assess the likely outcomes of the mark-to-market proposal, HUD contracted with Ernst & Young LLP in 1995 for a study of a random sample of HUD-insured properties with Section 8 assistance to (1) determine the market rents and physical condition of the properties and (2) develop a financial model to show the effects of the proposal on the properties and to estimate the costs of the subsidies and claims associated with the proposal. The study was conducted on a sample of 558 properties out of 8,363 properties and extrapolated to the total population of 8,563 properties identified by HUD at that time as representing the population subject to portfolio reengineering. The study was planned to take about 2 months and be completed in 1995. However, the study took longer than estimated, in large part because of delays in completing the physical inspections and the fiscal year 1996 federal budget impasse, which required many government agencies, including HUD, to shut down operations for various periods last fall and winter. HUD and Ernst & Young released the report summarizing the study’s findings on May 2, 1996. Ernst & Young’s report provides current information comparing assisted rents at the properties with market rents, assessing the physical condition of the properties, and estimating the effects on the properties of HUD’s reengineering proposal as it existed while the study was under way. Hence, the study’s results do not reflect the effects of changes that HUD made to its proposal in early 1996. Ernst & Young’s report estimates that the majority of the properties have assisted rents that exceed market rents and significant amounts of immediate deferred maintenance and future capital needs. The analysis also indicates that about 80 percent of the properties would not be able to continue operations without debt restructuring. Ernst & Young conducted market surveys to estimate market rents at the properties. The properties whose assisted rents currently exceed market rents would generate less rental income after reengineering; therefore, they would likely have difficulty meeting their existing debt service requirements when their rents were adjusted to market levels. 
Ernst & Young’s study estimates that a majority of the properties—between 60 and 66 percent—have above-market rents and between 34 and 40 percent have below-market rents. Most of the properties with assisted rents that exceed market rents are newer assisted properties. Conversely, most of the properties with assisted rents that are less than market rents are older assisted properties. During fiscal years 1997 through 1999, most of the properties whose Section 8 contracts are scheduled to expire are older assisted properties, whereas from fiscal year 2000 and beyond, most of the properties with such contracts are newer assisted properties. The properties whose assisted rents are more than 120 percent of market levels are of special concern because they would likely experience substantial decreases in rental income. The Ernst & Young study estimates that between 41 and 47 percent of the properties have such rents. Ernst & Young hired an engineering firm, Louis Berger & Associates, to identify the properties’ comprehensive capital needs. In order to obtain new loans, the property owners will need sufficient resources to address immediate deferred maintenance as well as future capital needs. As table 2.1 shows, Ernst & Young’s study indicates that the properties have widespread capital needs, totaling between $9.2 billion and $10.2 billion. The study defines capital needs as the costs of the improvements needed to bring the properties into adequate physical condition to attract uninsured, market-rate financing. Three categories of capital needs are defined: (1) immediate deferred maintenance, or the estimated costs to bring all operating systems up to market conditions and lenders’ underwriting standards, (2) the short-term capital backlog, or the estimated expired costs for subsystems and components with a remaining useful life of 5 years or less, and (3) the long-term capital backlog, or the estimated expired costs for subsystems and components with a remaining useful life of more than 5 years. The immediate and short-term capital costs are a significant factor in determining the impact of portfolio reengineering on the properties. The study estimates that the properties have only approximately $1.3 billion to $1.6 billion in replacement reserves (i.e., funds set aside to cover future capital needs) and other cash reserves that could be used to address their capital needs, resulting in total net capital needs of between $7.7 billion and $8.7 billion. The average cost per unit of the total capital needs, less the reserves, is estimated to be between $9,116 and $10,366. The study indicates that while the older assisted properties have a high level of capital needs, the newer assisted properties also require a significant investment. For example, the older properties have needs ranging between $3.8 billion and $4.4 billion for immediate deferred maintenance and short-term capital backlog, and the newer properties have needs ranging between $2.5 billion and $3.1 billion. On a per-unit basis, these amounts average between $8,665 and $10,217 for the older properties and between $6,201 and $7,491 for the newer ones. The study was designed to use information on market rents and the properties’ physical condition gathered by Ernst & Young, as well as financial and Section 8 assistance data from HUD’s data systems, in a financial model designed to predict the proposal’s effects on the portfolio as a whole.
Specifically, the model estimates the properties’ future cash flows over a 10-year period, assuming that the loans will be reengineered (marked to market) when their current Section 8 contracts expire. The model classifies the loans into four categories—performing, restructure, full write-off, and nonperforming—that reflect the effects of reengineering on the properties. A property’s placement in one of the four categories is based on the extent to which the income from the reengineered property would be able to cover its operating costs, debt service payments, and immediate deferred maintenance and short-term capital expenses. If portfolio reengineering were implemented, Ernst & Young estimates that about 80 percent of the properties—with current estimated unpaid principal balances ranging from $12.6 billion to $14.5 billion—would have to have their existing mortgage debt reduced. In addition, approximately 22 to 29 percent of the properties would not meet all of their needs even if their debt were written down to zero. The study further estimates that between 11 and 15 percent of the properties would not even be able to cover all of their operating expenses. Table 2.2 provides an overview of the results. Ernst & Young’s model estimated the subsidy costs for HUD’s insured Section 8 portfolio before and after reengineering and the claims against FHA’s insurance fund entailed in writing down the mortgages and addressing the deferred maintenance needs at the properties. However, Ernst & Young’s May 2, 1996, report does not present this information. According to HUD’s Deputy Assistant Secretary for Operations, HUD plans to use Ernst & Young’s cost data in developing future budget estimates relating to portfolio reengineering, but it never intended that the cost data be included in Ernst & Young’s May 1996 report or that the model generate budget estimates. For various reasons, the cost estimates in HUD’s fiscal year 1997 budget request and in Ernst & Young’s study differ. For example, the budget request assumes that many loans will be reengineered before the related Section 8 contracts expire, while Ernst & Young’s study assumes that reengineering will occur after the contracts expire. In addition, according to HUD, the budget assumes that Section 8 subsidy costs increase at a faster rate than Ernst & Young’s study assumes. In the model, the claims costs include (1) the amount of debt reduction needed for each property to sustain its operations at market rents and (2) funding for some or all of the property’s immediate deferred maintenance and short-term capital needs. However, the claims costs cannot exceed the unpaid principal balance of the loan at the time of its restructuring. For a property whose estimated capital needs exceed its loan’s unpaid principal balance, any unresolved capital needs are tracked in the model. In addition, the claims costs are based on an evaluation, for each property, of the loan amount that the property could support using standard financial underwriting standards without the continuation of FHA insurance. Our analysis of these data indicates that although the costs of providing Section 8 rental assistance would decrease over the long term, little or no aggregate savings in Section 8 rental assistance costs would accrue over the next 10 years if, as the model assumes, all insured Section 8 properties were reengineered when their current Section 8 contracts expire.
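A schematic rendering of this four-way classification may help. The decision rules below are a simplified, single-year reading of the description above (the actual model works from a 10-year projection and more detailed tests), the thresholds separating the lower categories are our own, and all dollar figures are hypothetical.

    # Schematic version of the four-way loan classification described in the text.
    # The real model evaluates multiyear cash flows; this sketch compares
    # single-year amounts and uses simplified thresholds of our own devising.

    def classify_property(market_revenue, operating_expenses, debt_service, capital_costs):
        noi = market_revenue - operating_expenses      # income at market rents
        if noi >= debt_service + capital_costs:
            return "performing"        # can carry existing debt and capital needs
        if noi >= capital_costs:
            return "restructure"       # viable with a smaller mortgage
        if noi > 0:
            return "full write-off"    # viable only with the debt written to zero
        return "nonperforming"         # cannot cover operating expenses

    print(classify_property(900_000, 500_000, 250_000, 100_000))  # -> performing
    print(classify_property(700_000, 500_000, 250_000, 100_000))  # -> restructure
    print(classify_property(560_000, 500_000, 250_000, 100_000))  # -> full write-off
    print(classify_property(450_000, 500_000, 250_000, 100_000))  # -> nonperforming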
Ernst & Young’s data indicate that, for the period from fiscal year 1996 through fiscal year 2005, there may be little difference in the aggregate costs of Section 8 assistance under the current program and under portfolio reengineering: If project-based assistance is continued at current levels (including inflation), the costs in present value terms are estimated to be between $27.2 billion and $31.0 billion. The cost of Section 8 assistance after reengineering is estimated to be between $26.5 billion and $29.8 billion. A primary reason for the similarity in cost estimates is that the model assumes projects would be reengineered when their current Section 8 contracts expire. This assumption reflects HUD’s contractual obligations, which the Department has repeatedly indicated that it will not abrogate. Because the contracts for many properties with below-market rents will expire during the first part of the 10-year period and the properties would therefore be reengineered early in the process, the costs of providing Section 8 assistance would increase during the early years but then begin to decrease as more projects with above-market rents were reengineered in the later years. In fiscal year 2005, after virtually all of the projects have been reengineered, the Section 8 assistance costs are estimated to be between $1.9 billion and $2.2 billion per year on a present value basis. The model indicates that annual savings of between $298 million and $493 million (between 13 and 19 percent) could subsequently be achieved if reengineering were implemented in place of the current program. However, Ernst & Young’s model does not reflect the changes that HUD made to its proposal in early 1996. Some of the changes offer the potential for additional Section 8 cost savings. For example, HUD is proposing to use a proactive approach to portfolio reengineering, under which it would encourage owners to terminate their Section 8 contracts voluntarily before the contracts expire and go through the reengineering process. However, it is not clear to what extent HUD will succeed in attracting owners to restructure before their Section 8 contracts expire or what additional incentives HUD may have to offer to achieve this goal. In addition, HUD now plans to focus initially on reengineering properties with above-market rents. To the extent that portfolio reengineering focuses on such properties, the savings would increase. For example, Ernst & Young’s data indicate that the 10-year costs of providing Section 8 assistance for properties with above-market rents would be between $21.2 billion and $25 billion under the current program compared with between about $18.5 billion and $21.5 billion if the loans for such properties were restructured when their Section 8 contracts expire. In addition, some further savings would result if, as Ernst & Young’s model assumes, mortgage interest subsidies were terminated when projects were reengineered. Ernst & Young estimates that without reengineering, mortgage interest subsidies would range from about $841 million to $1.1 billion (in present value terms) over the next 10 years. However, most properties that receive interest subsidies are believed to have below-market rents. Our analysis of Ernst & Young’s data indicates that, under portfolio reengineering, the claims against FHA’s multifamily insurance funds—for mortgage write-downs and deferred maintenance and other capital needs for properties with mortgages that need restructuring—would be substantial.
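The claims figures that follow reflect the rule described earlier: for each restructured loan, the claim equals the debt write-down needed for the property to operate at market rents plus the funded immediate and short-term capital needs, capped at the loan’s unpaid principal balance. A minimal sketch of that rule, with hypothetical amounts and function names of our own, is shown below; it is not Ernst & Young’s actual model.

    # Illustrative sketch of the per-property claims rule described in the text.
    # Figures are hypothetical; the actual model also applies underwriting tests
    # to determine the loan amount each property could support.

    def estimated_claim(upb, supportable_debt, capital_needs):
        """Return (claim, unresolved_capital) for one restructured loan.

        upb              -- unpaid principal balance at restructuring
        supportable_debt -- loan amount the property could support at market rents
        capital_needs    -- immediate deferred maintenance plus short-term backlog
        """
        write_down = max(upb - supportable_debt, 0.0)
        claim = min(write_down + capital_needs, upb)   # claim cannot exceed the UPB
        unresolved_capital = max(write_down + capital_needs - upb, 0.0)
        return claim, unresolved_capital

    # $3.0 million UPB, $1.8 million supportable debt, $0.9 million capital needs.
    print(estimated_claim(3_000_000, 1_800_000, 900_000))   # -> (2100000.0, 0.0)
    # A case where the cap binds: the shortfall above the UPB is tracked separately.
    print(estimated_claim(1_000_000, 400_000, 900_000))     # -> (1000000.0, 500000.0)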
The mortgage balances for such properties—including those in the full write-off and nonperforming categories whose mortgages would be fully written off—would need to be reduced by between 61 and 67 percent. Over the next 10 years, according to Ernst & Young’s data, this reduction would result in claims costs, calculated on a present value basis, of between $6 billion and $7 billion. If, however, HUD’s proactive approach were successful, the costs of claims to cover mortgage write-downs could be higher than indicated in Ernst & Young’s study because (1) the loans would be restructured earlier when the unpaid principal balances were higher and (2) the present value of the claims occurring in the earlier years would be higher. However, HUD believes that without a proactive approach, owners would disinvest in the properties. Such disinvestment would have an adverse impact on the properties’ physical condition, resulting in higher claims costs at a later date. The claims payments estimated in Ernst & Young’s study indicate substantial loan loss rates for the government. For example, the portfolio reengineering claims for properties with assisted rents that exceed market rents are estimated to be between $4.8 billion and $5.8 billion and the related unpaid principal balances at the time of restructuring are estimated to be between $6.9 billion and $8.1 billion. The estimated loss rate would be between 67 and 75 percent. Table 2.3 provides the claims, unpaid principal balances, and loss rates for the properties subject to portfolio reengineering. Ernst & Young’s financial model provides a reasonable framework for projecting the overall results of portfolio reengineering, such as the number of properties that would need to have their debt restructured and the related costs of insurance claims. In addition, as discussed in appendix III, we did not identify any substantive problems with Ernst & Young’s sampling and statistical methodology. However, some assumptions used in Ernst & Young’s financial model may not reflect the way in which insured Section 8 properties would actually be affected by portfolio reengineering. Our comparison of Ernst & Young’s data with the information we obtained on 10 case study properties raised questions about one key data element—the estimated costs of deferred maintenance and capital needs. Ernst & Young’s financial model is a 10-year cash flow model that computes the net operating incomes for each property before, during, and after the rents are set at market levels. That is, the model produces annual revenues, operating costs, and replacement reserve requirements (i.e., amounts that need to be set aside to cover future capital needs) and calculates net income on the basis of these amounts. The initial cash flows are based on data, adjusted for inflation, from the properties’ audited financial statements for 1994. The model assumes that income and tenant payments will grow by 3 percent a year and expenses by 4 percent a year. The higher growth rate for expenses was intended to provide more conservative estimates. The model assumes that market rents will be phased in over 9 months, beginning 3 months after the first Section 8 contract for each property expires, and that the operating costs for some properties will be reduced. HUD’s rental assistance, included in the model as part of revenues, is based on the existing project-based subsidies, adjusted for inflation, until 3 months after the first contract expires. 
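As an aside, the projection mechanics just described can be sketched as follows. The starting revenue, expense, and reserve figures are invented, the growth rates are those stated above (3 percent for revenues and tenant payments, 4 percent for expenses), and the nine-month phase-in of market rents is omitted for simplicity.

    # Simplified 10-year projection of net operating income (NOI) and adjusted NOI,
    # using the growth assumptions described in the text: revenues +3% a year,
    # expenses +4% a year. Starting values and the reserve amount are hypothetical.

    REVENUE_GROWTH = 0.03
    EXPENSE_GROWTH = 0.04

    def project_cash_flows(revenue, expenses, annual_reserve, years=10):
        rows = []
        for year in range(1, years + 1):
            noi = revenue - expenses              # net operating income
            adjusted_noi = noi - annual_reserve   # after the replacement reserve requirement
            rows.append((year, round(noi), round(adjusted_noi)))
            revenue *= 1 + REVENUE_GROWTH
            expenses *= 1 + EXPENSE_GROWTH
        return rows

    # Hypothetical 100-unit property: $600,000 revenue, $420,000 expenses,
    # $30,000 annual replacement reserve.
    for year, noi, adj in project_cash_flows(600_000, 420_000, 30_000):
        print(f"year {year:2d}: NOI {noi:>8,}  adjusted NOI {adj:>8,}")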
After the restructuring, the model assumes, residents will receive tenant-based assistance (certificates or vouchers) covering the estimated market rents at the properties. However, the assistance is no longer linked to specific properties, and the residents could choose to relocate. For each of the 10 years covered, the model computes both a net operating income and an adjusted net operating income. The net operating income represents the total revenues less the operating expenses, whereas the adjusted net operating income is further reduced by the amount required annually for a replacement reserve. Each property is then subjected to two tests of its loan’s performance when the first Section 8 contract expires to determine whether the cash flows provide sufficient income for the property to cover (1) the current debt service (mortgage payment) excluding any interest subsidy currently available and (2) the immediate deferred maintenance and short-term capital backlog costs. If a loan passes both tests, it is categorized as performing. Loans that are not classified as performing are analyzed further to determine whether their appropriate portfolio reengineering category is debt restructure, full write-off, or nonperforming. In general, Ernst & Young’s financial model provides a reasonable framework for analyzing the impact of HUD’s portfolio reengineering proposal on the insured Section 8 portfolio. However, some of its assumptions may not reflect the way in which insured Section 8 properties would actually be affected by portfolio reengineering. In addition, some of the model’s assumptions may not be apparent to readers of Ernst & Young’s May 1996 report. The market rents projected for 10 case study properties by Ernst & Young and by the contract appraisers were generally consistent. However, our comparison of the immediate deferred maintenance needs identified at the 10 properties by Ernst & Young and by the contract appraisers and our discussions with the owners or managers of the properties indicated that the study’s results may not always accurately reflect conditions at these properties. More detailed discussions of the differences between Ernst & Young’s and the contract appraisers’ assessments of the 10 case study properties are presented in appendixes I and V. As part of our review, we contracted with three licensed real estate appraisal firms for assessments of 10 HUD-insured Section 8 properties included in Ernst & Young’s sample. The appraisers’ tasks included studying the local markets in which the properties are located and determining what market rents the properties would be able to command. As table 2.4 indicates, for 8 of the 10 properties, the estimated market rents that Ernst & Young developed in its market surveys are reasonably close to (i.e., within 10 percent of) the rents developed by the appraisers we retained. For two properties, however, there are significant differences. Ernst & Young’s estimates of the market rents for St. Andrew’s Manor and Terrace Gardens are more than 20 percent lower than the contract appraisers’ estimates. This difference reflects, in large measure, Ernst & Young’s use of a different methodology to estimate market rents in neighborhoods consisting primarily of assisted properties—where few, if any, comparable properties with market rents were identified. 
In these cases, Ernst & Young assumed that because the neighborhoods were essentially maintained by non-market-driven forces, there were no markets for unassisted rents other than those controlled by the local housing authorities. Thus, Ernst & Young based its estimates of market rents on the rents subsidized by the local housing authorities. In contrast, the appraisers believed that there were comparable properties that could be used to estimate market rents for the two properties. While Ernst & Young and the contract appraisers arrived at generally consistent estimates of market rents for the 10 case study properties, they developed widely differing estimates of the properties’ capital needs. In general, Ernst & Young projected significantly higher costs. These differences occurred, in part, because Ernst & Young and the contract appraisers used different approaches for assessing capital needs. Ernst & Young retained a firm to conduct engineering studies at the properties. As discussed earlier, Ernst & Young’s assessment of a property’s capital needs included three components: the immediate deferred maintenance, short-term capital backlog, and long-term capital backlog. In the model, Ernst & Young assumed that funding would be provided to cover the immediate deferred maintenance and short-term capital needs at the time the property was reengineered (up to a full write-down of the property’s mortgage). The short-term capital needs cover the “estimated expired costs” rather than the full replacement costs of the items with remaining useful lives of 5 years or less. For example, a $15,000 roof with an original useful life of 15 years would, when it was 11 years old, have estimated expired costs of $11,000, which would be included in the property’s short-term capital backlog. The additional funding needed to replace the roof in 4 years would come from annual replacement reserves factored into the property’s annual cash flows. Thus, the reserves cover part of the short-term capital backlog and the replacement of systems and components that have remaining useful lives of more than 5 years. Ernst & Young’s approach for estimating capital needs involved reviewing a property’s major subsystems and unit components and then estimating, for each, the original useful life, remaining useful life, replacement cost, and need for repairs or replacement. This information was used to calculate the property’s immediate deferred maintenance needs and short-term capital backlog. According to Ernst & Young, the estimates included in the study represent the (1) costs for items that require immediate attention, (2) costs for items that may still be operable but have outlasted their planned useful life, and (3) expired costs (depreciation) for items that are expected to need replacement in the next 5 years. In general, the contract appraisers based their estimate of a property’s capital needs on their assessment of the repairs and renovations required for the property to operate as a market-rate property after reengineering. This approach relies primarily on an evaluation of the property relative to others in the same market, whereas Ernst & Young’s approach depends, in part, on useful-life standards. The appraisers based their assessment on their review of the property’s previous physical inspections and on their own physical inspection. The appraisers were not, however, tasked with performing engineering studies.
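The “estimated expired cost” arithmetic in the roof example above amounts to prorating an item’s replacement cost by the share of its useful life already used. A small sketch follows; the roof figures come from the text, while the furnace line is invented for illustration.

    # Expired-cost proration as described in the text: an item's expired cost is its
    # replacement cost multiplied by the fraction of its original useful life used up.

    def expired_cost(replacement_cost, original_life_years, age_years):
        used_fraction = min(age_years / original_life_years, 1.0)
        return replacement_cost * used_fraction

    # Roof example from the text: $15,000 replacement cost, 15-year life, 11 years old.
    print(expired_cost(15_000, 15, 11))   # -> 11000.0

    # Hypothetical furnace: $8,000 replacement cost, 20-year life, 12 years old.
    print(expired_cost(8_000, 20, 12))    # -> 4800.0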
Because of these methodological differences, direct comparisons of Ernst & Young’s and the appraisers’ estimates are difficult. In our view, the most comparable estimates are for immediate deferred maintenance needs; these estimates for 10 properties appear in table 2.5. Ernst & Young’s estimates are taken from the firm’s May 2, 1996, report. In commenting on this comparison, Ernst & Young officials indicated that their firm’s estimates of deferred maintenance needs are likely to be higher than those of the contract appraisers because they include costs not only for the major subsystems and components that need major repair or are in poor condition but also for items such as appliances and heating and air-conditioning systems that are still functioning but have outlasted their useful life. Ernst & Young’s estimates assumed that investors or lenders would want to replace such items. To demonstrate the effect of this assumption on their firm’s estimates of deferred maintenance needs, Ernst & Young officials provided us with an analysis showing how the exclusion of such items would change the estimates. This additional information showed that using useful-life standards generally resulted in higher cost estimates than using, as the contract appraisers did, the actual condition of systems and components and comparisons of the appraised property with other properties in the local real estate market. Table 2.6 adjusts Ernst & Young’s estimates for the 10 properties’ immediate deferred maintenance needs, eliminating the global assumption that items exceeding their estimated useful life would be replaced. This adjustment reduced Ernst & Young’s estimates, in some cases substantially. For example, the estimate for Murdock Terrace in Dallas, Texas, was adjusted from $5.7 million to $2.1 million when the replacement costs for items that were still operable but had exceeded their useful life were excluded. Even after adjusting Ernst & Young’s estimates, we found that, for some properties, Ernst & Young’s estimates still differed substantially from those of the contract appraisers. For example, Ernst & Young’s estimate of the immediate deferred maintenance needs at Jacksonville Townhouse in Jacksonville, Florida, remained at $797,402, while the appraiser did not identify any deferred maintenance needs. The property’s owner and manager also strongly disagreed with Ernst & Young’s cost estimates for immediate deferred maintenance, especially the estimate of $360,018 to replace heating and air-conditioning systems. The manager said the main system is only 3 years old and is covered by a maintenance contract and that the cost of work in the individual units, which Ernst & Young had estimated at $295,492 (or $3,545 a unit), is more than four times higher than necessary. He said that the heating and air-conditioning systems had recently been replaced in 35 units at a cost of $800 per unit. Because the estimates of capital needs that Ernst & Young presented in its study were difficult to compare directly with those of the contract appraisers, we provided both estimates to the owners and managers of the 10 case study properties for their review and comment. The owners and managers generally disagreed with Ernst & Young’s estimates. For the most part, they said that the estimates were too high and did not accurately reflect the physical condition of the properties. 
In some cases, the owners and managers questioned some of the underlying assumptions used in developing the estimates and identified cost estimates that they considered too high—in some cases, almost twice as high as they would estimate. For example, one property manager agreed that all of the property’s operating systems needed major rehabilitation. However, his detailed estimate of about $3 million, including a $500,000 allowance for overruns, was about 50 percent lower than Ernst & Young’s estimate of nearly $5.7 million. The contract appraiser for that property also believed that Ernst & Young’s estimate was excessive. He stated that the neighborhood’s standards and rental rates would not justify the renovation costs identified by Ernst & Young. When Ernst & Young adjusted its estimate by removing the replacement costs of items that had exceeded their useful life but were still in working condition, the revised estimate of $2.1 million was more in line with the property manager’s assessment of the property’s physical condition. Another property manager said that Ernst & Young’s estimate was “grossly overstated and in no way accurately represent the condition of the property” because it did not appear to reflect a $2 million rehabilitation that was done in 1991 and 1992. While Ernst & Young estimated immediate deferred maintenance needs of $362,349 for this property, the manager said there were no deferred maintenance needs and the contract appraiser identified no deferred maintenance or other repairs needed for the property to compete in the marketplace. Ernst & Young’s adjusted estimate of the immediate deferred maintenance needs for this property was $128,535. According to an official from the engineering firm retained by Ernst & Young, with whom we discussed the owners’ and managers’ assessments of Ernst & Young’s cost estimates, the owners’ cost estimates may be understated. He said, for example, that current owners may be less concerned than new investors with comparing their property to others in the surrounding market and may therefore not plan for some changes that new owners would want to make. He said the estimates used in Ernst & Young’s study represent the costs of meeting the standards of the industry and of the surrounding market. Other comments provided by owners and property managers and our review of the estimates indicated that Ernst & Young’s estimates may not take into account all of the ongoing maintenance at the properties, such as the cyclical replacement of carpets and other unit items, preventive maintenance performed under contracts, recent improvements, and improvements that were under way at the time of Ernst & Young’s inspections. For example, one manager said that Ernst & Young’s study did not reflect the actual condition of the property’s heating and air-conditioning systems because it included the full replacement cost of $253,000 for the heating system in its estimate of the property’s immediate deferred maintenance needs. However, the manager noted that when the engineering firm retained by Ernst & Young inspected the property in January 1996, the system’s renovation was well under way. The manager said the renovated heating system has a life expectancy of 30 years. According to Ernst & Young, this difference occurred because the study used a “point-in-time” methodology. This approach included only improvements that had been substantially completed at the time of the inspection and specifically excluded those that were planned or ongoing. 
Consequently, even though the inspector noted that work on the heating system was occurring in most units and would be completed within 2 months, the estimate does not reflect this work because it was not substantially completed. We identified some additional limitations in Ernst & Young’s approach that may affect the accuracy of the firm’s capital needs estimates. For example, officials from Ernst & Young and the engineering firm acknowledged that although they intended to base these estimates on inspections of 10 percent of each property’s randomly selected units, they were not always able to do so because of management, tenant, or timing considerations. At 7 of the 10 case study properties, Ernst & Young’s inspectors examined fewer than 10 percent of the units. Also, Ernst & Young calculated cost estimates for unit items, such as cabinets, appliances, and heating and air-conditioning components, by multiplying the estimated immediate cost per unit by the total number of units at the property. However, in some cases this approach may not have been reliable because of differences among units. For example, at one of the case study properties, which has 112 apartments with kitchens and 92 assisted living units without kitchens, Ernst & Young’s estimate of the property’s immediate deferred maintenance needs included the costs of replacing kitchen cabinets in all of the units. Through our discussions with representatives of multifamily housing lending organizations and other multifamily housing industry officials and through our own analysis, we identified some assumptions used in the financial model that may not (1) reflect the way in which insured Section 8 properties would actually be affected by portfolio reengineering or (2) be apparent to readers of Ernst & Young’s May 1996 report but are important to understanding the study’s results. Ernst & Young’s assumptions about the transition period for reengineered properties may be overly optimistic. During this period, a reengineered property changes from an assisted property with rental subsidies linked to its units to an unsubsidized property competing in the marketplace for residents. The model estimates that the entire transition will be completed within a year after the first Section 8 contract expires. In addition, the model assumes that during this year, the property’s rental income will move incrementally towards stabilization over 9 months. The lenders with whom we discussed the reasonableness of the model’s major assumptions considered a transition period of 1 to 2 years more likely. They also anticipated a less stable transition than the model assumed, with less income and more costs. An Ernst & Young official told us that the 9-month period was designed to reflect an average transition period for reengineered properties. While recognizing that the transition period for some properties would be longer, he believed that for others it could be shorter. In Ernst & Young’s financial model, the first test of a loan’s performance under portfolio reengineering assumes the elimination of the interest subsidy that many older assisted properties currently receive. Specifically, the model compares the net operating income under market rents with the current debt service, excluding any interest subsidy provided with the current loan. This assumption puts fewer loans in the performing category than would appear there if the subsidies were assumed to continue. 
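The effect of this assumption is easiest to see with a simple debt service comparison. In the sketch below, the loan amount, note rate, and income figure are hypothetical; the 1 percent subsidized rate reflects the typical subsidy described in the following paragraph.

    # Illustrative effect of excluding the interest subsidy from the first loan test.
    # With the subsidy, the owner pays an effective 1% rate; without it, the full
    # note rate applies, so required debt service is higher and fewer loans pass.
    # Loan amount, note rate, term, and NOI are hypothetical.

    def annual_debt_service(principal, annual_rate, years):
        """Level annual payment on a fully amortizing loan."""
        factor = annual_rate / (1 - (1 + annual_rate) ** -years)
        return principal * factor

    loan, note_rate, subsidized_rate, term = 2_500_000, 0.08, 0.01, 40
    noi_at_market_rents = 180_000

    ds_unsubsidized = annual_debt_service(loan, note_rate, term)        # roughly $210,000
    ds_subsidized = annual_debt_service(loan, subsidized_rate, term)    # roughly $76,000

    print("passes first test without subsidy:", noi_at_market_rents >= ds_unsubsidized)  # False
    print("passes first test with subsidy:   ", noi_at_market_rents >= ds_subsidized)    # True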
According to Ernst & Young, the model excludes the current interest subsidies under portfolio reengineering because it assumes that subsidies would not exist under true market conditions. However, such an assumption implies a change in the terms of loans to which both borrowers and lenders have agreed. Hence, while this assumption might be appropriate for restructuring loans on which defaults would occur if the terms of the loans were not changed, it is not, in our view, appropriate for identifying the loans that need restructuring. As long as the borrowers continue to meet the terms of these loans, HUD cannot, as an official indicated, unilaterally discontinue the interest subsidy payments on them. Typically, the interest subsidies reduce interest payments on the loans to 1 percent. If Ernst & Young’s model assumed that interest subsidies would continue, some additional properties would be classified as performing. This change would decrease the model’s estimates of the claims costs associated with portfolio reengineering but would entail the Department’s continuing to incur interest subsidy costs. The debt service coverage ratios, loan-to-value ratios, and amortization periods used in the model provide for higher levels of mortgage debt than the lenders we contacted generally understood to be available. If their understanding is correct, the model’s assumptions would provide for lower claims than might actually result. For example, the lenders we contacted generally believed that most lenders would want to see at least 1 year’s worth of operations at the stabilized level before approving a loan. Without such a stabilized period of operations, they believed, many commercial lenders would consider the properties too risky to provide long-term commercial financing at standard terms. Some officials believed that venture capital firms might be the only firms interested in properties whose operations had not stabilized after reengineering. In any case, they believed that the financing terms available for reengineered properties without proven track records would be more conservative than standard financing terms. The lenders believed that the 1.20 debt service coverage ratio and the 1.0 loan-to-value ratio used in the model would not likely be available for loans on many reengineered properties, particularly given the uncertainties concerning (1) how these properties would operate in a market-rate environment and (2) whether, what type of, and what levels of Section 8 assistance would be available in the future. They believed that higher debt service coverage ratios and lower loan-to-value ratios would be more likely. In addition, they believed that 30-year loan amortizations might not be available. The lenders indicated that 25-year loan amortizations were typical for commercial loans. In commenting on the views of the lenders we contacted, an Ernst & Young official stated that the underwriting criteria would take into account not only the debt service coverage and loan-to-value ratios and the amortization periods but also the level of capital provided through the short-term capital needs estimates and annual replacement reserves, as well as the interest rates, operating expenses, and revenues estimates. He believed that these factors would provide lenders with more comfort about the ability of properties to make the transition to a market environment. 
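The practical stakes of these underwriting terms show up in a loan-sizing calculation. The sketch below compares the mortgage supportable under the model’s stated 1.20 coverage ratio and 30-year amortization with a more conservative 1.30 ratio (our assumption) and the 25-year amortization the lenders described as typical; the interest rate and income figure are hypothetical.

    # Sizing the supportable mortgage from net operating income (NOI), a debt
    # service coverage ratio (DSCR), an interest rate, and an amortization period.
    # The 1.20/30-year case reflects the model's stated terms; the 1.30/25-year
    # case is an assumed, more conservative alternative. Rate and NOI are hypothetical.

    def supportable_loan(noi, dscr, annual_rate, amortization_years):
        max_debt_service = noi / dscr
        annuity_factor = (1 - (1 + annual_rate) ** -amortization_years) / annual_rate
        return max_debt_service * annuity_factor

    noi, rate = 300_000, 0.08

    model_terms = supportable_loan(noi, 1.20, rate, 30)    # roughly $2.81 million
    lender_terms = supportable_loan(noi, 1.30, rate, 25)   # roughly $2.46 million

    print(f"model terms  (1.20 DSCR, 30 yr): ${model_terms:,.0f}")
    print(f"lender terms (1.30 DSCR, 25 yr): ${lender_terms:,.0f}")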
When obtaining the views of the lenders, we provided them with information on the full range of underwriting assumptions used by Ernst & Young, including those relating to the funding for capital needs, interest rates, revenues, and operating expenses. The Ernst & Young official also noted that Ernst & Young’s terms assumed the Congress would continue to subsidize residents with Section 8 tenant-based assistance under a multiyear program. Finally, the Ernst & Young official noted that the financial model used 1.0 as a loan-to-value ratio so that the model would calculate the mortgage amounts for reengineered properties on the basis of their debt service coverage ratios rather than their loan-to-value ratios. The model assumes that replacement reserves must cover the estimated annual replacement costs for all major property systems. In contrast, the lenders we spoke with generally require replacement reserves for capital items for a set period of time—such as over the life of the loan or over the life of the loan plus 2 years. Thus, Ernst & Young’s approach requires higher replacement reserves than the private sector may require. The requirements for replacement reserves affect annual cash flows and the funding available to support mortgage debt. For example, in Ernst & Young’s study, if a property’s hot water systems were evaluated to have a remaining useful life of 25 years, the annual replacement reserve would include prorated amounts for the full cost of replacing the hot water systems. However, if the restructured loan were for 15 years, the lenders we spoke with believed that annual funding for replacing the hot water systems typically would not be required. Some replacement reserve items funded in Ernst & Young’s study, such as walls and foundations and parking lots, have useful lives of more than 50 years. The Section 8 costs for reengineering are estimated only for the residents who currently receive Section 8 project-based assistance. In contrast to HUD’s original proposal, which was the basis of Ernst & Young’s study, HUD’s current proposal includes the residents who do not receive Section 8 project-based assistance but would qualify for assistance when market rents were applied. Any estimates of the outcomes and costs of portfolio reengineering are likely to be subject to some error because they rely on predicting the reactions of numerous owners, lenders, and residents. In addition, as discussed above, some assumptions used in Ernst & Young’s financial model may not accurately reflect the effects of portfolio reengineering on insured Section 8 properties or, at a minimum, are subject to debate. To assess the extent to which the use of different assumptions affects the results of Ernst & Young’s study, we performed sensitivity analyses of Ernst & Young’s model using two sets of revised assumptions that we developed through our discussions with multifamily housing industry officials. One scenario reflects assumptions that are more optimistic in terms of the cost to the government of portfolio reengineering. The other uses assumptions that are more conservative or pessimistic. Taken together, these sets of assumptions are intended to reflect the range of potential outcomes resulting from the basic policy assumptions used in Ernst & Young’s study. We recognize that using alternative policy assumptions could produce different outcomes. Appendix IV provides information on the assumptions used in Ernst & Young’s study and in our optimistic and pessimistic analyses. 
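As an aside, the difference between the two reserve conventions discussed above can be illustrated as follows. The item list and costs are invented; the 25-year hot water system and 15-year loan term mirror the example in the text.

    # Annual replacement reserve under two conventions (illustrative figures only).
    # Study-style: prorate each item's replacement cost over its remaining useful
    # life, regardless of the loan term.
    # Lender-style: fund only items expected to need replacement within the loan term.

    items = [  # (name, replacement_cost, remaining_useful_life_years)
        ("roofs",             60_000, 10),
        ("hot water systems", 50_000, 25),   # example from the text
        ("parking lot",       80_000, 50),
    ]
    LOAN_TERM_YEARS = 15

    study_style = sum(cost / life for _, cost, life in items)
    lender_style = sum(cost / life for _, cost, life in items if life <= LOAN_TERM_YEARS)

    print(f"study-style annual reserve:  ${study_style:,.0f}")   # all three items
    print(f"lender-style annual reserve: ${lender_style:,.0f}")  # roofs only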
Because the owners and managers and the contract appraisers generally believed that the capital costs for the 10 case study properties were significantly lower than those Ernst & Young estimated, we reduced all capital costs used by Ernst & Young by 25 percent in our optimistic scenario. We did not adjust Ernst & Young’s capital costs in the pessimistic scenario. As table 2.7 indicates, under both the optimistic and the pessimistic alternatives, as well as under Ernst & Young’s original assumptions, a substantial number of properties are likely to do well and other properties will have difficulty sustaining operations. For example, under the optimistic assumptions, between 24 and 30 percent of the properties fall into the performing category, but between 15 and 20 percent fall into the two bottom categories—full write-off or nonperforming. Under the pessimistic assumptions, between 10 and 14 percent are in the performing category and between 39 and 46 percent are in the full write-off or nonperforming category. As table 2.8 indicates, the costs of FHA insurance claims associated with portfolio reengineering are estimated to be between $4.9 billion and $5.9 billion under optimistic assumptions and between $8.2 billion and $9.4 billion under pessimistic ones. Because we used the same market rents for our optimistic scenario as Ernst & Young assumed, the 10-year costs of Section 8 assistance are the same. However, the 5-percent reduction in rents assumed in the pessimistic scenario lowered these 10-year costs by between $0.9 billion and $1.0 billion. As previously discussed, these subsidy estimates assume that loans are restructured when their first Section 8 contract expires. However, as noted, HUD is now proposing a proactive approach under which owners would agree to restructure their loans before the first Section 8 contract expires. In addition, HUD is proposing to initially restructure only loans for properties whose assisted rents exceed market rents, thereby providing for decreases in subsidies. Although questions have arisen about some of the data and assumptions used in Ernst & Young’s study, we nevertheless believe that the study represents an important step in understanding the effects of reengineering on, and the condition of, the properties in HUD’s insured Section 8 portfolio. Quantitative, statistically reliable information based on case-by-case analyses of the properties, such as that produced by the study, can assist the Congress in evaluating HUD’s proposal and comparing it to other reengineering alternatives. As the Congress and HUD continue to address issues associated with portfolio reengineering (see ch. 3), we believe that opportunities exist for HUD to make further use of Ernst & Young’s data and to carry out additional analyses of the insured Section 8 portfolio. One important task will be to incorporate the results of Ernst & Young’s study into HUD’s budget estimates under portfolio reengineering. Other areas that merit additional analysis are the effects of including or excluding various segments of the portfolio in reengineering; the cost implications of continuing versus discontinuing FHA’s insurance after reengineering and of using project-based versus tenant-based assistance; and the options for dealing with those properties that fall into the nonperforming or full write-off categories after reengineering.
In addition, given the uncertainties about the capital costs used in the study, further analysis of the physical condition and related capital needs of the insured Section 8 portfolio is needed. The update to HUD’s 1990 multifamily stock study, currently under way, should help to address this open issue. In commenting on a draft of this report, HUD said the report provided an excellent summary of the portfolio reengineering proposal and its likely impact on the insured multifamily portfolio. HUD also noted, among other things, that differences in the estimates of deferred maintenance and capital needs developed by Ernst & Young and by the contract appraisers are due to differences in the methodologies used. (HUD’s comments are reproduced in app. VI). While agreeing that differences in the estimates are due, in part, to differences in the methodologies, we continue to question certain aspects of Ernst & Young’s approach, including (1) the assumption that working systems and components will be replaced if their estimated useful lives have expired and (2) the inclusion in the capital needs estimates of the cost of work that is under way but not yet completed. The fact that Ernst & Young’s estimates for 7 of the 10 case study properties that GAO reviewed were based on inspections of fewer than 10 percent of each property’s units also adds to the uncertainty of the estimates. For these reasons, as noted in our conclusions, we believe that further analysis is needed of the physical condition and capital needs of the insured Section 8 portfolio. HUD’s comments also indicate that HUD inferred from the comments provided by the lenders we contacted that they were not fully informed of the methodology and assumptions used in the Ernst & Young model. In fact, we provided the lenders we spoke to with information on the full range of underwriting assumptions used by Ernst & Young. In addition, HUD commented that the estimated costs of restructuring HUD’s multifamily portfolio that we derived from Ernst & Young’s model do not conform with federal budget rules and scoring methodology and do not reflect all aspects of HUD’s current portfolio reengineering proposal. As stated in the report, the data we present on the cost of restructuring HUD’s multifamily portfolio are intended to reflect the results of Ernst & Young’s financial model, including the assumptions used by Ernst & Young. We recognize that the cost estimates do not conform with federal budget rules and scoring methodology and do not reflect all aspects of HUD’s current portfolio reengineering proposal. Both of these points are discussed earlier in the chapter and were clearly stated in the copy of the draft provided to HUD for comment. The Congress faces a number of significant and complex issues in evaluating HUD’s portfolio reengineering proposal. How these issues are resolved will, to a large degree, determine the extent to which the problems that have long plagued the portfolio are corrected and prevented from recurring, as well as the extent to which restructuring results in savings or costs to the government. Key issues include the following: To what extent should FHA provide insurance for restructured loans? Should rental assistance be project-based or tenant-based? What protection should be given to households at reengineered properties? To what extent should the federal government finance the costs of rehabilitation? What actions should be taken to address problems in HUD’s management of the insured Section 8 portfolio? 
To what extent should properties with assisted rents below market rents be included in portfolio reengineering? What processes should be used to restructure mortgages? What should be done to help the large number of properties that would have difficulty sustaining operations? To what extent should the government provide tax relief to owners affected by portfolio reengineering? Will the recently enacted portfolio reengineering demonstration program cover the full range of options and outcomes? An issue with short- and long-term cost implications is whether HUD should continue to provide FHA insurance for the restructured loans and, if so, under what terms and conditions. If HUD were to discontinue the insurance when restructuring the loans, as it originally planned, it would likely incur higher debt restructuring costs because lenders would set the terms of the new loans (e.g., interest rates) to reflect the risk of default that they would now assume. The primary benefits of discontinuing FHA insurance are that (1) the government’s dual role as mortgage insurer and rental subsidy provider would end, eliminating the management conflicts associated with this dual role, and (2) the risk of default borne by the government would end as the loans were restructured. If FHA insurance were continued, another issue is whether it would need to be provided for the whole portfolio or could be used selectively. The government could, for example, insure loans only when owners could not obtain reasonable financing without insurance. Also, if FHA insurance were continued, the terms and conditions under which it is provided would affect the government’s future costs. Some lenders have indicated that short-term (or “bridge”) financing insured by FHA might be needed while the properties make the transition to market conditions, after which time conventional financing at reasonable terms would be available. Under such an arrangement, the government could insure loans for 3 to 5 years, instead of bearing the risk of default, as it now does, for the life of the loans—generally 40 years. Finally, legislation could require a portion of the risk of default, now borne entirely by the government, to be assumed by state housing finance agencies or private-sector parties. One of the key issues to be decided in addressing the problems of the insured Section 8 portfolio is whether to continue project-based subsidies, convert the portfolio to tenant-based assistance, or combine the two types of assistance. On the one hand, using tenant-based assistance can make projects more subject to the forces of the real estate market, potentially helping to control housing costs, foster housing quality, and promote residents’ choice. On the other hand, using project-based assistance, which links subsidies directly to rental units, can help sustain properties in housing markets that have difficulty supporting unsubsidized rental housing, such as inner-city and rural locations. In addition, residents who would likely have difficulty finding suitable alternative housing, such as the elderly or disabled and those living in a tight housing market, might prefer project-based assistance to the extent that it would give them greater assurance of being able to remain in their current residence. If a decision is made to convert the Section 8 program from project-based to tenant-based assistance as part of portfolio reengineering, decisions must also be made about whether to protect the current residents from displacement. 
HUD’s April 1996 reengineering strategy contains several plans to protect the residents affected by rent increases at insured properties. For example, the residents of Section 8 units that were converted from project-based to tenant-based assistance would receive an enhanced voucher to pay the difference between 30 percent of their household’s adjusted income and the market rent for their unit even if the market rent exceeded the area’s fair market rent ceiling. The residents of reengineered properties who live in units without project-based subsidies would receive similar assistance if reengineering increased their rent to more than 30 percent of their household’s adjusted income. Such provisions would limit residents’ rent burden and reduce the likelihood of displacement, but they would also lower the anticipated savings in assistance costs, at least in the short run. The cost estimates in Ernst & Young’s report assume that HUD would continue to assist the residents of currently subsidized units even if the market rent exceeded the fair market rent set by HUD. However, the report’s cost estimates do not include any allowance for assisting the residents of currently unsubsidized units. Who should pay for needed repairs, and how much, is another important issue in setting restructuring policy. As discussed previously, Ernst & Young’s study found substantial unfunded immediate deferred maintenance and short-term capital replacement needs across the insured Section 8 portfolio, particularly among the older assisted properties. Ernst & Young’s data indicate that between 22 and 29 percent of the properties in the portfolio could not cover their immediate deferred maintenance and short-term capital needs even if their mortgage debt were fully written off. HUD has proposed to use the affected properties’ reserve funds and, as necessary, claims against FHA’s insurance funds to pay for a substantial portion of the rehabilitation and deferred maintenance costs associated with restructuring. Others have suggested that HUD use a variety of tools—such as raising rents, restructuring debt, and providing direct grants—but that dollar limits be set on the federal government’s payment per unit, with the expectation that some other source, such as the owner or investor, will pay any remaining costs. A key cause of the current problems affecting the insured Section 8 portfolio has been HUD’s inadequate management of the portfolio. As discussed in chapter 1, weaknesses in HUD’s oversight and management have allowed physical and financial problems at a number of the multifamily properties insured by HUD to go undetected or uncorrected. HUD’s original proposal sought to address these problems by subjecting the properties to the disciplines of the commercial market by converting project-based subsidies to tenant-based assistance; adjusting rents to market levels; and refinancing existing insured mortgages with smaller, uninsured mortgages, if necessary, for the properties to operate at the new rents. However, to the extent that the final provisions of reengineering perpetuate the use of FHA insurance and project-based subsidies, HUD’s ability to manage the portfolio will remain a key concern. Other means will have to be found to address the limitations impeding HUD’s management of the portfolio, particularly in light of the planned staff reductions that will further strain HUD’s management capacity. 
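The enhanced voucher arithmetic described above is straightforward, and the short sketch below restates it with invented figures. The 30 percent of adjusted income standard comes from the text; the income and rent amounts are hypothetical.

```python
# Hypothetical illustration of the enhanced voucher arithmetic described above:
# the household pays 30 percent of its monthly adjusted income, and the voucher
# covers the difference up to the unit's market rent, even where that rent
# exceeds the area's fair market rent ceiling. Dollar amounts are invented.

def enhanced_voucher(monthly_adjusted_income, market_rent):
    tenant_share = 0.30 * monthly_adjusted_income
    return max(market_rent - tenant_share, 0)

# A household with $1,100 in monthly adjusted income whose unit's market rent
# is $700 after restructuring would receive 700 - 330 = 370 per month.
print(enhanced_voucher(1_100, 700))
```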
Deciding which properties to include in portfolio reengineering will likely involve trade-offs between reducing the high costs of subsidies, on the one hand, and improving the poor physical condition of the properties and lowering the government’s exposure to default, on the other hand. Reengineering only those properties with rents above market levels would produce the greatest savings in subsidy costs. Yet HUD has indicated that also including those properties with rents currently below market levels could help improve these properties’ physical and financial condition and reduce the likelihood of default. However, including such properties would decrease the estimated savings in Section 8 subsidy costs. Although HUD’s latest proposal would initially focus on properties with above-market rents, it notes that many of the buildings with below-market rents are in poor condition or have significant amounts of deferred maintenance that will need to be addressed at some point. Selecting a mortgage restructuring process that is feasible and balances the interests of the various stakeholders will be an important but difficult task. Various approaches have been contemplated, including the payment of full or partial insurance claims by HUD, the sale of mortgages, and the use of third parties or joint ventures to design and implement specific restructuring actions at each property. Because of concerns about HUD’s ability to carry out the restructuring process in house, HUD and others envision relying heavily on third parties, such as state housing financing agencies or teams composed of representatives from these agencies, other state and local government entities, nonprofit organizations, asset managers, and capital partners. These third parties would be empowered to act on HUD’s behalf, and the terms of the restructuring arrangements that they work out could to a large extent determine the costs to, and future effects of restructuring on, stakeholders such as the federal government, property owners and investors, mortgage lenders, residents, and state and local government housing agencies. Some, however, have questioned whether third parties would give adequate attention to owners’ interests or to housing’s public policy objectives. Despite these questions, HUD believes that third-party arrangements could be structured to align third parties’ financial interests with those of the federal government to help minimize claims costs. According to Ernst & Young’s assessment, between 22 and 29 percent of HUD’s insured portfolio would have difficulty sustaining operations if market rents replaced assisted rents. Furthermore, between 11 and 15 percent of the portfolio would not even be able to cover operating costs at market rents. If these properties did not receive additional financial assistance, a large number of low-income residents would face displacement. While HUD has not yet developed specific plans for addressing the problems at these properties, different approaches may be needed, depending on the circumstances at individual properties. For example, properties in good condition in tight housing markets may warrant one approach, while properties in poor condition in weak or average housing markets may warrant another. Further analysis of these properties should assist the Department in formulating strategies for addressing their problems. HUD’s portfolio reengineering proposal would be likely to have tax consequences for the owners of some projects. 
These tax consequences could result either from reductions in the properties’ mortgage principal (debt forgiveness) or from actions that would cause owners to lose their property (for example, as a result of foreclosure). We have not assessed the extent to which tax consequences would be likely to result from portfolio reengineering. However, HUD has stated its belief that tax consequences could be a barrier to getting owners to agree to reengineer their properties proactively. While HUD has not formulated a specific proposal for dealing with the tax consequences of portfolio reengineering, it has expressed its willingness to discuss with the Congress mechanisms to take into account the tax consequences of debt forgiveness for property owners who enter into restructuring agreements. The multifamily demonstration program that HUD recently received congressional authority to implement provides for limited testing of some aspects of HUD’s multifamily portfolio reengineering proposal. Such testing can provide needed data on the effects of reengineering on properties and residents, the approaches that may be used in implementing restructuring, and the costs to the government before a restructuring program is initiated on a broad scale. However, because the program is voluntary, it may not test the full spectrum of effects that portfolio reengineering could have or the full range of restructuring tools that the Department could use. For example, owners may be reluctant to participate in the program if HUD plans to enter into joint ventures with third parties because they may be concerned about losing their properties and/or suffering adverse tax consequences. Another potential limitation of the program is that, according to HUD, the funding provided to modify the multifamily loans may not be sufficient to cover the limited number of units authorized under the demonstration program. In September 1996, the Congress made changes to the demonstration program in legislation on HUD’s fiscal year 1997 appropriation (P.L. 104-204). HUD’s portfolio reengineering initiative recognizes a reality that has existed for some time—namely, that the value of many of the properties in the insured Section 8 portfolio is far lower than the mortgages on the properties suggest. Until now, this reality has not been recognized and the federal government has continued to subsidize the rents at many properties above the level that the properties could command in the commercial real estate market. As the Congress evaluates the options for addressing this situation, the fundamental problems that have affected the portfolio and their underlying causes will be important to consider. Any approach that is implemented should address not only the high costs of Section 8 subsidies but also the government’s high exposure to insurance loss, the poor physical condition of some of the properties, and the underlying causes of these long-standing problems with the portfolio. As the previous discussions of several key issues indicate, questions about the specific details of the reengineering process, such as which properties to include and whether or not to provide FHA insurance, will require weighing the likely effects of various options and the trade-offs involved when a proposed solution achieves progress in one area at the expense of another. 
Changes to the insured Section 8 portfolio should also be considered in the context of a long-range vision of the federal government’s role—and the size of that role, given the current budgetary climate—in providing housing assistance, and assistance generally, to low-income individuals. Addressing the problems of HUD’s insured multifamily portfolio will inevitably be costly and difficult, regardless of the specific approaches implemented. The overarching objective should be to implement the process as efficiently and cost-effectively as possible, recognizing not only the interests of the parties directly affected by restructuring but also the impact on the federal government and the American taxpayer.
GAO reviewed the Department of Housing and Urban Development's (HUD) proposals to reengineer its portfolio of insured Section 8 multifamily rental housing properties, focusing on the: (1) problems affecting the properties in the HUD insured Section 8 portfolio and HUD plans for addressing them; (2) results and reasonableness of a study performed by Ernst & Young to assess the effects of the HUD proposal on the portfolio's properties; and (3) key issues facing Congress in assessing the HUD proposal. GAO found that: (1) the HUD insured Section 8 portfolio suffers from high subsidy costs, high exposure to insurance loss, and the poor physical condition of some properties; (2) under the HUD mark-to-market proposal, property owners would set rents at market levels, and HUD would reduce mortgages as necessary to achieve positive cash flows, terminate Federal Housing Administration (FHA) mortgage insurance, and replace Section 8 project-based rental subsidies with portable tenant-based subsidies; (3) the Ernst & Young study concluded that under the reengineering proposal, about 80 percent of the properties would need to have their mortgages reduced to some degree and that between 22 and 29 percent of the properties would have difficulty sustaining operations even if their mortgages were totally written off; (4) the study data indicated that the cost to the government of writing down mortgages and addressing deferred maintenance needs at reengineered properties would be high; (5) based on Ernst & Young's assumptions, FHA insurance fund claims would be between $6 billion and $7 billion in present value over the next 10 years and subsidy costs would be comparable to the existing program's subsidy costs if all of the properties were reengineered when their Section 8 contracts expire; and (6) Ernst & Young's financial model was generally reasonable, but some assumptions about the properties' deferred maintenance needs were questionable and some financing assumptions may not reflect the way in which insured Section 8 properties would actually be affected by portfolio reengineering.
Over the last 15 years, the federal government’s increasing demand for IT has led to a dramatic rise in the number of federal data centers and a corresponding increase in operational costs. According to OMB, the federal government had 432 data centers in 1998 and more than 1,100 in 2009. Operating such a large number of centers is a significant cost to the federal government, including costs for hardware, software, real estate, and cooling. For example, according to the Environmental Protection Agency, the electricity cost to operate federal servers and data centers across the government is about $450 million annually. According to the Department of Energy, data center spaces can consume 100 to 200 times more electricity than a standard office space. According to OMB, reported server utilization rates as low as 5 percent and limited reuse of data centers within or across agencies lend further credence to the need to restructure federal data center operations to improve efficiency and reduce costs. Concerned about the size of the federal data center inventory and the potential to improve the efficiency, performance, and the environmental footprint of federal data center activities, OMB, under the direction of the Federal CIO, established FDCCI in February 2010. This initiative’s four high-level goals are to promote the use of “green IT” by reducing the overall energy and real estate footprint of government data centers; reduce the cost of data center hardware, software, and operations; increase the overall IT security posture of the government; and shift IT investments to more efficient computing platforms and technologies. As part of FDCCI, OMB required the 24 agencies to identify a senior, dedicated data center consolidation program manager to lead their agency’s consolidation efforts. In addition, agencies were required to submit an asset inventory baseline and other documents that would result in a plan for consolidating their data centers. The asset inventory baseline was to contain detailed information on each data center and identify the consolidation approach to be taken for each one. It would serve as the foundation for developing the final data center consolidation plan. The data center consolidation plan would serve as a technical road map and approach for achieving the targets for infrastructure utilization, energy efficiency, and cost efficiency. While OMB is primarily responsible for FDCCI, the agency designated two agency CIOs to be executive sponsors to lead the effort within the Federal CIO Council, the principal interagency forum to improve IT- related practices across the federal government. In addition, OMB identified two additional organizations to assist in managing and overseeing FDCCI: The GSA FDCCI Program Management Office is to support OMB in the planning, execution, management, and communications for FDCCI. The Data Center Consolidation Task Force is comprised of the data center consolidation program managers from each agency. According to its charter, the Task Force is critical to supporting collaboration across the FDCCI agencies, including identifying and disseminating key pieces of information, solutions, and processes that will help agencies in their consolidation efforts. In an effort to accelerate federal data center consolidation, OMB has directed agencies to use cloud computing as an approach to migrating or replacing systems with Internet-based services and resources. 
In December 2010, in its 25 Point IT Reform Plan, OMB identified cloud computing as having the potential to play a major part in achieving operational efficiencies in the federal government’s IT environment. According to OMB, cloud computing brings a wide range of benefits, including that it is (1) economical—a low initial investment is required to begin and additional investment is needed only as system use increases, (2) flexible—computing capacity can be quickly and easily added or subtracted, and (3) fast—long procurements are eliminated, while providing a greater selection of available services. To help achieve these benefits, OMB issued a “Cloud First” policy that required federal agencies to increase their use of cloud computing whenever a secure, reliable, and cost-effective cloud solution exists. However, in prior work we reported that agencies faced challenges in implementing cloud computing, including meeting federal security requirements that are unique to government agencies, such as continuous monitoring and maintaining an inventory of systems. (See GAO, Information Technology Reform: Progress Made but Future Cloud Computing Efforts Should be Better Planned, GAO-12-756 (Washington, D.C.: July 11, 2012), and Information Security: Federal Guidance Needed to Address Control Issues with Implementing Cloud Computing, GAO-10-513 (Washington, D.C.: May 27, 2010).) Agencies also noted that, because of the on-demand, scalable nature of cloud services, it can be difficult to define specific quantities and costs and, further, that these uncertainties make contracting and budgeting difficult due to the fluctuating costs associated with scalable and incremental cloud service procurements. Finally, agencies cited other challenges associated with obtaining guidance, and acquiring knowledge and expertise, among other things. More recently, in March 2013, OMB issued a memorandum documenting the integration of FDCCI with the PortfolioStat initiative. Launched by OMB in March 2012, PortfolioStat requires agencies to conduct an annual agency-wide IT portfolio review to, among other things, reduce commodity IT spending, demonstrate how their IT investments align with their missions and business functions, and make decisions on eliminating duplication. OMB’s March 2013 memorandum discusses OMB’s efforts to further the PortfolioStat initiative by incorporating several changes, such as consolidating previously collected IT-related plans, reports, and data submissions. The memorandum also establishes new agency reporting requirements and related time frames. Specifically, agencies are no longer required to submit the data center consolidation plans previously required under FDCCI. Rather, agencies are to submit information to OMB via three primary means—an information resources management strategic plan, an enterprise road map, and an integrated data collection channel. In July 2012, we issued a report on the status of FDCCI and found that, while agencies’ 2011 inventories and plans had improved as compared to their 2010 submissions, significant weaknesses still remained. Specifically, while all 24 agencies reported on their inventories to some extent, only 3 had submitted a complete inventory. The remaining 21 agency submissions had weaknesses in several areas. For example, while most agencies provided complete information on their virtualization efforts, network storage, and physical servers, 18 agencies did not provide complete data center information, such as data center type, gross floor area, and target date for closure. 
In particular, several agencies fully reported on gross floor area and closure information, but partially reported data center costs. In addition, 17 agencies did not provide full information on their IT facilities and energy usage. For example, the Department of Labor partially reported on total data center IT power capacity and average data center electricity usage and did not report any information on total data center power capacity. We also noted that 3 agencies had submitted their inventory using an outdated format, in part, because OMB had not publicly posted its revised guidance. Figure 1 provides an assessment of the completeness of agencies’ 2011 inventories, by key element. Officials from several agencies reported that some of the required information was unavailable at certain data center facilities. We reported that, because the continued progress of FDCCI is largely dependent on accomplishing goals built on the information provided by agency inventories, it will be important for agencies to continue to work on completing their inventories, thus providing a sound basis for their savings and utilization forecasts. In addition, while all 24 agencies submitted consolidation plans to OMB, only 1 had submitted a complete plan. For the remaining 23 agencies, selected elements were missing from each plan. For example, among the 24 agencies, all provided complete information on their qualitative impacts, and nearly all included a summary of the consolidation approach, a well-defined scope for data center consolidation, and a high-level timeline for consolidation efforts. However, most notably, 21 agencies did not fully report their expected cost savings; of those, 13 agencies provided partial cost savings information and 8 provided none. Among the reasons that this information was not included, a Department of Defense official told us that it was challenging to gather savings information from all the department’s components, while a National Science Foundation official told us the information was not included because the agency had not yet realized any cost savings and so had nothing to report. Other significant weaknesses were that many agencies’ consolidation plans did not include a full cost-benefit analysis that included aggregate year-by-year investment and cost savings calculations through fiscal year 2015, a complete master program schedule, and quantitative goals, such as complete savings and utilization forecasts. Figure 2 provides an assessment of the completeness of agencies’ 2011 consolidation plans, by key element. Officials from several agencies reported that the plan information was still being developed. We concluded that, in the continued absence of completed consolidation plans, agencies are at risk of implementing their respective initiatives without a clear understanding of their current state and proposed end state and not being able to realize anticipated savings, improved infrastructure utilization, or energy efficiency. We also found that while agencies were experiencing data center consolidation successes, they were also encountering challenges. While almost 20 areas of success were reported, the 2 most often cited focused on virtualization and cloud services as consolidation solutions, and working with other agencies and components to find consolidation opportunities. 
Further, while multiple challenges were reported, the two most common challenges were both specifically related to FDCCI data reporting required by OMB: obtaining power usage information and providing a quality data center asset inventory. We further reported that, to assist agencies with their data center consolidation efforts, OMB had sponsored the development of an FDCCI total cost of ownership model that was intended to help agencies refine their estimated costs for consolidation; however, agencies were not required to use the cost model as part of their cost estimating efforts. We stated that, until OMB requires agencies to use the model, agencies will likely continue to use a variety of methodologies and assumptions in establishing consolidation estimates, and it will remain difficult to summarize projections across agencies. Accordingly, we reiterated our prior recommendation that agencies complete missing plan and inventory elements and made new recommendations to OMB to publicly post guidance updates on the FDCCI website and to require agencies to use its cost model. OMB generally agreed with our recommendations and has since taken steps to address them. More specifically, OMB posted its 2012 guidance for updating data center inventories and plans, as well as guidance for reporting consolidation progress, to the FDCCI public website. Further, the website has been updated to provide prior guidance documents and OMB memoranda. In addition, OMB’s 2012 consolidation plan guidance requires agencies to use the cost model as they develop their 2014 budget request. We and other federal agencies have documented the need for initiatives to develop performance measures to gauge progress. According to government and industry leading practices, performance measures should be measurable, outcome-oriented, and actively tracked and reported. For FDCCI, OMB originally established goals for data center closures and the expected cost savings. Specifically, OMB expected to consolidate approximately 40 percent of the total number of agency data centers and achieve $3 billion in cost savings by the end of 2015, and established the means of measuring performance against those goals through several methods. The 24 agencies have collectively made progress towards OMB’s data center consolidation goal to close 40 percent, or approximately 1,253 of the 3,133 data centers, by the end of 2015. To track their progress, OMB requires agencies to report quarterly on their completed and planned performance against that goal via an online portal. After the data are reviewed for quality and security concerns, the GSA FDCCI Program Management Office makes the performance information available on the federal website dedicated to providing the public with access to datasets developed by federal agencies, http://data.gov. As of February 2013, agencies had collectively reported closing a total of 420 data centers by the end of December 2012, and were planning to close an additional 396 data centers—for a total of 816—by September 2013. While the number of data centers that agencies are planning to close from October 2013 through December 2015 (the planned completion date of FDCCI) is not reported on http://data.gov, OMB’s July 2012 quarterly report to Congress on the status of federal IT reform efforts contains other information on agencies’ data center closure plans. Among other things, the report states that agencies have collectively committed to closing a total of 968 data centers by the end of 2015. 
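The closure figures above reduce to simple arithmetic. The short sketch below restates them, using only numbers taken from the text, to make explicit the relationship between the 40 percent goal, agencies' reported progress, and their commitments through 2015.

```python
# The closure goal and progress figures cited above, restated as arithmetic.
# Every number below is taken from the text; the script only restates them.

total_data_centers = 3_133
closure_goal = round(0.40 * total_data_centers)            # approximately 1,253

closed_by_dec_2012 = 420
planned_through_sep_2013 = 396
total_through_sep_2013 = closed_by_dec_2012 + planned_through_sep_2013   # 816

committed_through_2015 = 968
unidentified_closures = closure_goal - committed_through_2015            # 285

print(closure_goal, total_through_sep_2013, unidentified_closures)
```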
According to OMB staff from the Office of E-Government and Information Technology, this figure represents the number of commitments reported by agencies, as compared to the initiative’s overall goal of closing 1,253 data centers by December 2015. The agencies have not identified the remaining 285 consolidation targets to achieve that goal. OMB’s January 2013 quarterly report to Congress does not provide any new information about either planned or completed agency data center closures. See figure 3 for a graphical depiction of agencies’ progress against OMB’s data center consolidation goal. However, OMB has not measured agencies’ progress against the cost savings goal of $3 billion by the end of 2015. According to a staff member from OMB’s Office of E-Government and Information Technology, as of November 2012, the total savings to date had not been tracked but were believed to be minimal. The staff member added that, although data center consolidation involves reductions in costs for existing facilities and operations, it also requires investment in new and upgraded facilities and, as a result, any current savings are often offset by the reinvestment of those funds into ongoing consolidation efforts. Finally, the staff member stated that OMB recognizes the importance of tracking cost savings and is working to identify a consistent and repeatable method for tracking cost savings as part of the integration of FDCCI with PortfolioStat, but stated that there was no time frame for when this would occur. The lack of initiativewide cost savings data makes it unclear whether agencies will be able to achieve OMB’s projected savings of $3 billion by the end of 2015. In previous work, we found that agencies’ cost savings projections were incomplete and, in some cases, unreliable. Specifically, in July 2012, we reported that most agencies had not reported their expected cost savings in their 2011 consolidation plans. Officials from several agencies reported that this information was still being developed. Notwithstanding these weaknesses, we found that agencies collectively reported anticipating about $2.4 billion in cumulative cost savings by the end of 2015 (the planned completion date of FDCCI). With less than 3 years remaining to the 2015 FDCCI deadline, almost all agencies still need to complete their inventories and consolidation plans and continue to identify additional targets for closure. Because closing facilities is a significant driver in realizing consolidation savings, the time required to realize planned cost savings will likely extend beyond the current 2015 time frame. With at least one agency not planning on realizing savings until after 2015 and other agencies having not yet reported on planned savings, there is an increased likelihood that agencies will either need more time to meet the overall FDCCI savings goal or that there are additional savings to be realized in years beyond 2015. Until OMB tracks cost savings data, the agency will be limited in its ability to determine whether or not FDCCI is on course toward achieving planned performance goals. Additionally, extending the horizon for realizing planned cost savings could provide OMB and FDCCI stakeholders with input and information on the benefits of consolidation beyond OMB’s initial goal. We have previously reported that oversight and governance of major IT initiatives help to ensure that the initiatives meet their objectives and performance goals. 
When an initiative is governed by multiple entities, the roles and responsibilities of those entities should be clearly defined and documented, including the responsibilities for coordination among those entities. We have further reported, and OMB requires, that an executive-level body be responsible for overseeing major IT initiatives. Among other things, we have reported that this body should have documented policies and procedures for management oversight of the initiative, regularly track progress against established performance goals, and take corrective actions as needed. Oversight and governance of FDCCI is the responsibility of several organizations—the Task Force, the GSA FDCCI Program Management Office, and OMB. Some roles and responsibilities for these organizations are documented in the Task Force charter and OMB memoranda, while others are described in OMB’s January 2013 quarterly report to Congress or have been communicated by agency officials. See table 1 for a listing of the FDCCI oversight and governance entities and their key responsibilities. The Task Force, the GSA FDCCI Program Management Office, and OMB have performed a wide range of FDCCI responsibilities. For example, the Task Force holds monthly meetings to, among other things, communicate and coordinate consolidation best practices and to identify policy and implementation issues that could negatively impact the ability of agencies to meet their goals. Further, the Task Force has assisted agencies with the development of their consolidation plans by discussing lessons learned during its monthly meetings and disseminating new OMB guidance. GSA has collected responses to OMB-mandated document deliveries, including agencies’ consolidation inventories and plans, on an annual basis. In addition, GSA has collected data related to FDCCI data center closure updates, disseminated the information publicly on the consolidation progress dashboard on http://data.gov, and provided ad hoc and quarterly updates to OMB regarding these data. Lastly, as the executive-level body, OMB issued FDCCI policies and guidance in a series of memoranda that, among other things, required agencies to provide an updated data center asset inventory at the end of every third quarter and an updated consolidation plan at the end of every fourth quarter. In addition, OMB launched a publicly available electronic dashboard to track and report on agencies’ consolidation progress. However, oversight of FDCCI is not being performed in other key areas. For example, the Task Force has not provided oversight of the agency consolidation peer review process. According to officials, the purpose of the peer review process is for agencies to get feedback on their consolidation plans and potential improvement suggestions from a partner agency with a data center environment of similar size and complexity. While the Task Force documented the agency pairings for 2011 and 2012 reviews, it did not provide agencies with guidance for executing their peer reviews, including information regarding the specific aspects of agency plans to be reviewed and the process for providing feedback. As a result, the peer review process did not ensure that significant weaknesses in agencies’ plans were being identified. As previously mentioned, in July 2012, we reported that all of the agencies’ plans were incomplete except for one. 
In addition, we noted that three agencies had submitted their June 2011 inventory updates, a required component of consolidation documentation, in an incorrect format—an outdated template. The GSA FDCCI Program Management Office has not executed its responsibilities related to analyzing agencies’ inventories and plans and reviewing these documents for errors. In July 2012, we reported on agencies’ progress toward completing their inventories and plans and found that only three agencies had submitted a complete inventory and only one agency had submitted a complete plan, and that most agencies did not fully report cost savings information and eight agencies did not include any cost savings information. The lack of cost savings information is particularly important because, as previously noted, initiativewide cost savings have not been determined—a shortcoming that could potentially be addressed if agencies had submitted complete plans that addressed cost savings realized, as required. Although OMB is the approval authority of agencies’ consolidation plans, it has not approved agencies’ submissions on the basis of their completeness. In an October 2010 memorandum, OMB stated that its approval of agencies’ consolidation plans was in progress and would be completed by December 2010. However, OMB did not issue a subsequent memorandum indicating that it had approved agencies’ plans, or an updated time frame for completing its review. This is important because, in July 2011 and July 2012, we reported that agencies’ consolidation plans had significant weaknesses and that nearly all were incomplete. OMB has not reported on agencies’ progress against its key performance goal of achieving $3 billion in cost savings by the end of 2015. Although the 2012 Consolidated Appropriations Act included a provision directing OMB to submit quarterly progress reports to the Senate and House Appropriations Committees that identify savings achieved through governmentwide IT reform efforts, OMB has not yet reported on cost savings realized for FDCCI. Instead, the agency’s quarterly reports had only described planned FDCCI-related savings and stated that future reports will identify savings realized. As of the January 2013 report, no such savings have been reported. These weaknesses in oversight are due, in part, to OMB not ensuring that assigned responsibilities are being executed. Improved oversight could better position OMB to assess progress against its cost savings goal and minimize agencies’ risk of not realizing anticipated cost savings. OMB’s recent integration of FDCCI and PortfolioStat made significant changes to data center consolidation oversight and reporting requirements. According to OMB’s March 2013 memorandum, to more effectively measure the efficiency of an agency’s data center assets, agency progress will no longer be measured solely by closures. Instead, agencies will also be measured by the extent to which their data centers are optimized for total cost of ownership by incorporating metrics for energy, facility, labor, and storage, among other things. In addition, OMB stated that the Task Force will categorize agencies’ data center populations into two categories—core and non-core data centers—for which the memorandum does not provide specific definitions. Additionally, as previously discussed, agencies are no longer required to submit the data center consolidation plans previously required under FDCCI. 
Rather, agencies are to submit information to OMB via three primary means—an information resources management strategic plan, an enterprise road map, and an integrated data collection channel. Using these tools, an agency is to report on, among other things, its approach to optimizing its data centers; the state of its data center population, including the number of core and non-core data centers; the agency’s progress on closures; and the extent to which an agency’s data centers are optimized for total cost of ownership. However, OMB’s memorandum does not fully address the revised goals and reporting requirements of the combined initiative. Specifically, OMB stated that its new goal is to close 40 percent of non-core data centers but, as previously mentioned, the definitions for core and non-core data center were not provided. Therefore, the total number of data centers to be closed under OMB’s revised goal cannot be determined. In addition, although OMB has indicated which performance measures it plans to use going forward, such as those related to data center energy and labor, it has not documented the specific metrics for agencies to report against. The memorandum indicates that these will be developed by the Task Force, but does not provide a time frame for when this will be completed. Lastly, although OMB has previously stated that PortfolioStat is expected to result in savings of approximately $2.5 billion through 2015, its memorandum does not establish a new cost savings goal for FDCCI, nor does it refer to the previous goal of saving $3 billion. Instead, OMB states that all cost savings goals previously associated with FDCCI will be integrated into broader agency efforts to reshape their IT portfolios, but does not provide a revised savings estimate. The lack of a new cost savings goal will further limit OMB’s ability to determine whether or not the new combined initiative is on course toward achieving its planned objectives. In addition, several important oversight responsibilities related to data center consolidation have not been addressed. For example, with the elimination of the requirement to submit separate data center consolidation plans under the new combined initiative, the memorandum does not discuss whether either the Task Force or the GSA Program Management Office will continue to be used in their same oversight roles for review of agencies’ documentation. In addition, while the memorandum discusses OMB’s responsibility for reviewing agencies’ draft strategic plans, it does not discuss the responsibility for approving them. In the absence of defined oversight assignments and responsibilities, it cannot be determined how OMB will have assurance that agencies’ plans meet the revised program requirements and, moving forward, whether these plans support the goals of the combined initiative. In our report being released today, we are making recommendations to better ensure that FDCCI achieves expected cost savings and to improve executive-level oversight of the initiative. 
Specifically, we are recommending that the Director of OMB direct the Federal CIO to track and annually report on key data center consolidation performance measures, such as the size of data centers being closed and cost savings to date; extend the time frame for achieving cost savings related to data center consolidation beyond the current 2015 horizon, to allow time to meet the initiative’s planned cost savings goal; and establish a mechanism to ensure that the established responsibilities of designated data center consolidation oversight organizations are fully executed, including responsibility for the documentation and oversight of the peer review process, the review of agencies’ updated consolidation inventories and plans, and approval of updated consolidation plans. The Federal CIO stated that the agency concurred with the first and third recommendations. Regarding the second recommendation, OMB neither agreed nor disagreed. However, the Federal CIO stated that, as the FDCCI and PortfolioStat initiatives proceed and continue to generate savings, OMB will consider whether updates to the current time frame are appropriate. In summary, more than 3 years into FDCCI, agencies have made progress in their efforts to close data centers. However, many key aspects of the integration of FDCCI and PortfolioStat, including new data center consolidation and cost savings goals, have not yet been defined. Further compounding this lack of clarity, total cost savings to date from data center consolidation efforts have not been determined, creating uncertainty as to whether OMB will be able to meet its original cost savings goal of $3 billion by the end of 2015. In the absence of tracking and reporting on cost savings and additional time for agencies to achieve planned savings, OMB will be challenged in ensuring that the initiative, under this new direction, is meeting its established objectives. Recognizing the importance of effective oversight of major IT initiatives, OMB directed that three oversight organizations—the Task Force, the GSA FDCCI Program Management Office, and OMB—be responsible for federal data center consolidation oversight activities. These organizations have performed a wide range of FDCCI responsibilities, including facilitating collaboration among agencies and developing tools to assist agencies in their consolidation efforts. However, other key oversight activities have not been performed. Most notably, the lack of formal guidance for consolidation plan peer review and approval increases the risk that missing elements will continue to go undetected and that agencies’ efforts will not fully support OMB’s goals. Further, OMB’s March 2013 memorandum does not address whether the Task Force and GSA’s Program Management Office will continue their oversight roles, which does not help to mitigate this risk. Finally, while OMB has put in place initiatives to track consolidation progress, consolidation inventories and plans are not being reviewed for errors and cost savings are not being tracked or reported. The collective importance of these activities to federal data center consolidation success reinforces the need for oversight responsibilities to be fulfilled in accordance with established requirements. Chairman Mica, Ranking Member Connolly, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. 
If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at pownerd@gao.gov. Individuals who made key contributions to this testimony are Dave Hinchman (Assistant Director), Justin Booth, Nancy Glover, and Jonathan Ticehurst. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2010, as the focal point for information technology management across the government, OMB’s Federal Chief Information Officer launched the Federal Data Center Consolidation Initiative—an effort to consolidate the growing number of federal data centers. In July 2011 and July 2012, GAO evaluated 24 agencies’ progress and reported that nearly all of the agencies had not completed a data center inventory or consolidation plan and recommended that they do so. GAO was asked to testify on its report, being released today, that evaluated agencies' reported progress against OMB’s planned consolidation and cost savings goals, and assessed the extent to which the oversight organizations put in place by OMB for the Federal Data Center Consolidation Initiative are adequately performing oversight of agencies' efforts to meet these goals. In this report, GAO assessed agencies’ progress against OMB’s goals, analyzed the execution of oversight roles and responsibilities, and interviewed OMB, GSA, and Data Center Consolidation Task Force officials about their efforts to oversee agencies’ consolidation efforts. The 24 agencies participating in the Federal Data Center Consolidation Initiative made progress towards the Office of Management and Budget’s (OMB) goal to close 40 percent, or 1,253 of the 3,133 total federal data centers, by the end of 2015, but OMB has not measured agencies’ progress against its other goal of $3 billion in cost savings by the end of 2015. Agencies closed 420 data centers by the end of December 2012, and have plans to close an additional 548 to reach 968 by December 2015—285 closures short of OMB’s goal. OMB has not determined agencies’ progress against its cost savings goal because, according to OMB staff, the agency has not determined a consistent and repeatable method for tracking cost savings. This lack of information makes it uncertain whether the $3 billion in savings is achievable by the end of 2015. Until OMB tracks and reports on performance measures such as cost savings, it will be limited in its ability to oversee agencies’ progress against key goals. Pursuant to OMB direction, three organizations—the Data Center Consolidation Task Force, the General Services Administration (GSA) Program Management Office, and OMB—are responsible for federal data center consolidation oversight activities; while most activities are being performed, there are still several weaknesses in oversight. Specifically, while the Data Center Consolidation Task Force has established several initiatives to assist agencies in their consolidation efforts, such as holding monthly meetings to facilitate communication among agencies, it has not adequately overseen its peer review process for improving the quality of agencies' consolidation plans. The GSA Program Management Office has collected agencies’ quarterly data center closure updates and made the information publicly available on an electronic dashboard for tracking consolidation progress, but it has not fully performed other oversight activities, such as conducting analyses of agencies’ inventories and plans. OMB has implemented several initiatives to track agencies’ consolidation progress, such as establishing requirements for agencies to update their plans and inventories yearly and to report quarterly on their consolidation progress. However, the agency has not approved the plans on the basis of their completeness or reported on progress against its goal of $3 billion in cost savings. 
The weaknesses in oversight of the data center consolidation initiative are due, in part, to OMB not ensuring that assigned responsibilities are being executed. Improved oversight could better position OMB to assess progress against its cost savings goal and minimize agencies’ risk of not realizing expected cost savings. In March 2013, OMB issued a memorandum that integrated the Federal Data Center Consolidation Initiative with the PortfolioStat initiative, which requires agencies to conduct annual reviews of their information technology investments and make decisions on eliminating duplication, among other things. The memorandum also made significant changes to the federal data center consolidation effort, including the initiative’s reporting requirements and goals. Specifically, agencies are no longer required to submit the previously required consolidation plans and the memorandum does not identify a cost savings goal. In its report, GAO recommended that OMB’s Federal Chief Information Officer track and report on key performance measures, extend the time frame for achieving planned cost savings, and improve the execution of important oversight responsibilities. OMB agreed with two of GAO’s recommendations and plans to evaluate the remaining recommendation related to extending the time frame.
In August 1995, IRS signed a $22 million interagency agreement with NTIS. To date, $17.1 million has been advanced to NTIS ($10 million in August 1995 and $7.1 million in December 1995). The agreement stipulated that NTIS would develop and operate Cyberfile, a tax systems modernization (TSM) project that would allow taxpayers to prepare and electronically submit their tax returns using their personal computers. Electronic returns would be submitted via the public switched telephone network or the Internet, accepted at a new NTIS data center, and then forwarded to designated IRS Service Centers. Taxpayers would not be charged a fee to file their returns using Cyberfile. To obtain contractor support to develop Cyberfile, NTIS modified an existing technical services contract awarded on a sole source basis through SBA’s Section 8(a) program for small and disadvantaged businesses. This program permits the award of a contract to the SBA, which then subcontracts with a firm owned by economically and socially disadvantaged individuals. After award, SBA requires the agency to manage the contract and ensure goods and services are received for dollars expended. Further, SBA officials told us that the procuring agency is supposed to obtain SBA approval before modifying the contract. NTIS also acquired systems hardware and services via existing Department of the Navy, Treasury, and General Services Administration (GSA) contracts and other sources. Because NTIS did not have a contracting activity with the authority to make purchases over $50,000, the agency used contracting officers from two other Commerce Department activities to support the Cyberfile procurement. Initially, the Office of Acquisition Management provided a contracting officer. In late November 1995, the Cyberfile procurement was transferred to a contracting officer at the National Institute of Standards and Technology. In December 1995, we briefed the IRS Commissioner on the risks associated with proceeding with Cyberfile as planned. We explained that Cyberfile was not being developed using disciplined systems development processes and that adequate steps were not being taken to protect taxpayer data on the Internet. At that time, Cyberfile development was scheduled for limited operational use by a selected population of taxpayers in February 1996. In March 1996 testimony, we noted that Cyberfile development reflected many of the same management and technical weaknesses we found in TSM systems and delineated in our July 1995 report. We also reported that Cyberfile contractual issues warranted further review. IRS’ Chief Inspector reviewed the Cyberfile acquisition and in an April 1996 briefing to management concluded that IRS did not follow internal procurement procedures, failed to sufficiently oversee the project, and was vulnerable to outside criticism. The Chief Inspector is also performing a physical inventory of equipment purchased by NTIS, which is scheduled to be completed in late August 1996. The Commerce Department’s Inspector General is reviewing NTIS’ operations, including its contracting efforts. Inspector General officials told us they have serious concerns about how NTIS and the department contracted for Cyberfile as well as other projects. These officials said they expect to issue a report in late August 1996. In March 1996, IRS decided to delay Cyberfile operations until after April 15, 1996. 
Because milestones for delivering Cyberfile kept slipping, IRS contracted with its Federally Funded Research and Development Center on April 16, 1996, to identify options available to IRS for delivering the system for the 1997 or 1998 tax filing seasons. NTIS continued to work on Cyberfile until the $17.1 million advanced from IRS had been obligated. NTIS then requested an additional advance from IRS against the $22 million obligated under the interagency agreement. IRS did not provide the advance. Instead, it directed NTIS on May 10, 1996, to stop work on Cyberfile. The research and development center reported to IRS in July 1996 with options for proceeding with Cyberfile. However, IRS is awaiting the completion of its Electronic Commerce Strategic Plan before deciding on the future course of Cyberfile. IRS has not yet established a completion date for the plan. IRS did not use disciplined processes to manage and control the Cyberfile acquisition. IRS did not perform the necessary requirements analysis for Cyberfile or identify alternative ways to satisfy these requirements. Neither did it prepare an acquisition strategy documenting how it would acquire the most cost-effective alternative. Further, IRS selected NTIS without evaluating its (1) capabilities to build such a system, (2) experience in building similar systems, or (3) ability to deliver cost-effectively as compared with private-sector and other government sources. Federal information management and acquisition regulations and IRS’ own policies and procedures require the use of disciplined decision-making processes for planning, managing, and controlling the acquisition of information systems and services. These regulations and policies direct that prior to initiating system procurements, such as Cyberfile, IRS (1) identify its information needs, (2) perform a requirements analysis to determine how to support agency needs, (3) identify alternative ways to meet requirements, including the costs and benefits of each alternative, and (4) prepare an acquisition strategy that demonstrates how the agency plans to acquire the most cost-beneficial alternative. These processes would have mitigated the risks of acquiring a system that has yet to be delivered, is over budget, and has failed to meet IRS’ objectives. IRS dispensed with disciplined analyses because IRS officials believed that NTIS had the capabilities to deliver Cyberfile by February 1996. They said this belief was based on (1) the fact that NTIS had provided taxpayers access to tax forms via NTIS’ FedWorld Network and (2) briefings by NTIS officials in which they claimed that NTIS could complete Cyberfile by February 1996, in time for the 1996 tax filing season. However, the technical challenge of providing tax forms is not comparable to the much more complex Cyberfile system. Further, NTIS offered no convincing analytical support for its claim that it could deliver Cyberfile by February 1996. For example, it provided no detailed task definitions, work breakdown structures, or interim schedules. IRS top management did not heed warnings, dating back to July 1995, from its acquisitions support staff that IRS’ Cyberfile procurement approach would lead to failure and jeopardize TSM. Our December 1995 briefing to the IRS Commissioner and Deputy Secretary of Commerce, on the risks of continuing with Cyberfile as planned, also did not dissuade IRS from its goal to field Cyberfile for the 1996 tax filing season.
Only after NTIS informed IRS in April 1996 that the $17.1 million had been obligated and that the system still was not finished, did IRS stop to reconsider the project. In procuring Cyberfile, IRS did not fully comply with federal acquisition regulations which are designed to help agencies develop and acquire automated systems that meet agency needs and are delivered on time and within budget. IRS cited the Brooks ADP Act, rather than the Economy Act, for its authority to enter into its interagency agreement with NTIS. In this regard, IRS concluded that the Economy Act was not applicable to its agreement with NTIS and, therefore, IRS did not attempt to comply with the requirements of that act. IRS’ position is supported by a recent amendment to the Federal Information Resources Management Regulation, which formalizes GSA’s position that the Economy Act is not applicable to information technology procurements subject to the Brooks ADP Act. Congress may not have contemplated the exemption of such a large portion of federal procurements from the requirements of the Economy Act. Nonetheless, the amendment was not unreasonable and was issued pursuant to GSA’s authority to implement the Brooks ADP Act. Accordingly, we have no basis to object to it. Because section 5101 of the National Defense Authorization Act for Fiscal Year 1996, Public Law No. 104-106 (1996), repealed the Brooks ADP Act, effective August 8, 1996, any authority to initiate interagency agreements under the Brooks ADP Act has expired. However, the Brooks ADP Act was in effect when IRS initiated the interagency agreement with NTIS. Although it cited the Brooks ADP Act as its authority in acquiring Cyberfile, IRS did not follow the Federal Information Resources Management Regulation that implements this law. Specifically, the regulations require agencies to conduct requirements and alternatives analyses prior to procuring information technology. IRS did not conduct either analysis. Without these analyses, IRS could neither define the software capabilities and features needed for Cyberfile nor determine which acquisition and technical options were most advantageous to the government. Further, the Federal Information Resources Management Regulation also requires agencies acquiring information systems and services to obtain a delegation of procurement authority from GSA. Treasury had a delegation from GSA, and in turn required IRS to obtain a delegation of procurement authority from the department for information system initiatives over $15 million. When IRS signed the $22 million interagency agreement with NTIS, it did not obtain the required approval from Treasury. In procuring Cyberfile, NTIS did not fully comply with federal acquisition laws and regulations, which are intended to encourage full and open competition and help agencies develop and acquire information systems that meet their needs and are delivered on time and within budget. Specifically, NTIS (1) awarded a Section 8(a) contract on a sole source basis without making a reasonable determination that the value of the contract was below SBA competition thresholds, (2) improperly modified the contract to add a requirement to develop Cyberfile, and (3) did not effectively hold the contractor accountable for specific deliverable dates, attributes, and quality.
According to SBA regulations, Section 8(a) procurements with an estimated award value over certain dollar thresholds must be competed among eligible 8(a) firms, while procurements under the threshold can be awarded on a sole source basis. For procurements such as NTIS’ technical services support contract, the threshold is $3 million. In determining whether this threshold is met, the agency is required to make a reasonable estimate of the contract value. In September 1995, NTIS awarded a sole source contract for $2.3 million to an 8(a) firm to provide technical support services for its FedWorld and other related tasks. We found, however, that NTIS did not have a reasonable basis for its cost estimate prior to awarding this sole source contract. NTIS officials said that at the time of contract award, they estimated that the FedWorld work would cost $1.0 million, but had no idea what Cyberfile tasks would ultimately cost. Rather than developing a cost estimate analytically, NTIS officials said they “plugged in a cost” of $1.3 million for Cyberfile, for a total contract value of $2.3 million. After contract award, the contractor estimated the cost to develop Cyberfile at $3.3 million, which resulted in a $2.0 million contract modification on November 7, 1995, a month and a half after contract award. As of July 11, 1996, the contractor had spent a total of about $3.6 million. Accordingly, NTIS did not have an adequate basis for determining that a sole source award was proper in these circumstances. SBA officials told us that under the Section 8(a) program, SBA requires federal agencies to submit 8(a) contract modifications to SBA for review and approval prior to making the change. Responsible SBA officials told us SBA reviews the modifications to ensure that they do not constitute a circumvention of competition requirements, to determine whether the work is within the scope of the original contract, and to validate that the firm is still eligible for work under the 8(a) program. SBA will not approve modifications that are beyond the scope of the contract or if the firm is no longer eligible for the work under the 8(a) program. We found NTIS’ modification of the 8(a) contract improper for three reasons. First, NTIS did not submit the modification to SBA for review and approval. Instead, NTIS executed the modification on its own. Second, SBA officials told us that had they received the modification, they would have disapproved it because such a substantial increase, so soon after contract award, would have been a circumvention of the $3 million threshold for competition. Third, the contractor was not eligible under the 8(a) program for this type of work. In this regard, SBA considered the Cyberfile work envisioned in the modification to be beyond the scope of the work in the original contract and determined that the contractor was no longer eligible to perform this work because its income exceeded 8(a) eligibility requirements. Accordingly, SBA’s Associate Administrator for Minority Enterprise Development has taken the position that had NTIS submitted the modification for its approval, SBA would have rejected it. Federal acquisition regulations require that under cost reimbursement contracts, like the one awarded to NTIS’ contractor, only costs that are properly allocable to the contract can be paid. In order to make these determinations, the contract’s statement of work must be clear enough to determine whether costs claimed by the contractor are incurred for specified work.
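The Section 8(a) threshold figures discussed above can be checked with simple arithmetic. The following Python sketch is illustrative only: the dollar amounts and the $3 million competition threshold come from the discussion above, while the variable names and the comparison itself are hypothetical and simply restate the report’s figures in code form.

    # Illustrative check of the Section 8(a) competition threshold discussed above.
    # All dollar figures come from the report text; the names are hypothetical.
    COMPETITION_THRESHOLD = 3_000_000   # sole source ceiling for this type of procurement

    fedworld_estimate = 1_000_000       # NTIS estimate for the FedWorld work at award
    cyberfile_plug = 1_300_000          # "plugged in" Cyberfile figure, not analytically derived
    initial_award = fedworld_estimate + cyberfile_plug        # $2.3 million, just under the threshold

    post_award_modification = 2_000_000                       # November 7, 1995, modification
    modified_value = initial_award + post_award_modification  # about $4.3 million, well over the threshold

    print(f"Initial award: ${initial_award:,} (under threshold: {initial_award < COMPETITION_THRESHOLD})")
    print(f"Modified value: ${modified_value:,} (under threshold: {modified_value < COMPETITION_THRESHOLD})")

As the comparison shows, the contract as estimated stayed just under the $3 million sole source ceiling, while the value after the November 1995 modification exceeded it, which is the substance of SBA’s objection.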
The work statements should describe the government’s requirements, including definitions of all deliverables and the condition of their acceptability. We found that the contract work statement for Cyberfile was too vague to properly allocate costs. Specifically, the contracting officer from Commerce’s National Institute of Standards and Technology told us that the statement of work did not include all the deliverables and milestones needed to verify payments due and was too vague to determine whether to pay the contractor. In this regard, when the contractor requested an additional $4 million on April 30, 1996, to finish the project, the contracting officer could not determine if the request was for work that should have been completed under the existing contract or for additional work not authorized by the contract. To make this determination, the contracting officer directed NTIS to rewrite the statement of work with sufficient detail and sent it to the contractor on May 14, 1996, requesting supporting documentation for all costs incurred. The contractor provided documentation on July 11, 1996, and it is being reviewed by the contracting officer. IRS abdicated its responsibility to ensure that NTIS was managing the Cyberfile effort efficiently and effectively. Program oversight was generally limited to (1) attending weekly progress meetings with NTIS officials, who repeatedly assured IRS officials that Cyberfile would be ready for the 1996 tax filing season without providing any convincing basis for these assurances, (2) reviewing monthly budget and schedule reports, which IRS project managers said were useless because the information provided was inaccurate and not current, and (3) participating in acceptance testing of portions of the system as they were delivered. Under the interagency agreement and IRS’ policy for implementing it, IRS was required to review and approve invoices submitted by NTIS to ensure that NTIS’ performance was consistent with terms and conditions of the interagency agreement. However, IRS officials said that they were unaware of this requirement and did not request the invoices from NTIS. Agencies are required to maintain adequate systems of internal controls to ensure effective stewardship of public funds. However, our review of Cyberfile transactions recorded in NTIS’ financial management system disclosed significant internal control weaknesses which resulted from not following generally accepted practices. Specifically, for the Cyberfile transactions reviewed, NTIS often failed to record obligations and costs promptly and accurately and to properly document financial transactions. Because of these weaknesses, neither IRS nor NTIS management had the reliable financial management information needed to effectively oversee and monitor the progress of the Cyberfile project. In this regard, the total obligations and costs reported to IRS by NTIS on June 28, 1996, were inaccurate. We also found that IRS did not properly account for Cyberfile obligations and costs because it did not effectively discharge its financial management responsibilities for the project. The Federal Managers’ Financial Integrity Act of 1982 (31 U.S.C.
3512) requires that agency systems of internal accounting and administrative control must comply with internal control standards prescribed by the Comptroller General and must provide reasonable assurances that obligations and costs comply with applicable law; assets are safeguarded against waste, loss, unauthorized use, or misappropriation; and revenues and expenditures applicable to agency operations are recorded and accounted for properly so that accounts and reliable financial and statistical reports may be prepared and accountability over the assets may be maintained. Agency heads are required to prepare an annual report which is to be transmitted to the President and the Congress on whether their agency’s internal control systems fully comply with the act’s requirements. The act requires that the report identify any material systems weaknesses together with plans for corrective actions. The internal control standards that agencies are to follow are contained in the Standards for Internal Controls in the Federal Government. These were issued in 1983 by GAO as required by the Federal Managers’ Financial Integrity Act and provide 12 internal control standards that agencies should follow. Further, the Chief Financial Officers Act of 1990 requires agencies to develop and maintain financial management systems that comply with internal control standards and provide complete, reliable, consistent, and timely information. In addition, the financial data are to be prepared uniformly and be responsive to the financial information needs of agency management. As envisioned by the Federal Managers’ Financial Integrity Act and the Chief Financial Officers Act, the ultimate responsibility for good internal controls rests with management. An internal control system is not a specialized or separate system. Rather, internal controls are to be an integral part of each system that management uses to regulate and guide its operations. In this sense, they are management’s controls. Good internal controls are essential to achieving the proper conduct of government business with full accountability for the resources made available. They also facilitate the achievement of management objectives by serving as checks and balances against undesired actions and the resulting negative consequences. One of the 12 internal control standards requires that transactions be promptly and properly classified. This is essential to maintaining good financial management information and effectively tracking project obligations and costs. Therefore, management needs to implement adequate controls to ensure that transactions are promptly and accurately recorded. Our review of Cyberfile transactions disclosed that it sometimes took months before an obligation was recorded. Specifically, we reviewed about $16 million of obligations and found that about $10.8 million (67 percent) of them were recorded more than 30 days after the obligation date. Such delays create an unnecessary risk of financial commitments exceeding spending authority. Some examples follow: An $886,100 obligation for a computer system was made on November 28, 1995, but was not recorded in the accounting records until February 20, 1996. A major Cyberfile contract was signed in September 1995 with an initial value of about $2.3 million. However, this obligation was not recorded promptly. Specifically, obligations totaling about $2 million were recorded in the accounting system between December 1995 and April 1996, as the invoices were received.
Similarly, the contract was modified in November 1995, and the total contract value was raised to about $4.3 million, but an obligation for about $2.1 million was not recorded until June 17, 1996. As of June 27, 1996, the remaining $200,000 had not been recorded. In addition, we identified cases where NTIS did not record costs when goods and services were received and accepted. For example, invoices totaling $3.4 million for goods and services provided for the project were dated March 26, 1996 ($1.2 million), and June 13, 1996 ($2.2 million). NTIS officials agreed that the goods and services associated with the $1.2 million invoice had been received and accepted by April 2, 1996, while the goods and services for the $2.2 million invoice had been received and accepted by NTIS by June 14, 1996. However, as of June 27, 1996, only about $46,000 of these costs were recorded. Transactions must be recorded accurately to ensure that the financial management system can be used to effectively oversee and monitor a project’s progress. However, we identified the following examples where obligations and/or costs were recorded inaccurately. We identified two cases where items coded as belonging to other projects were improperly obligated for the Cyberfile project. These obligations, which totaled about $256,000, were charged to the Cyberfile project until they were credited in late July 1996. NTIS personnel and IRS internal auditors reviewed the items charged to the project and have identified several items, totaling over $300,000, that should not have been charged to the project. Although all but about $11,000 has now been credited to the project for these items, other related costs have not. For example, the Cyberfile project was initially charged about $138,000 for computers that were used by NTIS’ FedWorld project. Cyberfile was subsequently credited for this amount. However, this purchase also required the payment of about $5,500 in administrative fees to the agency administering the contract. These fees were also charged to Cyberfile but the project was not credited for these fees until July 10, 1996. According to NTIS officials, these fees were paid separately from the equipment and were overlooked when the credit for the equipment was recorded. NTIS personnel also identified about $7,000 in equipment costs which should have been charged to the Cyberfile project, but were erroneously charged to NTIS’ FedWorld project. Another of the 12 internal control standards requires that agencies clearly document all transactions and other significant financial events and that the documentation be readily available for examination. Our review found that NTIS did not maintain adequate supporting documentation for many Cyberfile transactions. For example, between March 22, 1996, and April 17, 1996, NTIS recorded obligations totaling $850,000 to another federal agency for renovation costs of the space to be used for the Cyberfile project. However, at that time, NTIS did not have a signed interagency agreement with this agency and thus did not have a valid basis for obligating funds. In cases such as this, 31 U.S.C. 1501 requires that obligations only be recorded “when supported by documentary evidence.” NTIS eventually signed an interagency agreement with this federal agency on May 22, 1996. This agreement also covered rental costs for the Cyberfile space. Agencies are also required to only make disbursements against valid obligations.
However, we identified problems with some payments made for the renovation. NTIS disbursed $70,609 in September 1995 and $28,860 in November 1995 for space renovations, 8 months and 6 months, respectively, before the interagency agreement was signed. We also noted documentation problems with other transactions. For example, NTIS made payments totaling $44,548 to a vendor. However, when the funds were disbursed, only $24,560 was supported by a valid obligating document (a purchase order). The remaining $19,988 was obligated based on a purchase order dated 2 weeks after the last payment was made. Because of the internal control weaknesses relating to Cyberfile transactions, neither IRS nor NTIS management had the financial management information needed to effectively oversee and monitor the project. In particular, although the interagency agreement required NTIS to submit monthly billings to IRS for costs incurred, these billings were not requested or provided. Moreover, because of the financial weaknesses identified above, NTIS did not have the reliable financial management information needed to properly prepare such billings. In addition, the total obligations and costs reported to IRS in a June 28, 1996, letter were incorrect. On June 28, 1996, NTIS sent a letter to IRS which summarized the obligations and costs of the Cyberfile project. An attachment to the letter showed that NTIS had incurred Cyberfile obligations of $20.5 million, and about $13.6 million of costs had been incurred against these obligations through June 27, 1996. These amounts excluded June 1996 labor, benefits, and other related costs such as overhead. However, as discussed above, the reliability of the reported amounts is questionable because of NTIS’ failure to consistently record Cyberfile obligations and costs promptly and accurately. We also noted that the June 28, 1996, letter did not identify the amount of obligations that may be deobligated in the future. Specifically, because of changing IRS requirements, data storage devices costing over $650,000 that were originally purchased for Cyberfile were no longer needed for the Cyberfile project. NTIS officials stated that they are in the process of returning these items. However, they are unable to determine the amount of funds that will be credited to the project since they have not yet obtained the necessary information to determine the costs, such as restocking fees, associated with returning the items. They expect this information to be provided shortly. While it was correct to show the $650,000 as a Cyberfile-related obligation and cost until the credit is received, the letter should have noted that a significant deobligation will be recorded once the credit is received from the vendor. Compounding the problems we noted at NTIS, IRS also did not effectively discharge its financial management responsibilities for the Cyberfile project. Our review identified two problems related to IRS’ treatment of Cyberfile-related transactions. First, it improperly treated the $17.1 million in advances as an expense. Therefore, the information contained in IRS’ financial management system did not accurately reflect the expenses incurred based on the goods and services provided by NTIS and accepted by IRS. Second, it did not properly record the amount of obligations associated with Cyberfile in its financial management records. As a result, IRS’ financial management system significantly understates the obligations available to pay for Cyberfile.
NTIS received two advances totaling about $17.1 million from IRS. IRS erroneously recorded these payments as expenses instead of advances. IRS’ procedures require it to obtain evidence that goods and services called for under the terms of an interagency agreement and related detailed statements of work are received and accepted before recording an expense. Accordingly, IRS should have recorded the $17.1 million as an advance and then transferred amounts to expense as the goods and services were received and accepted. However, as previously noted, NTIS did not submit and IRS did not request the required monthly billings for costs incurred. As a result, IRS could not determine the amount of goods and services NTIS provided. The problems we found in IRS’ accounting for the Cyberfile project with NTIS are consistent with the results of our financial audits. We reported in our audits of IRS’ financial statements for fiscal years 1992 through 1995 that IRS often does not have adequate support for amounts it reported as operating expenses. Our reports noted that IRS did not have documentation to support that the goods or services had been received for expenses recorded and that this problem was most evident in transactions for goods and services provided by other government agencies. The August 21, 1995, interagency agreement between IRS and NTIS had an expiration date of December 31, 1996, and provided for a maximum cost of $22 million, which the parties estimated to be necessary for the work. The agreement required NTIS to notify IRS when costs incurred and outstanding allowable commitments equaled 75 percent of the estimated total cost, and provided that no further costs would be incurred or further work performed when the maximum was reached. In accordance with 31 U.S.C. 1501, IRS should have recorded a $22 million obligation in its financial management system on August 21, 1995. As of August 3, 1996, however, IRS has only recorded about $17.1 million. IRS was unable to provide information which would support its recording an obligation of less than $22 million for Cyberfile. Adequate financial and program management controls were not implemented to ensure that Cyberfile was acquired cost-effectively. As a result, excess costs were incurred. Specifically, (1) the Cyberfile project was schedule driven rather than event driven, which led to goods and services not always being acquired cost-effectively, (2) neither NTIS nor IRS acted promptly to avoid incurring unnecessary costs once the project was suspended, and (3) the agreement between IRS and NTIS was inadequately structured to minimize Cyberfile project costs. We have previously reported that Cyberfile was schedule rather than event driven and delineated the system development problems caused by this approach. This exaggerated focus on schedule, which was self-imposed and lacked convincing justification, also led to goods and services not always being acquired cost-effectively. We found the following. Premiums were paid to expedite equipment delivery. We identified 19 cases of expedited, overnight, or Saturday delivery, totaling over $10,000. In one case, almost $725 was paid for overnight delivery of a rack costing $1,670. In two other cases, about $7,700 was paid in shipping charges to expedite delivery of computers. Requirements were not accurately determined before goods and services were procured. As a result, data storage devices costing about $600,000 were purchased, later determined to be unneeded, and are in the process of being returned.
NTIS told us that restocking fees are about $90,000, or 15 percent, of the equipment’s cost. The necessary actions have not been undertaken to reduce the costs associated with Cyberfile. Specifically, since the suspension period began, costs have continued to be incurred for goods and services through ongoing rental agreements (e.g., equipment leases) that could have been avoided if the underlying agreements were canceled. In a May 13, 1996, letter to IRS, NTIS stated: “NTIS understands that it is to stop all work on CyberFile, and furthermore, NTIS will suspend all contracts and/or agreements that would constitute a further obligation of IRS funds. . . . As a result of this action NTIS will shut down all equipment, suspend telecommunications services, and remove NTIS and contractor personnel from the project.” According to the IRS contracting officer, the NTIS letter accurately portrayed the verbal order to NTIS. The IRS contracting officer also stated that she told NTIS in their May 10, 1996, conversation that IRS had no more funding and all contracts were to stop. The IRS contracting officer further stated that she believed the letter meant that NTIS would terminate any existing contracts where possible. However, IRS did not follow up with a letter ensuring that the parties clearly understood the specific actions NTIS would take to control obligations and costs. NTIS officials stated that the May 13, 1996, letter to IRS did not require them to terminate existing contracts where possible. Our review disclosed that since the suspension period began, costs have continued to be incurred for goods and services through ongoing rental agreements (e.g., equipment leases). Although these avoidable costs were not detailed in NTIS’ June 28, 1996, letter to IRS, on July 17, 1996, NTIS provided IRS a list of these recurring costs that could have been avoided if the underlying agreements were terminated. A review of this list shows that the monthly costs for these items are about $30,000 and the underlying contracts can be terminated with 30 days’ notice. Only one of these contracts required a cancellation fee. Examples of these recurring costs include $10,404 per month for Internet service, $7,954 per month for a mail sorting machine, and $5,172 per month for rental and maintenance of a high-speed printing machine. NTIS officials stated that they prepared this list to notify IRS that costs were still being incurred and were awaiting direction from IRS on whether the agreements should be terminated. According to the IRS contracting officer, when this letter was received, IRS called the NTIS program manager and instructed him to cancel the contracts. The IRS contracting officer said that she did not believe IRS had to formally document this decision. However, in another case, IRS did document its decision to cancel a contract relating to Cyberfile. Specifically, in a May 21, 1996, letter to NTIS from the Acting Executive for Electronic Filing, IRS formally notified NTIS to cancel a contract relating to support services. This contract was canceled. As of August 2, 1996, NTIS officials stated they still had not received formal notification to terminate the contracts identified in the July 17, 1996, letter. Since these contracts were not canceled shortly after the May 13, 1996, letter from NTIS to IRS, unnecessary rental costs for July and August of about $60,000 have been incurred. If it is determined that these costs are appropriate charges for the Cyberfile project, then these costs would also appear subject to NTIS’ 10 percent management fee.
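The avoidable recurring charges described above can be tallied with simple arithmetic. The following Python sketch is illustrative only: the item amounts, the roughly $30,000 monthly total, the two months of avoidable July and August costs, and the 10 percent management fee come from the discussion above, while the names and the structure of the calculation are hypothetical.

    # Illustrative tally of the recurring monthly charges listed above and of the
    # roughly two months of avoidable rental costs cited for July and August 1996.
    # Item figures come from the report; the names are hypothetical.
    monthly_recurring = {
        "Internet service": 10_404,
        "Mail sorting machine": 7_954,
        "High-speed printing machine rental and maintenance": 5_172,
    }

    listed_items_per_month = sum(monthly_recurring.values())   # about $23,500 of the roughly $30,000 per month
    two_months_avoidable = 30_000 * 2                           # the report's estimate for July and August
    with_management_fee = two_months_avoidable * 1.10           # if NTIS' 10 percent fee also applies

    print(f"Listed items per month: ${listed_items_per_month:,}")
    print(f"Avoidable July and August rental costs: ${two_months_avoidable:,}")
    print(f"Including the 10 percent management fee: ${with_management_fee:,.0f}")

The three items listed are examples and account for most, but not all, of the approximately $30,000 in monthly charges NTIS identified, which is why the two-month figure of about $60,000 reflects the full list rather than these examples alone.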
Either IRS or NTIS could have prevented these costs. For example, IRS could have clearly documented its understanding of the actions that NTIS would take to avoid additional costs. As discussed above, IRS clearly instructed NTIS to cancel a support services contract and the contract was promptly terminated. On the other hand, NTIS could have clearly documented its understanding of IRS’ desire to retain these contracts much earlier than the July 17, 1996, letter. IRS did not structure its agreement with NTIS to minimize its costs. Our review of the agreement disclosed that it allowed NTIS to assess a management fee both for items that IRS could have readily obtained directly and provided to NTIS and for costs associated with NTIS’ mismanagement, such as interest costs associated with paying vendors late. NTIS procured over $5.5 million in equipment and services using existing contracts held by other government agencies; these purchases were then subject to NTIS’ 10 percent management fee. IRS could have reduced its costs by either (1) stating in the agreement that certain costs, such as the costs of items procured under existing contracts, were not subject to the NTIS management fee or (2) procuring the items itself, based on NTIS requirements, and providing them to NTIS. If IRS had exercised either of these options, it could have significantly reduced the costs subject to the management fee. For example, NTIS purchased computers costing almost $300,000 under a contract administered by another federal agency. In this case, NTIS simply placed an order. IRS could have avoided about $30,000 for NTIS management fees if it had placed the order itself. Similarly, NTIS purchased items costing over $886,000 under an existing Treasury contract that is administered by IRS. If IRS had purchased these items directly and provided them to NTIS, it could have avoided NTIS management fees totaling about $89,000. The Prompt Payment Act of 1982 requires agencies to pay interest penalties to compensate vendors when agencies do not pay their bills on time. NTIS records show that it incurred about $2,100 in penalties through June 27, 1996, because it did not pay Cyberfile bills on time. Even though the late payments were NTIS’ fault, they were included in Cyberfile’s costs and subject to the 10 percent management fee. In light of the severity of acquisition and financial problems identified, we recommend that, before resuming the Cyberfile project, the Commissioner of the Internal Revenue Service: Provide to the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, the Senate and House Appropriations Committees, the Senate Committee on Finance, and the House Committee on Ways and Means, a report detailing the weaknesses in IRS’ acquisition and financial management processes and controls that permitted Cyberfile mismanagement (e.g., permitted IRS to disregard system acquisition policies and procedures, disregard federal acquisition regulations, and provide inadequate oversight of NTIS system development and acquisition efforts); actions that have been taken to ensure that these weaknesses in IRS’ processes and controls have been corrected and that resulting mismanagement does not recur; and IRS’ plans for Cyberfile, including a business case analysis addressing costs, mission-related benefits and technological risks, schedule and milestones, and acquisition strategy.
Ensure that (a) IRS’ Chief Financial Officer adjusts the obligations and costs recorded for Cyberfile to reflect the actual obligations and costs associated with the interagency agreement with NTIS and (b) NTIS identifies all obligations and costs that can be avoided while Cyberfile is suspended and takes needed contractual action to do so. Report the acquisition weaknesses as material weaknesses in the agency’s system of internal controls under the Federal Managers’ Financial Integrity Act to the extent they remain uncorrected at the close of fiscal year 1996 and reassess these controls periodically to ensure they are adequate and are being adhered to as required by the act. We recommend that, before permitting NTIS to resume work on the Cyberfile project or accept new systems development projects from other federal agencies (e.g., work NTIS solicits, such as providing information management solutions, performing program management and software development, and building state-of-the-art customized programs), the Secretary of Commerce: Provide to the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, the Senate and House Appropriations Committees, the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, a report detailing the weaknesses in NTIS’ acquisition and financial management processes and controls that permitted Cyberfile mismanagement (e.g., permitted NTIS to disregard procurement laws and regulations and dispense with acceptable financial accounting practices); and actions that have been taken to ensure that these weaknesses in NTIS’ processes and controls have been corrected and that resulting mismanagement does not recur. Ensure that NTIS’ Director immediately identifies all costs that can be avoided while Cyberfile is suspended and takes needed contractual action to do so. Rescind all charges made to IRS associated with NTIS mismanagement, such as costs and fees for prompt payment penalties. Rescind management fees for all items purchased from existing government contracts. Report the acquisition and financial management weaknesses as weaknesses in the agency’s system of internal controls under the Federal Managers’ Financial Integrity Act to the extent they remain uncorrected at the close of fiscal year 1996 and reassess these controls periodically to ensure they are adequate and are being adhered to as required by the act. In commenting on our report, Treasury agreed with our findings and recommendations. It stated that it shared our concerns regarding IRS’ management of the Cyberfile project and that the experience with the project underscores the importance of IRS implementing our recommendations. In its comments, IRS agreed that Cyberfile was not successful and had encountered problems, even though IRS expected to expand its technical capability by using NTIS. IRS explained that it is conducting an internal review of Cyberfile to identify a full range of corrective actions. Commerce also supported many of our recommendations. However, it disagreed that NTIS should (1) rescind management fees associated with ordering equipment from existing government contracts and (2) refrain from accepting new projects from other agencies until the reported weaknesses are corrected. 
First, in refusing to rescind the management fees, Commerce stated that IRS agreed to pay these fees on equipment ordered from existing government contracts “for its own convenience,” and that NTIS was entitled by the interagency agreement to collect them. This position misses the point of the recommendation. The issue is not whether Commerce is entitled to assess these charges under the interagency agreement (the report explicitly states that it is), but rather whether these charges represent judicious management of federal funds. In executing an interagency agreement, all parties are required to ensure that the best interests of the government are served, and that federal funds are prudently expended. Charging IRS an $89,000 management fee for purchasing equipment from an existing contract administered by the IRS itself, and, in addition, hundreds of thousands of dollars in unnecessary fees for placing orders with other federal agencies that IRS could have placed itself, is not judicious management of federal funds and is not in the best interest of the federal government. Second, Commerce said that it would not refrain from accepting new projects from other agencies before the causes of Cyberfile mismanagement had been identified, corrected, and reported to the Congress, because most NTIS projects involve routine information dissemination. This recommendation was not intended to address NTIS projects involving only routine information dissemination. Our intent was to ensure that NTIS accepted no new systems analysis, development, or management projects, such as those solicited on NTIS’ Internet site (i.e., providing other agencies with information management solutions, performing program management and software development, and building state-of-the-art customized programs) while weaknesses in NTIS acquisition and financial management processes persist. We have modified the recommendation to state our intention more precisely. In its response, Commerce also took the position that (1) the project took longer than the scheduled 6 months because IRS increased systems requirements after major milestones were met and (2) when IRS suspended the Cyberfile project in May 1996, the system was near completion. However, as we testified in March 1996, there was no formal process in place to define, manage, and control Cyberfile systems requirements. For example, there were no established security requirements or requirements baseline. Further, since Cyberfile was developed using undisciplined and ad hoc software development processes, NTIS has no analytical basis to determine whether the system was “near completion,” when it would be complete, or how much it would cost. Finally, Commerce claimed that it did not have enough time to review the facts in the draft report. However, NTIS was well aware of all the facts and had commented on them orally and in writing before the draft report was sent. Specifically, before sending the draft report, we provided NTIS with written statements detailing the facts, held meetings with NTIS to discuss the facts on July 26, August 2, and August 6, 1996, and received and responded to NTIS’ written comments on the facts. We then sent Commerce the draft report on August 8, 1996, and allowed 8 days for the response. Given that the facts already had been thoroughly discussed, this should have been adequate time for a complete review.
As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its date. At that time, we will send copies to the Ranking Minority Member of the Senate Committee on Governmental Affairs as well as the Chairmen and the Ranking Minority Members of the House Committee on Government Reform and Oversight; the Senate Committee on Finance; the House Committee on Ways and Means; the Senate and House Committees on Appropriations; the Subcommittees on Treasury, Postal Service and General Government of the Senate and House Appropriations Committees; the Senate Committee on Commerce, Science, and Transportation; and the House Committee on Science. We are also sending copies to the Secretary of the Treasury, the Secretary of Commerce, Commissioner of the Internal Revenue Service, the Director of the National Technical Information Service, the Director of the National Institute of Standards and Technology, and the Director of the Office of Management and Budget. Copies will also be made available to others upon request. If you have questions about this letter, please contact me at (202) 512-6412. Major contributors are listed in appendix V. To determine IRS’ rationale for selecting NTIS to develop and acquire Cyberfile, we reviewed IRS policies and procedures for initiating and justifying new information system projects and the documentation that IRS prepared for the Cyberfile project in accordance with the guidance. We also reviewed NTIS’ Cyberfile study and proposal as well as the August 1995 interagency agreement between IRS and NTIS and supporting documentation. Finally, we reviewed IRS’ and NTIS’ December 1994 interagency agreement to develop an electronic bulletin board for tax forms. We interviewed IRS and NTIS program and information system officials to understand (1) why NTIS was considered to develop Cyberfile, (2) how IRS evaluated NTIS, and (3) how NTIS performed on other projects done for IRS. We also coordinated with IRS’ internal auditors, reviewing their audit memoranda and write-ups to ensure no duplication of effort. To determine whether IRS and NTIS followed applicable procurement laws and regulations in acquiring Cyberfile equipment and services, we reviewed the Competition in Contracting Act, the Economy Act, the Brooks ADP Act, the Federal Acquisition Regulation, the Federal Information Resources Management Regulation, and SBA’s Section 8(a) regulations. We also examined procurement policies and procedures for IRS and NTIS, including the IRS policy on interagency agreements. We reviewed pertinent Cyberfile contract files to document the chronology of events and verified them through interviews with IRS, NTIS, National Institute of Standards and Technology, and SBA procurement officials. We then compared the contracting actions with the laws and regulations to assess their appropriateness. We also interviewed Department of Commerce Inspector General staff, who were reviewing procurement and other management practices at NTIS, to confirm our understanding of Commerce’s procurement processes and verify our findings. To determine if Cyberfile purchases were properly accounted for and were cost-effective, we worked in conjunction with IRS’ internal auditors who were performing a full inventory of all purchases related to Cyberfile. For selected transactions, we compared obligation and disbursement dates to dates recorded in the accounting records and reviewed supporting documentation. 
We also reviewed procurement files to verify the validity of obligations and disbursements and reviewed related interagency agreements and contracts. In addition, we contacted the federal agency personnel working with NTIS to renovate space for the Cyberfile computer center. Our work was performed at IRS headquarters in Washington, D.C.; the IRS facilities in Bethesda and Oxon Hill, Maryland; the Department of Commerce headquarters in Washington, D.C.; NTIS in Springfield, Virginia; the National Institute of Standards and Technology in Gaithersburg, Maryland; SBA headquarters and Washington District Office in Washington, D.C.; and the technical services contractor’s location in Bethesda, Maryland. Our work was conducted from April 1996 through early August 1996. We performed our work in accordance with generally accepted government auditing standards. Frank Maguire, Senior Attorney
Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) acquisition of Cyberfile, an electronic filing system that the National Technical Information Service (NTIS) is developing for IRS, focusing on whether: (1) the IRS decision to use NTIS to develop Cyberfile was based on sound analysis; (2) IRS and NTIS followed applicable procurement laws and regulations; (3) IRS and NTIS properly accounted for Cyberfile obligations and costs; and (4) IRS cost-effectively acquired equipment and services. GAO found that: (1) IRS did not adequately analyze its requirements, consider alternative ways to satisfy its requirements, prepare a strategy for how it would acquire the most cost-effective alternative, or assess NTIS ability to develop, deliver, and operate an electronic filing system before deciding to use NTIS to develop Cyberfile; (2) IRS selected NTIS because of expediency and its belief that NTIS could meet a delivery date of February 1996; (3) IRS suspended development after advancing $17.1 million to NTIS; (4) IRS is reevaluating the project, since NTIS did not deliver it on time; (5) IRS cited the Brooks Act for its authority to procure Cyberfile, but it did not fully comply with the implementing regulation's requirements; (6) IRS did not obtain the proper delegation of procurement authority from the Treasury Department; (7) NTIS did not obtain the Small Business Administration's (SBA) approval to modify an existing sole-source Section 8(a) contract to add the Cyberfile project for a total cost of $3.3 million, and violated SBA rules for competing the procurement among eligible firms; (8) NTIS did not hold the contractor accountable for delivery dates and costs; (9) IRS did not ensure that NTIS efficiently and effectively managed the project; (10) IRS understated Cyberfile obligations and improperly accounted for the $17.1-million NTIS advance; (11) NTIS did not properly document significant financial transactions or record obligations and costs; and (12) IRS did not implement adequate controls to ensure that it did not incur excess costs after the project's suspension.
Interior, working with the Department of Agriculture’s Forest Service, has taken steps to help manage perhaps the most daunting challenge to its resource protection mission—protecting lives, private property, and federal resources from the threats of wildland fire. But concerns remain. Interior also faces challenges in managing oil and gas operations on federal lands, adapting to climate change, and resolving natural resource conflicts through collaborative management. The wildland fire problems facing our nation continue to grow. The average annual acreage burned by wildland fires has increased by approximately 70 percent since the 1990s, and appropriations for the federal government’s wildland fire management activities tripled from about $1 billion in fiscal year 1999 to nearly $3 billion in fiscal year 2007. As we have previously reported, a number of factors have contributed to worsening fire seasons and increased firefighting expenditures, including an accumulation of fuels resulting from past land management practices; drought and other stresses, in part related to climate change; and an increase in human development in or near wildlands. While Agriculture’s Forest Service receives the majority of fire management resources, Interior agencies—the National Park Service (NPS); the Bureau of Indian Affairs (BIA); the U.S. Fish and Wildlife Service (FWS); and, particularly, the Bureau of Land Management (BLM)—are key partners in responding to the threats of wildland fire. Consequently, most of our work and recommendations on wildland fire management address agencies in both departments. Specifically, we have called on the agencies to develop a cohesive strategy that identifies options and associated funding to reduce potentially hazardous vegetation and address wildland fire problems. In 1999, to address the problem of excess fuels and their potential to increase the severity of wildland fires and the cost of suppression efforts, we recommended that a cohesive strategy be developed to identify the available long-term options for reducing fuels and the associated funding requirements. Six years later, in 2005, we reiterated the need for a cohesive strategy and broadened our recommendation’s focus to better address the interrelated nature of fuel reduction efforts and wildland fire response. In January 2009, agency officials told us they were working to create such a cohesive strategy, although they could not provide an estimate of when it would be completed. We have also called on the agencies to establish clear goals and a strategy to help contain wildland fire costs. In 2007, we reported that the agencies were taking a number of steps intended to help contain wildland fire costs, but had not clearly defined cost-containment goals or developed a strategy for achieving those goals. Agency officials identified several documents that they believed provide clearly defined goals and objectives that make up Interior’s strategy to contain costs. However, the documents lack the clarity and specificity officials in the field need to help manage and contain wildland fire costs. We therefore continue to believe that our recommendations, if effectively implemented, would help the agencies better manage their cost-containment efforts and improve their ability to contain wildland fire costs. We have further called on the agencies to continue to improve their processes for allocating fuel reduction funds and selecting fuel reduction projects.
Also in 2007, we identified several shortcomings in the agencies’ processes for allocating fuel reduction funds to field units and selecting fuel reduction projects, shortcomings that limited the agencies’ ability to ensure that funds are directed where they will reduce risk most effectively. While Interior has taken steps to improve its processes for allocating fuel reduction funds and the information it uses in selecting fuel reduction projects, we believe that Interior must continue these efforts so that it can more effectively use its limited fuel reduction dollars. We have also called on the agencies to take steps to improve their use of a new interagency budgeting and planning tool. In 2008, we reported on the Forest Service’s and Interior’s development of a new planning tool known as fire program analysis (FPA). FPA was intended, among other things, to allow the agencies to analyze potential combinations of firefighting assets and potential strategies for reducing fuels and fighting fires so that they could determine the most cost-effective mix of assets and strategies. While recognizing that FPA represents a significant step forward and shows promise in achieving certain of its objectives, we believe the agencies’ approach to FPA’s development hampers its ability to meet other key objectives. We made a number of recommendations designed to enhance FPA and the agencies’ ability to use it, and Interior, in conjunction with the Forest Service, has identified several steps it is considering taking to do so. It is not yet clear how successful these steps will be. Furthermore, the steps the agencies outlined do not address all the shortcomings we identified. We continue to believe agency improvements are essential if the full potential of FPA is to be realized. The number of oil and gas operations that are permitted by BLM for access to federal oil and gas resources has increased dramatically—more than quadrupling from fiscal year 1999 to fiscal years 2006 and 2007—in part as a result of the desire to reduce the country’s dependence on foreign sources of oil and gas. In June 2005, we reported that BLM has struggled to deal with the increase in the permitting workload while also carrying out its responsibility to mitigate the impacts of oil and gas development on land that it manages. Overall, BLM officials told us that staff had to devote increasing amounts of time to processing drilling permits, leaving less time to ensure the mitigation of the environmental impacts of oil and gas development. While the Interior, Environment, and Related Agencies Appropriation Act of Fiscal Year 2008 required BLM to charge a $4,000 processing fee for drilling permits, the act provided that the appropriation for permit processing would be reduced by the amount of fees received; thus the fee did not provide any additional resources for BLM to increase its monitoring and enforcement activities for oil and gas development. In its fiscal year 2009 budget request, BLM requested authority to (1) permanently implement a cost recovery fee for processing applications for permits to drill, (2) set the cost recovery fee at $4,150 for fiscal year 2009, and (3) deposit the revenues generated from the cost recovery fee in BLM’s Service Charges, Deposits and Forfeitures Account. BLM estimated the cost recovery fee would generate $34 million for fiscal year 2009. Within the energy and minerals budget for fiscal year 2009, BLM also requested a net increase of $7.8 million for oil and gas activities.
Just as we have had concerns about BLM’s protection of environmental resources from oil and gas activities, we have had concerns, as we reported in 2003, that FWS’s oversight of oil and gas operations on wildlife refuge lands was not adequate. For example, we found that some refuge managers took extensive measures to oversee operations and enforce environmental standards, while others exercised little or no control. Such disparities occurred for two primary reasons. First, FWS had not officially determined its authority to require permits—which would include environmental conditions to protect refuge resources—of all oil and gas operations in refuges; we believe the agency has such authority. Second, refuge managers lacked guidance, adequate staffing levels, and training to properly oversee oil and gas activities. We also found that FWS was not collecting complete and accurate information on damage to refuge lands as a result of oil and gas operations and not identifying the steps needed to address that damage. In June 2007, we reported that FWS had generally not taken sufficient actions to address five of the six recommendations we had made in 2003 to improve FWS’s management and oversight of oil and gas activities on national wildlife refuges. A growing body of evidence shows that increasing concentrations of greenhouse gases—primarily carbon dioxide, methane, and nitrous oxide—in the Earth’s atmosphere have resulted in a warmer global climate system, among other changes. In August 2007, we reported that, according to experts, federal land and water resources are vulnerable to a wide range of effects from climate change, some of which are already occurring. These effects include (1) physical effects, such as droughts, floods, glacial melting, and sea level rise; (2) biological effects, such as increases in insect and disease infestations, shifts in species distribution, and changes in the timing of natural events; and (3) economic and social effects, such as adverse impacts on tourism, infrastructure, fishing, and other resource uses. BLM, FWS, and NPS have not made climate change a priority, and the agencies’ strategic plans do not specifically address it. To better enable federal resource management agencies to take into account the existing and potential future effects of climate change on federal resources, we recommended that the Secretary of the Interior, together with the heads of two other departments, develop guidance incorporating agencies’ best practices that advises managers on how to address climate change effects on the resources they manage. Interior and the other agencies generally agreed with our recommendation. The effects of a warmer climate have been clearly evident in Alaska. In December 2003, we reported that coastal villages in Alaska are becoming more susceptible to flooding and erosion in part because rising temperatures cause protective shore ice to form later in the year, leaving the villages vulnerable to fall storms. In addition, rising temperatures in recent years have led to widespread thawing of the permafrost (permanently frozen subsoil that is found in approximately 80 percent of Alaska), causing serious damage. At that time, we found that flooding and erosion affected 184 out of 213, or 86 percent, of Alaska Native villages to some extent, and four villages in imminent danger planned to relocate.
Interior's management of its vast federal estate is largely characterized by the struggle to balance the demand for greater use of its resources with the need to conserve and protect them for the benefit of future generations. In February 2008, we reported that conflicts over the use of our nation's natural resources, along with increased ecological problems, have led land managers to seek cooperative means to resolve natural resource conflicts and problems. Collaborative resource management is one such approach that communities began using in the 1980s and 1990s. In 2004, an executive order on cooperative conservation encouraged such efforts. Experts generally view collaborative resource management—involving public and private stakeholders in natural resource decisions—as an effective approach for managing natural resources. The benefits that result from using collaborative resource management include less conflict and litigation and improved natural resource conditions, according to experts. Many experts also noted that there are limitations to the approach, such as the time and resources it takes to bring people together to work on a problem and reach a decision. BLM, FWS, NPS, and Agriculture's Forest Service face challenges in determining whether to participate in a collaborative effort, measuring participation and monitoring results, and sharing agency and group experiences. To enhance the federal government's support of and participation in collaborative resource management efforts, we recommended that the Council on Environmental Quality, working with the Departments of the Interior and Agriculture, take several actions, including the preparation of a written plan identifying goals, actions, and time frames for carrying out cooperative conservation activities. Interior generally agreed with our recommendations. We have reported on management weaknesses in Indian and island community programs for a number of years—most recently on serious delays in BIA's program for determining whether the department will accept land in trust and the need to assist seven island communities—four U.S. territories and three sovereign island nations—with long-standing financial and program management deficiencies. BIA is the primary federal agency charged with implementing federal Indian policy and administering the federal trust responsibility for about 2 million American Indians and Alaska Natives. BIA provides basic services to 562 federally recognized Indian tribes throughout the United States, including natural resources management on about 54 million acres of Indian trust lands. Trust status means that the federal government holds title to the land in trust for tribes or individual Indians; land taken in trust is no longer subject to state and local property taxes and zoning ordinances. In 1980, the department established a regulatory process intended to provide a uniform approach for taking land in trust. While some state and local governments support the federal government's taking additional land in trust for tribes or individual Indians, others strongly oppose it because of concerns about the impacts on their tax base and jurisdictional control. We reported in July 2006 that while BIA generally followed its regulations for processing land in trust applications from tribes and individual Indians, it had no deadlines for making decisions on them. 
Specifically, the median processing time for the 87 land in trust applications with decisions in fiscal year 2005 was 1.2 years—ranging from 58 days to almost 19 years. We recommended, among other things, that the department move forward with adopting revisions to the land in trust regulations that include (1) specific time frames for BIA to make a decision once an application is complete and (2) guidelines for providing state and local governments more information on the applications and a longer period of time to provide meaningful comments on the applications. While the department agreed with our recommendations, it has not revised the land in trust regulations. The Secretary of the Interior has varying responsibilities to the island communities of American Samoa, Guam, the Commonwealth of the Northern Mariana Islands, and the U.S. Virgin Islands, all of which are U.S. territories, as well as to the Federated States of Micronesia, the Republic of the Marshall Islands, and the Republic of Palau, which are sovereign nations linked with the United States through Compacts of Free Association. The Office of Insular Affairs (OIA), which carries out the department's responsibilities for the island communities, is to assist the island communities in developing more efficient and effective government by providing financial and technical assistance and to help manage relations between the federal government and the island governments by promoting appropriate federal policies. The island governments have had long-standing financial and program management deficiencies. In December 2006, we reported on serious economic, fiscal, and financial accountability challenges facing American Samoa, Guam, the Commonwealth of the Northern Mariana Islands, and the U.S. Virgin Islands. The economic challenges stem from dependence on a few key industries, scarce natural resources, small domestic markets, limited infrastructure, shortages of skilled labor, and reliance on federal grants to fund basic services. In addition, efforts to meet formidable fiscal challenges and build strong economies are hindered by financial reporting that does not provide timely and complete information to management and oversight officials for decision making. As a result of these problems, numerous federal agencies have designated these governments as "high-risk" grantees. To increase the effectiveness of the federal government's assistance to these island communities, we recommended, among other things, that the department increase coordination activities with other federal grant-making agencies on issues of common concern relating to the insular area governments. The department agreed with our recommendations, stating that they were consistent with OIA's top priorities and ongoing activities. We will continue to monitor OIA's actions on our recommendations. Also in December 2006, we reported on challenges facing the Federated States of Micronesia and the Republic of the Marshall Islands. In 2003, the United States approved amended compacts with the countries by signing Compacts of Free Association with the two governments. The amended compacts provide the countries with a combined total of $3.6 billion from 2004 to 2023, with the annual grants declining gradually. The single audits for 2004 and 2005 for both countries reported (1) weaknesses in their ability to account for the use of compact funds and (2) noncompliance with requirements for major federal programs. 
We recommended, among other things, that the department work with the countries to establish plans to minimize the impact of declining assistance and to fully develop a reliable mechanism for measuring progress towards program goals. Furthermore, in June 2007, we reported that trust funds for both nations may not provide sustainable income after the compact grants end, and we recommended, among other things, improvements in trust fund administration. The department agreed with the recommendations in our December 2006 and June 2007 reports. In our June 2008 assessment of the Compact of Free Association with the Republic of Palau, we reported on the challenges Palau faced in dealing with persistent financial management weaknesses and with achieving long-term economic self-sufficiency. We recommended that the department formally consult with the government of Palau regarding Palau's financial management challenges and target future technical assistance toward building Palau's financial management capacity. The department concurred with our recommendations. Because the Department of the Interior is the steward of more than 500 million acres of federal land, land consolidation through sales and acquisitions and land management are important functions for the department. However, the Federal Land Transaction Facilitation Act of 2000, which, in part, was intended to facilitate land consolidation, has had limited success. In February 2008, we reported that BLM had raised $95.7 million in revenue through May 2007 under the Federal Land Transaction Facilitation Act. About 92 percent of this revenue came from land transactions in Nevada. However, the four land management agencies (BLM, FWS, NPS, and Agriculture's Forest Service) have spent only $13.3 million of the revenues raised: $10.1 million to acquire certain nonfederal lands, primarily those lying within the boundaries of national parks, forests, wildlife refuges, and other designated areas, known as inholdings, and $3.2 million for administrative expenses to prepare land for sale. The agencies face several challenges to completing future land acquisitions under the act. Most notably, the act requires that the agencies use most of the funds to purchase land in the state in which the funds were raised; this restriction has had the effect of making little revenue available outside of Nevada. If Congress decides to reauthorize the act, we suggested that it consider including additional lands for sale and greater flexibility for acquisitions. We also made a number of recommendations to the agencies to improve implementation of and compliance with the act. Interior generally agreed with our recommendations. In addition, Interior's Fish and Wildlife Service is unlikely to achieve its goals to protect certain migratory bird habitat, and it is generally not managing a majority of its farmlands. In September 2007, we reported that since the inception of the Small Wetlands Acquisition Program in the late 1950s, FWS has acquired and permanently protected about 3 million acres of wetlands and grasslands in the Prairie Pothole Region. However, at the current pace of acquisitions, it could take FWS about 150 years and billions of dollars to acquire and permanently protect as much as possible of an additional 12 million acres of "high-priority" habitat. Some emerging market forces suggest that FWS may have only several decades before most of its goal acreage is converted to agricultural uses. 
We also reported in September 2007 that, according to FWS data, since 1986, the Service has received at least 1,400 conservation easements and fee-simple farmlands covering 132,000 acres from the Department of Agriculture's Farm Service Agency. However, FWS is generally not managing a majority of its farmlands. From 2002 through 2006, FWS inspected an annual average of only 13 percent of these lands. Because the farmlands are now part of the National Wildlife Refuge System, we found that FWS cannot dispose of unwanted farmlands. As a result, we recommended that FWS develop a proposal to Congress seeking authority for additional flexibility in dealing with farmlands FWS determines may not be in the best interests of the National Wildlife Refuge System. Interior agreed with our recommendations. Interior also faces a challenge in adequately maintaining its facilities and infrastructure. The department owns, builds, purchases, and contracts services for assets such as visitor centers, schools, office buildings, roads, bridges, dams, irrigation systems, and reservoirs; however, repairs and maintenance on these facilities have not been adequately funded. The deterioration of facilities can impair public health and safety, reduce employees' morale and productivity, and increase the need for costly major repairs or early replacement of structures and equipment. In November 2008, the department estimated that the deferred maintenance backlog for fiscal year 2008 was between $13.2 billion and $19.4 billion (see table 1). Interior is not alone in facing daunting maintenance challenges. In fact, we have identified the management of federal real property, including deferred maintenance issues, as a governmentwide high-risk area since 2003. Interior has made progress addressing prior recommendations to improve information on the maintenance needs of NPS facilities, BIA schools, and BIA irrigation projects. For example, in February 2006, we reported that BIA planned to hire experts in engineering and irrigation to thoroughly assess the condition of all 16 irrigation projects every 5 years to further refine the deferred maintenance estimate for these projects. It completed its first assessment in July 2005 and expects to complete all 16 assessments by 2010. Although Interior has made a concentrated effort to address its deferred maintenance backlog, the dollar estimate of the backlog has continued to escalate. The 2008 backlog estimate is more than 60 percent higher than the 2003 estimate of between $8.1 billion and $11.4 billion. The funds included in the recently enacted stimulus package for Interior may reverse this trend. Interior collects, on average, over $10 billion annually in mineral lease revenues, but many material weaknesses in federal oil and gas management and revenue collection processes and practices place an unknown but significant proportion of royalties and other oil and gas revenues at risk. These weaknesses also raise questions about whether Interior is collecting an appropriate amount of revenue for the rights to explore for, develop, and produce oil and gas on federal lands and waters. With regard to overall revenue collection, in September 2008, we reported that compared with other countries, the United States receives one of the lowest shares of revenue for its oil and gas resources. A number of these other countries and resource owners had responded to higher oil and gas prices by increasing their share of oil and gas revenues to potentially generate substantially more revenue. 
However, despite significant changes in the oil and gas industry and widely fluctuating prices, Interior has not systematically reexamined in over 25 years how the federal government is compensated for oil and gas on federal lands. Furthermore, we have found that Interior does less to encourage development of its leases than do some state and private landowners. Also in September 2008, we reported that Interior's Minerals Management Service's (MMS) management of cash royalty collection lacks key controls, such as the ability to effectively monitor and validate oil and gas company adjustments to self-reported royalty data, including those made after audits have been completed. Furthermore, MMS's royalty compliance efforts rely too heavily on self-reported data, but the more consistent use of available third-party data as a check on self-reported data could provide greater assurance that royalties are accurately assessed and paid. In another September 2008 report, we found that for MMS's Royalty-in-Kind program, in which companies provide the federal government with oil or gas in lieu of cash royalty payments, MMS's oversight of natural gas volumes is less robust than its oversight of oil volumes—a finding that raises questions about the accuracy of company-reported volumes of natural gas from which MMS must determine whether it is receiving its appropriate share of production. In addition, we found that MMS's annual reports to Congress do not fully describe the performance of the Royalty-in-Kind program and, in some instances, may overstate the benefits of the program. Concerning workforce issues, we reported in June 2005 that BLM had encountered persistent problems in hiring and retaining sufficient and adequately trained staff to keep up with an increasing workload as a result of rapid increases in oil and gas operations on federal lands. For example, between 1999 and 2004, when applications for permits to drill more than tripled, BLM was unable to keep up with the commensurate increase in its workload, in part, as a result of an ineffective workforce planning process, the lack of key data on workload activities, and a lack of resources. BLM's inability to attract and retain sufficiently trained staff has kept the agency from meeting requirements to inspect the drilling and production of oil and gas on federal lands. Lack of inspection puts federal revenues at risk because inspections have found violations, including errors in the volumes of oil and gas that operators reported. Furthermore, in one of our September 2008 reports, we reported that Interior is not meeting statutory or agency targets for inspecting certain onshore and offshore leases and metering equipment for measuring oil and gas production, raising questions about the accuracy of company-reported oil and gas production figures. As a result, and based on Interior's comments, we recommended that Interior report to Congress any year in which it does not meet its legal and agency requirements for completing production inspections, along with the cause and a plan for achieving compliance. In 2007 and 2008, we reported on MMS's implementation of the Outer Continental Shelf Deep Water Royalty Relief Act of 1995 and other authorities for granting royalty relief for oil and gas leases. We found that MMS had issued lease contracts in 1998 and 1999 that failed to include price thresholds above which royalty relief would no longer be applicable. 
As a result, large volumes of oil and natural gas are exempt from royalties, which significantly reduces the amount of royalty revenues that the federal government can collect. At least $1 billion in royalties has already been lost because of this failure to include price thresholds. We developed a number of scenarios that showed that forgone royalties from leases issued between 1996 and 2000 under the act could be as high as $53 billion. However, there is much uncertainty in these scenarios as a result of the inherent difficulties in estimating future production, ongoing litigation over MMS's authority to set price thresholds for some leases, and widely fluctuating oil and gas prices. Other authorities for granting royalty relief may also affect future royalty revenues. Specifically, under discretionary authority, the Secretary of the Interior administers programs granting relief for certain deep water leases issued after 2000, certain gas wells drilled in shallow waters, and wells nearing the end of their productive lives. In addition, the Energy Policy Act of 2005 mandates relief for leases issued in the Gulf of Mexico during the 5 years following the act's passage, provides some relief for some gas wells that would not have previously qualified for royalty relief, and addresses relief in certain areas of Alaska where there currently is little or no production. Additional revenues or financial assurances could be generated through hardrock mining operations by amending the General Mining Act of 1872 so that the federal government could collect federal royalties on minerals extracted from U.S. mineral rights and by requiring adequate financial assurances from hardrock mining operations to fully cover estimated reclamation costs. Additional revenues could also be generated by increasing the grazing fee for public lands managed by Interior's Bureau of Land Management. The General Mining Act of 1872 helped open the West by allowing individuals to obtain exclusive rights to mine billions of dollars' worth of hardrock minerals from federal lands without having to pay a federal royalty. In July 2008, we reported that the 12 western states, including Alaska, assess multiple types of royalties on mining operations. States may use similar names for the royalties they assess, but these can vary widely in their forms and rates. Unlike the federal government, these states charge royalties that allow them to share in the proceeds from hardrock minerals extracted from state-owned lands, as well as levy taxes that function like royalties on private, state, and federal lands. Under BLM regulations, hardrock mining operators who extract gold, silver, copper, and other valuable mineral deposits from land belonging to the United States are required to provide financial assurances, before they begin exploration or mining, to guarantee that the costs to reclaim land disturbed by their operations are paid. However, we reported in June 2005 that BLM did not have a process for ensuring that adequate assurances were in place. When operators with insufficient financial assurances fail to reclaim BLM land disturbed by hardrock mining operations, BLM is left with public land that poses risks to the environment and public health and safety and that requires millions of federal dollars to reclaim. In March 2008, we found that the financial assurances required by BLM were not adequate to fully cover estimated reclamation costs. 
According to BLM, mine operators had provided financial assurances valued at approximately $982 million to guarantee reclamation costs for 1,463 hardrock operations on BLM land. BLM also estimated that 52 mining operations had financial assurances that amounted to about $28 million less than needed to fully cover estimated reclamation costs. However, we found that the financial assurances for these 52 operations were in fact about $61 million less than needed to fully cover estimated reclamation costs. The $33 million difference between our estimated shortfall and BLM's occurs because BLM calculated its shortfall by comparing the total value of financial assurances in place with the total estimated reclamation costs. This calculation approach has the effect of offsetting the shortfalls in some operations with the financial assurances of other operations. However, financial assurances that are greater than the amount required for an operation cannot be transferred to another operation that has inadequate financial assurances. BLM officials agreed that it would be valuable to report the dollar value of the difference between the financial assurances in place and those required for the operations where financial assurances are inadequate. Ten federal agencies manage grazing on over 22 million acres, with BLM and the Forest Service managing the vast majority of this activity. In total, federal grazing revenue amounted to about $21 million in fiscal year 2004, although grazing fees differ by agency. For example, in 2004, BLM and the Forest Service charged $1.43 per animal unit month, while other federal agencies charged between $0.29 and $112 per animal unit month. We reported in 2005 that while BLM and the Forest Service generally charged much lower fees than other federal agencies and private entities, these fees reflect legislative and executive branch policies to support local economies and ranching communities. Specifically, BLM fees are set by a formula that expired in 1985, but was extended indefinitely by executive order in 1986. This formula takes into account a rancher's ability to pay and, therefore, the purpose is not primarily to recover the agencies' costs or capture the fair market value of forage. Instead, the formula is designed to set a fee that helps support ranchers and the western livestock industry. Other federal agencies employ market-based approaches to setting grazing fees. Using its formula, BLM collected about $12 million in receipts in fiscal year 2004, while its costs for implementing its grazing program, including range improvement activities, were about $58 million. Were BLM to implement approaches used by other agencies to set grazing fees, it could help close the gap between expenditures and receipts and more closely align its fees with market prices. Instead, for 2007, 2008, and 2009, the grazing fee was set at $1.35 per animal unit month, the lowest level allowable under the executive order. We recognize, however, that the purpose and size of BLM's grazing fee are ultimately for Congress to decide. Mr. Chairman, this concludes our prepared statement. We would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Robin M. Nazzaro or Frank Rusco at (202) 512-3841 or nazzaror@gao.gov and ruscof@gao.gov, respectively. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Individuals making key contributions to this testimony include Jeffery D. Malcolm, Assistant Director, and Ross Campbell. Also contributing to this testimony were Ron Belak, Jonathan Dent, Glenn Fischer, Emil Friberg, Steve Gaty, Richard P. Johnson, Marissa Jones, Carol Kolarik, Carol Herrnstadt Shulman, and Desirée Thorp. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 2003. High-Risk Series: Federal Real Property. GAO-03-122. Washington, D.C.: January 2003. 2009 Congressional and Presidential Transition: Department of the Interior (Web-based—http://www.gao.gov/transition_2009/agency/doi/). Posthearing Questions: Major Management Challenges at the Department of the Interior. GAO-07-659R. Washington, D.C.: March 28, 2007. Department of the Interior: Major Management Challenges. GAO-07-502T. Washington, D.C.: February 16, 2007. Major Management Challenges at the Department of the Interior (2005 Web-based Update—http://www.gao.gov/pas/2005/doi.htm). Wildland Fire Management: Interagency Budget Tool Needs Further Development to Fully Meet Key Objectives. GAO-09-68. Washington, D.C.: November 24, 2008. Wildland Fire Management: Federal Agencies Lack Key Long- and Short-Term Management Strategies for Using Program Funds Effectively. GAO-08-433T. Washington, D.C.: February 12, 2008. Wildland Fire Management: Better Information and a Systematic Process Could Improve Agencies' Approach to Allocating Fuel Reduction Funds and Selecting Projects. GAO-07-1168. Washington, D.C.: September 28, 2007. Wildland Fire Management: Lack of Clear Goals or a Strategy Hinders Federal Agencies' Efforts to Contain the Costs of Fighting Fires. GAO-07-655. Washington, D.C.: June 1, 2007. Wildland Fire Suppression: Lack of Clear Guidance Raises Concerns about Cost Sharing between Federal and Nonfederal Entities. GAO-06-570. Washington, D.C.: May 30, 2006. Wildland Fire Management: Update on Federal Agency Efforts to Develop a Cohesive Wildland Fire Strategy. GAO-06-671R. Washington, D.C.: May 1, 2006. Wildland Fire Management: Important Progress Has Been Made, but Challenges Remain to Completing a Cohesive Strategy. GAO-05-147. Washington, D.C.: January 14, 2005. Wildland Fires: Forest Service and BLM Need Better Information and a Systematic Approach for Assessing the Risks of Environmental Effects. GAO-04-705. Washington, D.C.: June 24, 2004. Wildland Fire Management: Additional Actions Required to Better Identify and Prioritize Lands Needing Fuels Reduction. GAO-03-805. Washington, D.C.: August 15, 2003. Western National Forests: A Cohesive Strategy Is Needed to Address Catastrophic Wildfire Threats. GAO/RCED-99-65. Washington, D.C.: April 2, 1999. Endangered Species Act: Many GAO Recommendations Have Been Implemented, but Some Issues Remain Unresolved. GAO-09-225R. Washington, D.C.: December 19, 2008. Federal Land Management: Use of Stewardship Contracting Is Increasing, but Agencies Could Benefit from Better Data and Contracting Strategies. GAO-09-23. Washington, D.C.: November 13, 2008. Bureau of Land Management: Effective Long-Term Options Needed to Manage Unadoptable Wild Horses. GAO-09-77. Washington, D.C.: October 9, 2008. Wildlife Refuges: Changes in Funding, Staffing, and Other Factors Create Concerns about Future Sustainability. GAO-08-797. 
Washington, D.C.: September 22, 2008. U.S. Fish and Wildlife Service: Endangered Species Act Decision Making. GAO-08-688T. Washington, D.C.: May 21, 2008. Hardrock Mining: Information on Abandoned Mines and Value and Coverage of Financial Assurances on BLM Land. GAO-08-574T. Washington, D.C.: March 12, 2008. Yellowstone Bison: Interagency Plan and Agencies' Management Need Improvement to Better Address Bison-Cattle Brucellosis Controversy. GAO-08-291. Washington, D.C.: March 7, 2008. Natural Resource Management: Opportunities Exist to Enhance Federal Participation in Collaborative Efforts to Reduce Conflicts and Improve Natural Resource Conditions. GAO-08-262. Washington, D.C.: February 12, 2008. Climate Change: Agencies Should Develop Guidance for Addressing the Effects on Federal Land and Water Resources. GAO-07-863. Washington, D.C.: August 7, 2007. U.S. Fish and Wildlife Service: Opportunities Remain to Improve Oversight and Management of Oil and Gas Activities on National Wildlife Refuges. GAO-07-829R. Washington, D.C.: June 29, 2007. Endangered Species: Many Factors Affect the Length of Time to Recover Select Species. GAO-06-730. Washington, D.C.: September 6, 2006. Invasive Forest Pests: Lessons Learned from Three Recent Infestations May Aid in Managing Future Efforts. GAO-06-353. Washington, D.C.: April 21, 2006. Endangered Species: Time and Costs Required to Recover Species Are Largely Unknown. GAO-06-463R. Washington, D.C.: April 6, 2006. Wind Power: Impacts on Wildlife and Government Responsibilities for Regulating Development and Protecting Wildlife. GAO-05-906. Washington, D.C.: September 16, 2005. Hardrock Mining: BLM Needs to Better Manage Financial Assurances to Guarantee Coverage of Reclamation Costs. GAO-05-377. Washington, D.C.: June 20, 2005. Oil and Gas Development: Increased Permitting Activity Has Lessened BLM's Ability to Meet Its Environmental Protection Responsibilities. GAO-05-418. Washington, D.C.: June 17, 2005. Invasive Species: Cooperation and Coordination Are Important for Effective Management of Invasive Weeds. GAO-05-185. Washington, D.C.: February 25, 2005. Oil and Gas Development: Challenges to Agency Decisions and Opportunities for BLM to Standardize Data Collection. GAO-05-124. Washington, D.C.: November 30, 2004. Endangered Species: More Federal Management Attention Is Needed to Improve the Consultation Process. GAO-04-93. Washington, D.C.: March 19, 2004. Invasive Species: Clearer Focus and Greater Commitment Needed to Effectively Manage the Problem. GAO-03-1. Washington, D.C.: October 22, 2002. Indian Issues: BLM's Program for Issuing Individual Indian Allotments on Public Lands Is No Longer Viable. GAO-07-23R. Washington, D.C.: October 20, 2006. Indian Issues: BIA's Efforts to Impose Time Frames and Collect Better Data Should Improve the Processing of Land in Trust Applications. GAO-06-781. Washington, D.C.: July 28, 2006. Indian Irrigation: Numerous Issues Need to Be Addressed to Improve Project Management and Financial Sustainability. GAO-06-314. Washington, D.C.: February 24, 2006. Alaska Native Allotments: Conflicts with Utility Rights-of-Way Have Not Been Resolved Through Existing Remedies. GAO-04-923. Washington, D.C.: September 7, 2004. Columbia River Basin: A Multilayered Collection of Directives and Plans Guide Federal Fish and Wildlife Plans. GAO-04-602. Washington, D.C.: June 4, 2004. Alaska Native Villages: Most Are Affected by Flooding and Erosion, but Few Qualify for Federal Assistance. GAO-04-142. Washington, D.C.: December 12, 2003. 
Commonwealth of the Northern Mariana Islands: Managing Potential Economic Impact of Applying U.S. Immigration Law Requires Coordinated Federal Decisions and Additional Data. GAO-08-791. Washington, D.C.: August 4, 2008. American Samoa: Issues Associated with Potential Changes to the Current System for Adjudicating Matters of Federal Law. GAO-08-655. Washington, D.C.: June 27, 2008. Compact of Free Association: Palau’s Use of and Accountability for U.S. Assistance and Prospects for Economic Self-Sufficiency. GAO-08-732. Washington, D.C.: June 10, 2008. Commonwealth of the Northern Mariana Islands: Pending Legislation Would Apply U.S. Immigration Law to the CNMI with a Transition Period. GAO-08-466. Washington, D.C.: March 28, 2008. Compacts of Free Association: Trust Funds for Micronesia and the Marshall Islands May Not Provide Sustainable Income. GAO-07-513. Washington, D.C.: June 15, 2007. Compacts of Free Association: Micronesia’s and the Marshall Islands’ Use of Sector Grants. GAO-07-514R. Washington, D.C.: May 25, 2007. Compacts of Free Association: Micronesia and the Marshall Islands Face Challenges in Planning for Sustainability, Measuring Progress, and Ensuring Accountability. GAO-07-163. Washington, D.C.: December 15, 2006. U.S. Insular Areas: Economic, Fiscal, and Financial Accountability Challenges. GAO-07-119. Washington, D.C.: December 12, 2006. Compacts of Free Association: Development Prospects Remain Limited for Micronesia and the Marshall Islands. GAO-06-590. Washington, D.C.: June 27, 2006. U.S. Insular Areas: Multiple Factors Affect Federal Health Care Funding. GAO-06-75. Washington, D.C.: October 14, 2005. Compacts of Free Association: Implementation of New Funding and Accountability Requirements Is Well Underway, but Planning Challenges Remain. GAO-05-633. Washington, D.C.: July 11, 2005. American Samoa: Accountability for Key Federal Grants Needs Improvement. GAO-05-41. Washington, D.C.: December 17, 2004. Compact of Free Association: Single Audits Demonstrate Accountability Problems over Compact Funds. GAO-04-7. Washington, D.C.: October 7, 2003. Compact of Free Association: An Assessment of Amended Compacts and Related Agreements. GAO-03-890T. Washington, D.C.: June 18, 2003. Federal Land Management: Federal Land Transaction Facilitation Act Restrictions and Management Weaknesses Limit Future Sales and Acquisitions. GAO-08-196. Washington, D.C.: February 5, 2008. Prairie Pothole Region: At the Current Pace of Acquisitions, the U.S. Fish and Wildlife Service Is Unlikely to Achieve Its Habitat Protection Goals for Migratory Birds. GAO-07-1093. Washington, D.C.: September 27, 2007. U.S. Fish and Wildlife Service: Additional Flexibility Needed to Deal with Farmlands Received from the Department of Agriculture. GAO-07-1092. Washington, D.C.: September 18, 2007. Interior’s Land Appraisal Services: Action Needed to Improve Compliance with Appraisal Standards, Increase Efficiency, and Broaden Oversight. GAO-06-1050. Washington, D.C.: September 28, 2006. National Park Service: Major Operations Funding Trends and How Selected Park Units Responded to Those Trends for Fiscal Years 2001 through 2005. GAO-06-431. Washington, D.C.: March 31, 2006. Indian Irrigation Projects: Numerous Issues Need to Be Addressed to Improve Project Management and Financial Sustainability. GAO-06-314. Washington, D.C.: February 24, 2006. Recreation Fees: Comments on the Federal Lands Recreation Enhancement Act, H.R. 3283. GAO-04-745T. Washington, D.C.: May 6, 2004. 
National Park Service: Efforts Underway to Address Its Maintenance Backlog. GAO-03-1177T. Washington, D.C.: September 27, 2003. Bureau of Indian Affairs Schools: Expenditures in Selected Schools Are Comparable to Similar Public Schools, but Data Are Insufficient to Judge Adequacy of Funding and Formulas. GAO-03-955. Washington, D.C.: September 4, 2003. Bureau of Indian Affairs Schools: New Facilities Management Information System Promising, but Improved Data Accuracy Needed. GAO-03-692. Washington, D.C.: July 31, 2003. National Park Service: Status of Agency Efforts to Address Its Maintenance Backlog. GAO-03-992T. Washington, D.C.: July 8, 2003. Oil and Gas Royalties: MMS's Oversight of Its Royalty-in-Kind Program Can Be Improved through Additional Use of Production Verification Data and Enhanced Reporting of Financial Benefits and Costs. GAO-08-942R. Washington, D.C.: September 26, 2008. Mineral Revenues: Data Management Problems and Reliance on Self-Reported Data for Compliance Efforts Put MMS Royalty Collections at Risk. GAO-08-893R. Washington, D.C.: September 12, 2008. Oil and Gas Royalties: The Federal System for Collecting Oil and Gas Revenues Needs Comprehensive Reassessment. GAO-08-691. Washington, D.C.: September 3, 2008. Oil and Gas Royalties: Litigation over Royalty Relief Could Cost the Federal Government Billions of Dollars. GAO-08-792R. Washington, D.C.: June 5, 2008. Mineral Revenues: Data Management Problems and Reliance on Self-Reported Data for Compliance Efforts Put MMS Royalty Collections at Risk. GAO-08-560T. Washington, D.C.: March 11, 2008. Oil and Gas Royalties: A Comparison of the Share of Revenue Received From Oil and Gas Production by the Federal Government and Other Resources. GAO-07-676R. Washington, D.C.: May 1, 2007. Oil and Gas Royalties: Royalty Relief Will Cost the Government Billions of Dollars but Uncertainty Over Future Energy Prices and Production Levels Make Precise Estimates Impossible at this Time. GAO-07-590R. Washington, D.C.: April 12, 2007. Royalties Collection: Ongoing Problems with Interior's Efforts to Ensure a Fair Return for Taxpayers Require Attention. GAO-07-682T. Washington, D.C.: March 28, 2007. Oil and Gas Royalties: Royalty Relief Will Likely Cost the Government Billions, but the Final Costs Have Yet to Be Determined. GAO-07-369T. Washington, D.C.: January 18, 2007. Royalty Revenues: Total Revenues Have Not Increased at the Same Pace as Rising Oil and Natural Gas Prices due to Decreasing Production Sold. GAO-06-786R. Washington, D.C.: June 21, 2006. Oil and Gas Development: Challenges to Agency Decisions and Opportunities for BLM to Standardize Data Collection. GAO-05-124. Washington, D.C.: November 30, 2004. Mineral Revenues: Cost and Revenue Information Needed to Compare Different Approaches for Collecting Federal Oil and Gas Royalties. GAO-04-448. Washington, D.C.: April 16, 2004. Mineral Revenues: A More Systematic Evaluation of the Royalty-in-Kind Pilots Is Needed. GAO-03-296. Washington, D.C.: January 9, 2003. Hardrock Mining: Information on State Royalties and Trends in Mineral Imports and Exports. GAO-08-849R. Washington, D.C.: July 21, 2008. Hardrock Mining: Information on Abandoned Mines and Value and Coverage of Financial Assurances on BLM Land. GAO-08-574T. Washington, D.C.: March 12, 2008. Recreation Fees: Agencies Can Better Implement the Federal Lands Recreation Enhancement Act and Account for Fee Revenues. GAO-06-1016. Washington, D.C.: September 22, 2006. 
National Park Air Tour Fees: Effective Verification and Enforcement Are Needed to Improve Compliance. GAO-06-468. Washington, D.C.: May 11, 2006. Livestock Grazing: Federal Expenditures and Receipts Vary, Depending on the Agency and the Purpose of the Fee Charged. GAO-05-869. Washington, D.C.: September 30, 2005. Hardrock Mining: BLM Needs to Better Manage Financial Assurances to Guarantee Coverage of Reclamation Costs. GAO-05-377. Washington, D.C.: June 20, 2005. Oil and Gas Development: Increased Permitting Activity Has Lessened BLM’s Ability to Meet Its Environmental Protection Responsibilities. GAO-05-418. Washington, D.C.: June 17, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of the Interior is responsible for managing many of the nation's vast natural resources. Its agencies implement an array of programs intended to protect these precious resources for future generations while also allowing certain uses of them, such as oil and gas development and recreation. In some cases, Interior is authorized to collect royalties and fees for these uses. Over the years, GAO has reported on challenges facing Interior as it implements its programs. In addition to basic program management issues, Interior faces difficult choices in balancing its many responsibilities, and in improving the condition of the nation's natural resources and the department's infrastructure, in light of the federal deficit and long-term fiscal challenges facing the nation. This testimony highlights some of the major management challenges facing Interior today. It is based on prior GAO reports. As GAO's previous work has shown, the Department of the Interior faces major management challenges in the following six areas: (1) Strengthening resource protection; (2) Strengthening the accountability of Indian and island community programs; (3) Improving federal land acquisition and management; (4) Reducing Interior's deferred maintenance backlog; (5) Ensuring the accurate collection of royalties; and (6) Enhancing other revenue collections and financial assurances. Interior has not yet developed a cohesive strategy to address wildland fire issues, as GAO recommended in 1999 and 2005. In addition, Interior faces challenges in managing oil and gas operations on federal lands, adapting to climate change, and resolving natural resource conflicts through collaborative management. Having a land base is important to Indian tribes. Concerns remain about delays in decisions about land that Interior will take into trust status. In addition, programs for seven island communities--four U.S. territories and three sovereign island nations--continue to have financial and program management deficiencies. Because the department is the steward of more than 500 million acres of federal land, land consolidation through sales and acquisitions and land management are important functions for the department. The Federal Land Transaction Facilitation Act has had limited success, and Interior's U.S. Fish and Wildlife Service is unlikely to achieve its goals to protect certain migratory bird habitat; the Service is also generally not managing a majority of its farmlands. While Interior has improved inventory and asset management systems, the dollar estimate of the deferred maintenance backlog has continued to grow. The 2008 estimate of between $13.2 billion and $19.4 billion is more than 60 percent higher than the 2003 estimate. The funds for Interior in the recently enacted stimulus package may reverse this trend. GAO and others have found many material weaknesses in their numerous evaluations of federal oil and gas management and revenue collection processes. These weaknesses place an unknown but significant proportion of royalties and other oil and gas revenues at risk and raise questions about whether Interior is collecting an appropriate amount of revenue for the rights to explore for, develop, and produce oil and gas from federal lands and waters. 
Additional revenues or financial assurances could be generated by (1) amending the General Mining Act of 1872 to collect federal royalties on gold, silver, copper, and other valuable minerals belonging to the United States, (2) requiring adequate financial assurances from hardrock mining operations to fully cover estimated reclamation costs, and (3) increasing the grazing fee for public lands managed by Interior's Bureau of Land Management.
DOD's Real Property Management Program is governed by statute and DOD regulations, directives, and instructions that establish real property accountability and financial reporting requirements. These laws, regulations, directives, and instructions—a selection of which are discussed below—require DOD and the military departments to maintain a number of data elements about their facilities to help ensure efficient property management and thus help identify potential facility consolidation opportunities. This guidance includes Department of Defense Directive 4165.06, Real Property (Oct. 13, 2004, certified current Nov. 18, 2008); Army Regulation 405-70; the Naval Facilities Engineering Command P-78; and Air Force Policy Directive 32-10. The guidance requires, among other things, that real property records be accurate and be managed efficiently and economically. It also requires the military departments to maintain a complete and accurate real property inventory with up-to-date information, to annually certify that the real property inventory has been reconciled, and to ensure that all real property holdings under the military departments' control are being used to the maximum extent possible. Appendix II describes some of the guidance from DOD and the military departments and includes excerpts of the related requirements to manage real property. In managing the real property under their control, the military departments are responsible for implementing real property policies and programs to, among other things, hold or make plans to obtain the land and facilities they need for their own missions and for other DOD components' missions that are supported by the military departments' real property. Additionally, the military departments are required to (1) budget for and financially manage their real property so as to meet their own requirements; (2) accurately inventory and account for their land and facilities; and (3) maintain a program monitoring the use of real property to ensure that all holdings under their control are being used to the maximum extent possible consistent with both peacetime and mobilization requirements. The military departments' processes for managing and monitoring the utilization of facilities generally occur at the installation level. According to OSD guidance, inventories are to be conducted every 5 years except for those real property assets designated as historic, which are to be reviewed and physically inventoried every 3 years. According to DOD Instruction 4165.70, the military departments' real property administrators are accountable for maintaining a current inventory count of the military departments' facilities and up-to-date information regarding, among other things, the status, condition, utilization, present value, and remaining useful life of each real property asset. Inventory counts and associated information should be current as of the last day of each fiscal year. When DOD's real property is no longer needed for current or projected defense requirements, it is DOD's policy to dispose of it. In addition, DOD Instruction 4165.70 requires the military departments to periodically review their real property holdings, both land and facilities, to identify unneeded and underused property. The three military departments maintain a number of real property databases that are to be used to manage real property assets for the Army, Navy, Marine Corps, and the Air Force as shown in table 1. 
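The data elements the guidance calls for, such as status, utilization, and the date of the last physical inventory, can be thought of as fields in a simple asset record, with the 5-year and 3-year inventory cycles expressed as a currency check. The following Python sketch is only an illustration under assumed field names; it does not represent DOD's or the military departments' actual data schemas or systems.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    # Illustrative record structure; field names are hypothetical and do not
    # reflect DOD's actual real property data elements.
    @dataclass
    class RealPropertyRecord:
        rpuid: str                       # real property unique identifier
        asset_type: str                  # e.g., "building", "structure", "linear structure"
        status: str                      # e.g., "active", "excess"
        utilization_rate: Optional[int]  # whole number 0-100, or None if unreported
        is_historic: bool
        last_physical_inventory: date

    def inventory_overdue(rec: RealPropertyRecord, as_of: date) -> bool:
        """Flag assets whose last physical inventory exceeds the required cycle:
        5 years for most assets, 3 years for assets designated as historic
        (approximated here as 365 days per year)."""
        cycle_years = 3 if rec.is_historic else 5
        return (as_of - rec.last_physical_inventory).days > cycle_years * 365

    sample = RealPropertyRecord("A-0001", "building", "active", None, False, date(2007, 6, 30))
    print(inventory_overdue(sample, date(2012, 9, 30)))  # True: more than 5 years have elapsed

Representing each asset this way makes the later completeness and consistency checks straightforward, since a missing or implausible value in any field can be flagged record by record.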
OSD's Base Structure Report Fiscal Year 2013 Baseline (OSD's Base Structure Report) is a summary of DOD's real property inventory and a "snapshot" of DOD's real property data collected as of September 30, 2012, and serves as the beginning balance for fiscal year 2013. The report identifies DOD's real property assets, including buildings, structures, and linear structures, worldwide. Table 2 shows the total assets, percentages, and plant replacement values of real property assets for each of the military departments and the Washington Headquarters Services. OSD compiles and maintains the department's real property assets inventory in a single database, called the Real Property Assets Database. OSD's Real Property Assets Database contains specific reporting data on the military departments' real property records and is considered the single authoritative source for all DOD real property inventory. OSD's objectives for the Real Property Assets Database are to comply with current DOD business architecture, support the DOD standardized real property requirements, and implement DOD Instruction 4165.14: Real Property Inventory and Forecasting. The Real Property Assets Database is the source used for OSD's annual real property reporting, which includes the Federal Real Property Profile report and OSD's Base Structure Report. OSD's Base Structure Report is a snapshot of real property assets as of September 30 of the previous fiscal year and serves as the baseline for the current fiscal year. It is a consolidated summary of the three military departments' real property inventory data, submitted annually. The three military departments' real property inventory records, which are the source for compiling DOD's real property records on an annual basis, are uploaded to OSD's Real Property Assets Database and are used to compile OSD's Base Structure Report. Additionally, the Secretaries of the military departments are to certify annually that the real property inventory records have been reconciled. In September 2011, we found that as of September 30, 2010, DOD's Real Property Assets Database reported utilization data for fewer than half of DOD's total inventory of facilities and that much of the data were outdated and did not reflect the true usage of the structures. OSD stated at the time that utilization data in its database did not cover the full DOD inventory because the primary focus of the department's efforts to collect and record such data had been in response to reporting requirements from the Federal Real Property Council, which requires annual reports on utilization of five categories of buildings for the Federal Real Property Profile. However, OSD annually reports all of its real property in its Base Structure Report. Further, we found that when utilization-rate data were recorded in OSD's database, the recorded entry often did not reflect the true usage of the facilities. For example, we found that in fiscal year 2010 the real property data for the Air Force reported a utilization rate of 0 percent for 22,563 buildings that were reported to be in an active status. 
As a result, we recommended that the Secretary of Defense direct the Deputy Under Secretary of Defense for Installations and Environment to (1) develop and implement a methodology for calculating and recording utilization data for all types of facilities and modify processes for updating and verifying the accuracy of reported utilization data to reflect a facility's true status and (2) develop strategies and measures to enhance the management of DOD's excess facilities after the current demolition program ends, taking into account external factors that might affect future disposal efforts. OSD partially concurred with our first recommendation, stating that it already had some actions underway to address it. However, at that time, OSD did not specify what actions it had undertaken to date or the time frames for completing efforts to improve the collection and reporting of utilization data. DOD concurred with our second recommendation, but did not provide any details or specific time frames for efforts to address it. As of June 2014, according to OSD officials, DOD had not fully implemented these two recommendations. Our body of work on results-oriented management has shown that successful organizations in both the public and private sectors use results-oriented management tools to help achieve desired program outcomes. These tools, or principles, derived from the Government Performance and Results Act (GPRA) of 1993, provide agencies with a management framework for effectively implementing and managing programs and shift program-management focus from measuring program activities and processes to measuring program outcomes. The framework can include various management tools, such as long-term goals, performance goals, and performance measures, which can assist agencies in measuring performance and reporting results. Our prior work has also shown that organizations need effective strategic management planning in order to identify and achieve long-term goals. We have identified key elements that should be incorporated into strategic plans to help establish a comprehensive, results-oriented management framework for programs within DOD. Further, our prior body of work has also shown that organizations conducting strategic planning need to develop a comprehensive, results-oriented management framework to remain operationally effective, efficient, and capable of meeting future requirements. A results-oriented management framework provides an approach whereby program effectiveness is measured in terms of outcome metrics. Approaches to such planning vary according to agency-specific needs and missions; however, irrespective of the context in which planning is done, our prior work has shown that such a strategic plan should contain the following seven critical elements: (1) a comprehensive mission statement; (2) long-term goals; (3) strategies to achieve the goals; (4) use of metrics to gauge progress; (5) identification of key external factors that could affect the achievement of the goals; (6) a discussion of how program evaluations will be used; and (7) stakeholder involvement in developing the plan. In our analysis of OSD's Real Property Assets Database over the past 4 fiscal years, we found that although the department has made some progress in improving its real property records, OSD continued to collect incomplete utilization data for its real property assets. 
Specifically, we found that OSD's methodology for calculating and recording utilization data has not changed since our September 2011 report and that the data continue to be incomplete and do not encompass all of DOD's assets. OSD guidance requires that utilization rates be included for all categories of its real property asset records. The percentage of total real property assets with a reported utilization rate increased from 46 percent to 53 percent over the past 4 fiscal years, as shown in table 3. For example, as of September 30, 2013, we found that facility utilization data were missing for 245,281 of DOD's 524,189 assets—that is, about 47 percent of its total real property assets. Although the percentage of facilities not reporting any utilization rate decreased since 2011, OSD's fiscal year 2013 database still shows that almost half of DOD's total real property asset records do not include a utilization rate. Further, related to accuracy of the data, we found a number of real property assets reporting a zero utilization rate, which may indicate either inaccurate records or some type of consolidation opportunity. We used three data fields to determine whether a facility's utilization was consistently reported in OSD's Real Property Assets Database. Specifically, we used the following three criteria—a utilization rate reported as "zero" (indicating the facility was not being utilized), a status reported as "active" (indicating the facility was being utilized), and the type of asset described as a "building." We found that as of September 30, 2013, OSD reported 7,596 buildings across the four military services with inconsistent or inaccurate reported utilization, as shown in table 4. We then assessed these facilities and found that 30 percent (2,255 of the 7,596 facilities) were also described as "utilized" in the Real Property Assets Database. A record with a utilization rate of zero that is also in an active status and described as utilized indicates a potential inconsistency or inaccuracy in the data. We analyzed the inconsistencies across the four services and found the following: The Army reported 6,391 real property records with a zero utilization rate, but 1,734 (about 27 percent) of those buildings were described as utilized; 37 buildings (less than 1 percent) were described as underutilized; and 4,620 (about 72 percent) had no utilization description. The Navy's 13 buildings and the Marine Corps' 18 buildings that were reported with a zero utilization rate had no utilization description. Of the Air Force's 1,174 buildings reported with a zero utilization rate, 521 (about 44 percent) were described as utilized and 653 (about 56 percent) had no utilization description. Our analysis also showed that OSD has made some improvements in addressing some other inaccuracies in the utilization rates in its real property records. For example, we found that OSD corrected its real property records for those reported with a utilization rate greater than 100 percent. Specifically, our analysis showed that OSD had previously reported 2,270; 2,093; and 999 real property records with a utilization rate greater than 100 percent in fiscal years 2010, 2011, and 2012, respectively. In fiscal year 2013, OSD had addressed this inaccuracy and reported no real property records with a utilization rate greater than 100 percent. 
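The completeness, consistency, and validity screens described above amount to simple checks over the asset records. The following Python sketch illustrates those checks using hypothetical field names and sample records; it is not OSD's actual schema or methodology.

    # Minimal sketch of the completeness, consistency, and validity screens
    # discussed above. Field names and sample records are hypothetical.
    records = [
        {"rpuid": "B-100", "asset_type": "building", "status": "active",
         "utilization_rate": 0, "utilization_desc": "utilized"},
        {"rpuid": "B-101", "asset_type": "building", "status": "active",
         "utilization_rate": None, "utilization_desc": None},
        {"rpuid": "B-102", "asset_type": "structure", "status": "excess",
         "utilization_rate": 75, "utilization_desc": "utilized"},
    ]

    # Completeness: share of assets with any reported utilization rate.
    reported = [r for r in records if r["utilization_rate"] is not None]
    print(f"{len(reported)} of {len(records)} assets report a utilization rate")

    # Consistency: active buildings with a zero utilization rate are candidates
    # for follow-up; those also described as "utilized" are flagged as inconsistent.
    zero_active_buildings = [
        r for r in records
        if r["asset_type"] == "building"
        and r["status"] == "active"
        and r["utilization_rate"] == 0
    ]
    inconsistent = [r for r in zero_active_buildings if r["utilization_desc"] == "utilized"]
    print(f"{len(zero_active_buildings)} active buildings report 0 percent utilization; "
          f"{len(inconsistent)} of these are also described as utilized")

    # Validity: rates should be whole numbers between 0 and 100.
    invalid = [r for r in reported
               if not (isinstance(r["utilization_rate"], int) and 0 <= r["utilization_rate"] <= 100)]
    print(f"{len(invalid)} records have out-of-range or non-whole-number rates")

Run against a full extract, checks like these produce the kinds of counts and percentages reported in tables 3 and 4, with each flagged record pointing to a facility that may warrant physical verification.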
As another example, according to OSD's real property inventory data element dictionary, the utilization rate for its real property records should be reported as a whole number from 0 percent to 100 percent. Our analysis found that, since fiscal year 2010, OSD has been making progress in addressing the utilization rates that were not reported as whole numbers and that, overall, the total number of real property records in OSD's Real Property Assets Database reporting a utilization rate that is not a whole number has steadily decreased over the past 4 fiscal years. As with our analysis of OSD's Real Property Assets Database, we found that the military departments do not collect and maintain accurate real property records in their respective databases, which limits the use of the databases as a tool to identify consolidation opportunities. We found, first, that at all 11 of the military service installations we visited, according to the installation officials, the utilization data are not systematically updated, but instead are updated when (1) there is a request for space; (2) a facility is consolidated or remodeled; (3) an area is being reviewed for potential military construction projects; (4) there may be a transfer of personnel at the installation; or (5) there is a periodic review of their real property holdings, both land and facilities, to identify unneeded and underused property. Real property officials at all 11 of the military service installations we visited told us that evaluating the utilization of facilities requires physical inspections to verify and validate the accuracy of the utilization data within their real property inventory records. For example, according to Army Regulation 405-70, Army installations are required to perform an annual utilization survey and report findings of unused, underutilized, or excess real property. The Navy and the Air Force do not have a similar requirement for annual utilization surveys. The Army regulation requires a report containing a list of unused or underutilized buildings by facility classes and category code, building number, total gross square feet, gross square feet available, type of construction (permanent, semi-permanent, or temporary), and disposition. However, the real property officials at the three Army installations we visited told us that they had not completed the annual utilization surveys for their installations, because they did not have the manpower, the time to accomplish what they characterized as a time-consuming task on an annual basis, or the resources to pay a contractor to accomplish the task. Secondly, we found during our discussions with service headquarters officials and visits to installations that those real property inventory records that are maintained in the military departments' authoritative real property inventory databases are not always accurate. For example: Army headquarters officials demonstrated a recently developed program called the Army's Quality Assurance Reporting Tool, which is used to detect inaccuracies within its real property inventory database at the installation level. In August 2013, Army officials showed us more than 45,000 errors of all types within the real property database for one of the installations we planned to visit. As of August 2013, Army headquarters officials provided us with a listing from one of their real property databases showing the dates when the installation facilities were reviewed. 
Based on our analysis of the list of facility review dates, we found significant anomalies. For example, we found that the list of facility review dates included such erroneous entries as the years 0012, 0013, 0201, 0212, 0213, 1012, 1776, 1777, 1839, 1855, 1886, 1887, 1888, 1889, 2020, 2030, 2114, 2201, and 3013. We told Army headquarters officials about these particular facility review dates, and they responded that they would correct them. Table 5 below shows our analysis of the Army’s review dates, building count, and percentage reviewed. To determine whether established internal control procedures over the Air Force’s real property were operating effectively, a Real Property Assertion Team was assembled, consisting of representatives from Headquarters Air Force, Civil Engineering, Asset Accountability and Optimization, the Deputy Assistant Secretary, Accounting and Financial Operations, and independent contractors. The team found that the authoritative real property inventory system provided inaccurate data and could not support audit readiness assertions over real property assets. An Air Force Audit Agency report on one of the installations we visited included five recommendations to develop and implement oversight procedures to validate the accuracy of the Air Force’s real property data. Military installation-level officials at all 11 locations we visited told us that they use the departments’ databases as a tool to help identify space requirements and potential consolidation opportunities; however, incomplete and inaccurate data limit the usefulness of the databases to do so. Specifically, according to these installation-level officials, because the utilization data currently contained in their databases are often missing, out of date, or inaccurate, the installations rely on physical verifications of facilities’ utilization to identify consolidation opportunities. The installation-level officials stated that these physical verifications are performed as a result of requests for space or other common real property management processes, such as changes to mission and personnel at the installation. For example, at the 11 installations we visited, we found that consolidations had been performed reactively in response to events, such as new or changing mission requirements, changes to force structure, or requests for facility space. Overall, the four military services use similar criteria and methodologies to address changes in mission requirements or requests for space at an installation. The installations’ civil engineers, real property planners, and facility specialists analyze the installations’ mission requirements and the space that is authorized to fulfill those missions in order to determine different potential courses of action for use of installation facilities. The installations are required by DOD Instruction 4165.14 to perform physical inventories every 5 years for real property and every 3 years for historical real property. Thus, according to the military installation-level officials, they generally complete 20 percent of the inventories each year, including verifying and correcting real property record data such as the utilization rate.
We analyzed OSD’s Real Property Assets Database as of September 30, 2013, to determine whether some of the data fields could be used to identify potential consolidation opportunities. In our analysis, we found that 12 real property assets or facilities among the 11 locations we visited had data fields indicating potential consolidation opportunities; these facilities were located at 3 of the locations we visited. At the first location, which had 7 such facilities (including 4 office facilities), according to the real property officer, one of the office facilities was demolished in December 2013 and the real property record removed in January 2014. Another of the office facilities was demolished in November 2007, and the real property record should have been removed, yet it was present in DOD’s September 30, 2013, real property records—reflecting an error that has been ongoing for more than 6 years. In addition, the real property officer noted that the remaining 2 office facilities that had reported zero utilization rates could be identified as potential consolidation opportunities, but they had not been identified until we pointed out our findings to the official. The 3 other facilities at that location (which were not offices) were marked for demolition. The second location had 1 such facility, and, according to the Public Works official, this facility is 100 percent utilized and the real property record was reported correctly in the Army’s General Fund Enterprise Business System. However, the official noted that this facility had two real property unique identifier numbers—reflecting an error in DOD’s Real Property Assets Database, which had not been found until we identified it. The third location had 4 such facilities, and, according to the real property officer, 2 of the facilities were put on the installation’s demolition list as of February 2014 and the other 2 facilities have usable space that is being considered for reuse by other activities needing space. OSD and the military departments have taken some steps to improve the completeness and accuracy of their data since 2011; however, based on our analysis of OSD’s Real Property Assets Database, the data continue to be incomplete and inaccurate. In September 2011, we recommended that DOD develop and implement a methodology for calculating and recording utilization data for all types of facilities, and modify processes to update and verify the accuracy of reported utilization data to reflect a facility’s true status. As previously discussed, DOD partially concurred with the recommendation and stated that it recognized the need for further improvements in the collection and reporting of utilization data across the department. Further, DOD stated at the time that it had already begun some efforts to improve utilization data, but it did not specify what actions it had completed to date or the time frames for completing efforts to improve the collection and reporting of utilization data. Fully implementing our September 2011 recommendation would help provide reasonable assurance that the utilization data are complete and accurate, which could also help better position the military services to identify consolidation opportunities and realize the potential attendant cost avoidance from no longer maintaining and operating more facility space than needed.
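The record-level screens discussed in this section, such as a zero utilization rate on an active building, rates outside the 0 to 100 percent range or not reported as whole numbers, and implausible facility review dates, can be expressed as simple data checks. The sketch below is illustrative only; the field names (utilization_rate, status, asset_type, review_year) are hypothetical stand-ins rather than the actual schema of OSD's Real Property Assets Database or the military departments' systems.

```python
# Illustrative sketch of the record-level screens discussed above. Field names
# are hypothetical and do not reflect the actual schema of OSD's database.

def screen_record(rec, current_year=2013):
    """Return a list of data-quality flags for one real property record."""
    flags = []
    rate = rec.get("utilization_rate")

    # Completeness: every record should carry a utilization rate.
    if rate is None:
        flags.append("missing utilization rate")
    else:
        # Accuracy: the rate should be a whole number from 0 to 100.
        if rate != int(rate) or not 0 <= rate <= 100:
            flags.append("utilization rate not a whole number from 0 to 100")
        # Consistency: a zero rate on an active building suggests an inaccurate
        # record or a possible consolidation opportunity.
        if rate == 0 and rec.get("status") == "active" and rec.get("asset_type") == "building":
            flags.append("zero utilization reported for an active building")

    # Plausibility: review years such as 0012 or 3013 are clearly erroneous.
    year = rec.get("review_year")
    if year is not None and not 1900 <= year <= current_year:
        flags.append(f"implausible review year: {year}")

    return flags

sample = {"utilization_rate": 0, "status": "active", "asset_type": "building", "review_year": 3013}
print(screen_record(sample))
```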
OSD does not have a strategic plan to manage DOD’s real property efficiently and facilitate the department in identifying opportunities for consolidating unutilized or underutilized facilities. According to DOD Directive 4165.06, it is DOD policy that DOD real property shall be managed to promote the most efficient and economic use of DOD real property assets and in the most economical manner, consistent with defense requirements. In addition, our prior work has shown that organizations need sound strategic management planning in order to identify and achieve long-range goals and objectives. Our prior work also identified critical elements that should be incorporated into strategic plans to establish a comprehensive, results-oriented management framework. A results-oriented management framework includes a strategic plan with, among other things, long-term goals, strategies to achieve the goals, and metrics or performance measures to gauge progress toward meeting the goals. While OSD has established a directive and a number of instructions for the management of real property, including for the maintenance of data elements about facilities, OSD has neither developed a strategic plan nor established department-wide goals, strategies to achieve the goals, or metrics to gauge progress for how it intends to manage its real property in the most economical and efficient manner. Two critical elements of a strategic plan are the establishment of long-term goals and a description of strategies to achieve those goals. Such goals could be focused on correcting inaccurate and incomplete facility utilization-rate data in OSD’s Real Property Assets Database to provide better visibility into the status of utilized, unutilized, and underutilized facilities. Another goal could be to identify opportunities for consolidating unutilized or underutilized facilities in order to use facilities effectively and efficiently as well as to reduce operation and maintenance costs in a time of declining defense budgets. Further, OSD has not established department-wide metrics for assessing progress related to real property management. Such metrics could be used to gauge progress in the efficient utilization of DOD’s current real property inventory. For example, a metric could be established for the military departments to complete a 100 percent inventory of all their real property at their respective installations within a specific time frame in order to baseline the number of utilized, unutilized, and underutilized facilities, which could help them to identify consolidation opportunities. OSD officials acknowledged that there is currently no OSD strategic plan that clearly establishes long-term goals, strategies to achieve the goals, and metrics to gauge progress in managing DOD’s real property, because DOD has focused on other priorities. However, real property management is a long-standing issue, and DOD’s real property assets represent significant resources as well as the opportunity for cost savings through the consolidation or disposal of unutilized or underutilized inventory. Without a strategic plan that includes long-term goals, strategies to achieve the goals, and metrics to gauge progress, it will be difficult for OSD to effectively manage its facilities, and OSD may miss additional consolidation opportunities and therefore may not be using its facilities to the fullest extent.
OSD has made some progress in improving the completeness and accuracy of its facility utilization data in its Real Property Assets Database. However, there continues to be incomplete and inaccurate data at the OSD and military-service level. We continue to believe that fully implementing our 2011 recommendation to develop and implement a methodology for calculating and recording utilization data for all types of facilities, and to modify processes to update and verify the accuracy of reported utilization data to reflect a facility’s true status, would help provide reasonable assurance that the utilization data are complete and accurate. Further, OSD’s lack of a strategic plan to facilitate the department’s management of its real property puts OSD and the military departments at risk for missing consolidation opportunities. As part of a results-oriented management framework, such a strategic plan should contain, among other things, long-term goals; strategies to achieve the goals; and the use of metrics to gauge progress. Without an OSD strategic plan, OSD and the military departments will be challenged in managing their real property in an efficient and economical manner, as required, and in identifying utilized, unutilized, or underutilized facilities as well as consolidation opportunities. To better enable DOD to manage its real property inventory effectively and efficiently, we recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense for Installations and Environment to establish a strategic plan as part of a results-oriented management framework that includes, among other things, long-term goals, strategies to achieve the goals, and use of metrics to gauge progress to manage DOD’s real property and to facilitate DOD’s ability to identify all unutilized or underutilized facilities for potential consolidation opportunities. We provided a draft of this report to DOD for official review and comment. In its comments, DOD concurred with our recommendation and stated that a strategy review is currently underway with initial guidance and initiatives to be identified by the close of the calendar year. DOD also provided technical comments which we incorporated in our report as appropriate. DOD’s written comments are reproduced in their entirety in appendix III. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; Deputy Under Secretary of Defense for Installations and Environment; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine the extent to which the Office of the Secretary of Defense (OSD) has improved the completeness and accuracy of facility utilization data in its Real Property Assets Database and the military services use the data contained in their respective real property inventory databases to identify potential consolidation opportunities, we obtained selected data fields containing the military services’ real property records from OSD’s Real Property Assets Database. 
We selected the same data fields we had used as part of our methodology and analysis for our September 2011 report. Specifically, we analyzed the utilization-rate data fields for fiscal years 2010 through 2013—the most recent full year available at the time of this review—to determine whether more complete utilization-rate data had been entered since our previous review of the fiscal year 2010 data. We assessed the reliability of the Department of Defense’s (DOD) real property inventory data by (1) performing electronic testing for obvious errors in accuracy and completeness, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable to assess the trends of the utilization data reported in OSD’s Real Property Assets Database for fiscal years 2010 through 2013. We also reviewed our prior work on excess and underutilized real property to understand issues previously identified with real property management. We gathered and analyzed documentation, such as a DOD directive and instructions as well as military department regulations, reflecting OSD’s and the military departments’ management of real property and how OSD used the data contained in its Real Property Assets Database to identify unutilized or underutilized facilities or potential consolidation opportunities. We interviewed officials in the Office of the Under Secretary of Defense for Installations & Environment; each of the three military departments, which include the four military services; and the military service installations we visited, and discussed their processes to manage real property. We selected 11 active military installations to visit to include installations from the four services and to reflect those with high numbers of buildings. While the results of our interviews and visits cannot be generalized to all installations, they provided perspectives on how installations manage their real property. Using OSD’s Real Property Assets Database and three data fields—the utilization rate, the status as “active,” and the property description as “building”—we used a fourth data field, which described the asset as “utilized,” to determine any inconsistencies that might exist among these data fields. Using these four criteria, we reviewed the real property records for the 11 installations we visited to identify any other consolidation opportunities that may exist on the installations, as well as potential inconsistencies and inaccuracies. We contacted and received information from DOD representatives, as delineated in table 6. To determine the extent to which OSD has a strategic plan to manage DOD’s real property efficiently and to facilitate the identification of unutilized and underutilized facilities, we obtained and analyzed documentation, such as the Office of the Deputy Under Secretary of Defense for Installations and Environment report, 2013 Accomplishments and 2014 Goals and Objectives, a DOD directive and instructions, and military department regulations for the management of real property. We also reviewed OSD’s Real Property Assets Database as of September 30, 2013—the most recent data available at the time of our review—to identify what facilities, if any, were reported as being unutilized or underutilized and ascertain how OSD implemented its policy and guidance to manage real property in the most economical manner.
We discussed the policies and guidance used in managing these facilities with officials in the Office of the Under Secretary of Defense for Installations & Environment and compared OSD’s efforts and guidance to the DOD directive and instructions for real property management and the results-oriented management framework as a best practice for strategic planning. We conducted this performance audit from July 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of Defense’s (DOD) Real Property Management Program is governed by statute and DOD regulations, directives, and instructions that establish real property accountability and financial reporting requirements. Table 7 describes some of the guidance from DOD and the military departments and includes excerpts of the related requirements to manage real property. In addition to the contact named above, Harold Reich (Assistant Director), James Ashley, Ronnie Bergman, Pat Bohan, Tracy Burney, Cynthia Grant, Mary Catherine Hult, Cheryl Weissman, and Michael Willems made key contributions to this report.
GAO has designated DOD's Support Infrastructure Management as a high-risk area in part due to challenges DOD faces in reducing excess infrastructure. DOD manages a global real property portfolio of over 557,000 facilities that DOD estimates to be valued at about $828 billion as of September 30, 2012. In September 2011, GAO found that DOD was limited in its ability to reduce excess inventory because OSD did not maintain accurate and complete data on the utilization of its facilities in its Real Property Assets Database. House Report 113-102 mandated that GAO review DOD's efforts to improve these data. This report examines the extent to which OSD has (1) improved the completeness and accuracy of facility-utilization data in its Real Property Assets Database and the military departments' use of data to identify consolidation opportunities, and (2) a strategic plan to manage DOD's real property efficiently and to facilitate the identification of unutilized and underutilized facilities. GAO analyzed OSD's real property data from fiscal years 2010 through 2013, visited 11 active DOD installations from the four services to reflect those with high numbers of buildings, and interviewed officials. While not generalizable, the interviews provided perspectives about facility utilization. The Office of the Secretary of Defense (OSD) has made some improvements, but OSD's utilization data continue to be incomplete and inaccurate, and data limitations affect the military departments' use of their databases to identify consolidation opportunities. GAO's analysis found that the percentage of total real property assets with a reported utilization rate in OSD's Real Property Assets Database increased from 46 to 53 percent over the past 4 fiscal years. OSD made some improvements in addressing inaccuracies in the utilization rates in its real property records, such as correcting records for those facilities reported with a utilization rate greater than 100 percent. The military departments use databases to a certain degree to identify opportunities to consolidate facilities, but primarily in response to specific events, such as requests for space. Officials at all 11 installations GAO visited stated that inaccurate and incomplete data in the departments' databases limited their ability to identify these opportunities. In September 2011, GAO recommended that the Department of Defense (DOD) develop and implement a methodology for calculating and recording utilization data, and modify processes to update and verify the accuracy of reported data. OSD partially concurred, stating that it already had some actions underway to address the recommendation. However, at that time, OSD did not specify what actions it had undertaken. Moreover, the recommendation has not yet been fully implemented. Fully implementing GAO's recommendation would help provide reasonable assurance that the utilization data are complete and accurate and better position the department to use the databases to identify consolidation opportunities. OSD does not have a strategic plan, with goals and metrics, to manage DOD's real property efficiently and facilitate identifying opportunities for consolidating unutilized or underutilized facilities. According to a DOD directive, it is DOD policy that DOD real property shall be managed to promote the most efficient and economic use of DOD real property assets, and in the most economical manner consistent with defense requirements.
However, OSD officials stated that there is currently no OSD strategic plan to manage DOD's real property, nor are there established department-wide goals, strategies to achieve the goals, or metrics to gauge progress for how OSD intends to manage its real property in the most efficient manner. Such goals could focus on correcting inaccurate and incomplete facility utilization data to provide better visibility into the status of facilities and to identify opportunities for consolidating unutilized or underutilized facilities and reducing operations and maintenance costs. GAO's prior work has shown that organizations need sound strategic planning to identify and achieve long-range goals and objectives. Without a strategic plan, it will be difficult for OSD to effectively manage its facilities and utilize them efficiently. GAO recommends that OSD establish a strategic plan to identify unutilized and underutilized facilities. In written comments on a draft of the report, DOD concurred with the recommendation.
RUS, an agency in USDA’s Rural Development mission area, oversees three programs for deploying broadband infrastructure in rural communities. The Telecommunications Infrastructure Loan and Loan Guarantee Program (Infrastructure Program) has funded traditional telephone networks but, since the mid-1990s, has been used primarily to fund broadband network infrastructure that can provide both voice and data services. The Rural Broadband Access Loan and Loan Guarantee program (Broadband Program) and the Community Connect Grant Program (Community Connect) are assistance programs that are specifically dedicated to financing broadband deployment. Differences among these programs include their definitions of rural areas and their eligibility rules for recipients and services. Infrastructure Program: The largest and oldest of the three programs, the Infrastructure Program was created as part of the Rural Electrification Act of 1936, as amended. The program provides loans and loan guarantees for the deployment of telecommunications systems, including broadband systems, to rural areas. The program is generally not available to any city, village, or borough having a population exceeding 5,000. Since fiscal year 2008, the authorized principal amount for the loans and loan guarantees (“lending authority”) has been $690 million. According to RUS, since 2004, individual loans have ranged from $81,600 to $90 million, depending on the size of the project. A loan recipient has 5 years to complete its infrastructure project. According to RUS, the terms of Infrastructure Program loans are typically around 20 years, depending on the nature of the facilities to be financed. This program often sees repeat borrowers, as borrowers use the funds either to upgrade existing services in rural areas or to expand their rural service areas. Broadband Program: Authorized in 2002, this program provides loans and loan guarantees for the construction, improvement, and acquisition of facilities and equipment for broadband service in eligible rural communities. Recent amendments have revised the program, including changes to the definition of rural area, among other things. The lending authority for this program has decreased from $602 million in fiscal year 2004 to $20.6 million in fiscal year 2016. According to RUS, the terms of the Broadband Program’s loans depend on the type of broadband system being deployed: generally around 20 years for fiber systems and around 12 years for wireless systems. According to RUS, since 2004, individual loans have ranged from $24,000 to $244 million, depending on the size of the project. Community Connect: This program started as a pilot program in fiscal year 2002, with $20 million in competitive grants. Noting the positive response it received, RUS made the Community Connect program an annual competitive grant program in fiscal year 2004. Annual appropriations for the program have ranged from $9 million to $18 million. As of December 2016, Community Connect grants have funded approximately 138 projects across the nation intended to improve broadband service. In the past few years, grant awards could not be greater than $3 million per project, with a 15-percent matching-fund requirement placed on the recipient. Further, projects must be in rural areas, as confirmed by the most recent decennial Census. A recipient has 3 years to complete construction of its broadband infrastructure project.
Tables 1 and 2 show the annual lending authority for the loan programs and the annual appropriations for the grant program, respectively. As shown in figure 1, the dollar amount of loans approved by RUS varies by state, with Colorado, Kansas, and North Dakota receiving the largest loan amounts—each receiving over $500 million in loans from fiscal years 2004 through 2016. Conversely, Vermont, Maine, and New Hampshire received the lowest loan amounts—each receiving around $6 million or less during this time. While most states have obtained the majority of their funding from the Infrastructure Program, Colorado and Hawaii used mainly the Broadband Program. As shown in figure 2, the dollar amounts of Community Connect grants approved by RUS from fiscal years 2004 through 2016 varied, with Oklahoma receiving the largest amount of grant funds (about $22.5 million) and Pennsylvania receiving the lowest amount (around $290,000). Overall, RUS has procedures and activities addressing the leading practices we identified, including the key activities associated with these practices, as part of its management of the rural broadband programs. We found that RUS has procedures and activities consistent with the leading practices for reviewing applications, conducting external training, communicating with applicants and recipients, and coordinating with other federal agencies. RUS has procedures and activities that are partially consistent with leading practices for conducting program performance measurement, conducting risk assessments, mapping, monitoring loan and grant infrastructure projects, communicating internally, and providing written program documentation. RUS has procedures and activities that are partially consistent with the leading practice of program performance measurement because USDA has identified a goal and a performance measure at a high level. However, at the individual program level, RUS has not established a process that ensures program goals are identified, tracked, and fulfilled; has not developed performance measures linked to goals; and does not evaluate or document the results of program measurement activities. First, we have previously found that results-oriented organizations implement two key practices to lay a strong foundation for successful program management—setting performance goals to clearly define desired program outcomes and developing performance measures that are clearly linked to the performance goals. Through our review of USDA and RUS documentation, we identified a goal set by USDA at the Rural Development mission area level. Specifically, USDA’s Fiscal Year 2015 Annual Performance Report and Fiscal Year 2017 Annual Performance Plan describes the year-end progress of USDA toward achieving the department’s strategic goals, objectives, and performance measures. USDA sets forth a strategic goal “to assist rural communities to create prosperity so they are self-sustaining, repopulating, and economically thriving.” Under this strategic goal, the annual performance report contains a performance measure for RUS: the annual number of borrowers or subscribers receiving new or improved telecommunications services. According to the report, the performance target for 2016 was 120,000 borrowers or subscribers receiving new or improved telecommunications services; the 2017 target is 100,000. 
Outside of this one high-level strategic goal and performance measure, RUS officials told us they do not have formal documented program performance goals and measures for the individual loan and grant programs. RUS officials told us that they believe their goals for each of the three programs are to ensure that facilities are constructed properly and that the service is actually provided. However, these goals are not documented, and there are no specific performance measures that link to these goals. Federal agencies can use the information gained from performance measurement to make various types of management decisions to improve programs and results. Both the Government Performance and Results Act (GPRA) and OMB’s Circular A-129 highlight the use of performance measures and goals as a means to evaluate program performance. For example, GPRA requires agencies to develop a performance plan covering each program activity set forth in the budget, including program goals that are objective, quantifiable, and measurable. Although such practices are only required at the federal department or agency level under GPRA, they can serve as leading practices for planning at lower levels within federal agencies, such as at an individual program or initiative level. In addition, OMB’s Circular A-129 stipulates that for credit programs, agencies shall periodically evaluate programs in terms of the policy goals of the program and the program’s effectiveness towards addressing those goals. Without specific, documented goals for each loan and grant program—and specific performance measures that are crafted around those goals—it is difficult to determine in an objective, quantifiable way if these programs are fulfilling USDA’s strategic goal of assisting rural communities, and it could be more difficult for RUS to manage the programs in a proactive, results-oriented manner. RUS has procedures and activities that are partially consistent with the leading practice of risk assessment because RUS conducts a variety of risk assessment activities at the application and individual project levels and has procedures to guard against fraud; however, RUS has not established procedures to conduct risk assessment activities at the program level. The Green Book defines the standards for internal control in the federal government, noting that management should: define objectives clearly to enable the identification of risks and define risk tolerances; identify, analyze, and respond to risks related to achieving the defined objectives; consider the potential for fraud when identifying, analyzing, and responding to risks; and identify, analyze, and respond to significant changes that could impact internal controls. OMB’s Circular A-129 provides that agencies must have robust management and oversight frameworks for credit programs to monitor each program’s progress towards achieving policy goals within acceptable risk thresholds, reinforce these frameworks with appropriate internal controls, and take action where appropriate to increase efficiency and effectiveness. RUS’s risk assessment efforts have focused on the proposed and funded broadband projects and their financial risks, particularly for the loan programs, which have greater inherent risk to the federal government because borrowers are expected to repay loans with interest.
The application review process for both of the loan programs includes a financial risk review to determine whether the borrower has a sufficient forward-looking return on investment. Borrowers are required to maintain a times interest earned ratio (TIER) between 1.0 and 1.5, based on the projected TIER determined by a feasibility study prepared for each loan. According to RUS, all borrowers receive the same treatment once their loans are approved, irrespective of the risk involved with the project. In other words, RUS does not vary interest rates based on risk or set a higher TIER requirement for riskier borrowers. However, because the loan programs target rural areas that, as previously mentioned, may not appeal to private broadband providers, the programs tend to attract some applicants that may present higher financial risks. Since 2004, 22 of the Broadband Program’s 108 loans have gone into default. In analyzing the risk factors behind these defaults, RUS determined that the majority of the defaulting companies were startup firms. In response, RUS has put in place new financial requirements on startup firms to better ensure that such borrowers are financially sound and less likely to default. For 2017 Broadband applicants, RUS has a Calculation of Additional Cash Requirement for startup operations or firms that have not demonstrated a positive cash flow from operations for the 2 years prior to the application date. This stipulation is in addition to the audited financial statements, tax returns, methodology, and assumptions that must be part of the application package. According to RUS officials, these requirements will enable RUS to place greater emphasis on evaluating these applicants’ subscriber and revenue projections to help address default risks. With regard to fraud risks, RUS has procedures in place to help limit fraud incidents. For instance, the grant agreements that Community Connect recipients are required to sign stipulate that invoices are to be submitted with requests for advance or with reimbursement forms before grant funds are disbursed. Recipients, depending on entity type, are to provide RUS with an audit for each year in which grant funds are expended and an annual project performance activity report. Loan agreements lay out specific conditions that loan recipients are required to follow for loan advances. Loan recipients are required by their loan agreements to have fidelity bond or theft insurance coverage and to maintain all documentation, such as invoices, receipts, and annual financial reports, available for federal inspection, if requested. Loan recipients are required to provide RUS with annual audited financial statements until the loans are paid off. RUS performs compliance audits for all grant and loan projects on a 2- to 5-year cycle (depending on the amount of unaudited advances) until all funds are disbursed. Furthermore, USDA requires its general field representatives (GFR) to submit reports on construction status based on regular site visits. These site visits also give GFRs an opportunity to examine the projects for any misuse of funds. If a discrepancy is found, RUS officials told us that they will immediately disallow funding. If fraud involving a grant project is suspected, RUS officials said they would turn the information over to USDA’s Office of Inspector General (OIG) for investigation. Investigations can also be turned over to the Department of Justice for further action, such as a criminal indictment or an action to recover funds.
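To make the TIER measure discussed earlier in this section concrete, the sketch below uses one common formulation of the ratio: net income plus interest expense, divided by interest expense. The exact inputs RUS uses are set by each loan's feasibility study and the program's regulations, so the variable names, formula, and threshold check here are illustrative assumptions rather than RUS's prescribed calculation.

```python
# Illustrative only: one common formulation of the times interest earned ratio (TIER).
# RUS's prescribed calculation is defined by its regulations and each loan's
# feasibility study; the inputs and threshold below are assumptions for illustration.

def times_interest_earned(net_income: float, interest_expense: float) -> float:
    """TIER = (net income + interest expense) / interest expense."""
    if interest_expense <= 0:
        raise ValueError("interest expense must be positive to compute TIER")
    return (net_income + interest_expense) / interest_expense

projected_tier = times_interest_earned(net_income=150_000, interest_expense=400_000)
print(round(projected_tier, 2))      # 1.38
print(1.0 <= projected_tier <= 1.5)  # within the 1.0 to 1.5 band noted above
```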
According to RUS officials, fraud cases have been rare and have involved fake invoices and employee theft. While RUS has risk assessment activities at the application and individual project level and procedures related to fraud risks, RUS has no risk assessment activities at the overall program level. As set forth in the Green Book, a precondition to risk assessment is the establishment of clear, consistent program objectives. When clear program objectives are established up front, internal controls can be designed around the fundamental risk that program objectives will not be met. As previously discussed, RUS does not have clear goals and performance measures in place for its loan and grant programs. RUS officials acknowledged that they have not conducted a formal risk assessment of the broadband loan and grant programs because to date, as noted, they have focused on risk assessment at the application and project level. However, a higher-level, programmatic risk assessment would provide a holistic look at the programs’ core processes and practices and assess internal controls over each program. Such a programmatic risk assessment could include an examination of risks at the portfolio level for both the portfolio of loans and the portfolio of grants. RUS officials told us that they recognize the need for portfolio risk assessments and would like to put procedures in place in the future to assess the loan and grant portfolios. In late 2016, USDA hired a Chief Risk Officer at the Rural Development mission area level. While RUS’s efforts to address risks in applications and funded projects and its recent creation of the Chief Risk Officer position are positive steps, these efforts are not fully consistent with the level of risk assessment that is intended under the Green Book. Those standards call for first establishing clear objectives for each program and then for comprehensively identifying risks to meeting those objectives. Without doing so, RUS is missing information crucial to the thoughtful design of an internal control structure that appropriately considers program risks for each of the three programs. RUS has procedures and activities consistent with the leading practice for application review, such as procedures for assigning applications to reviewers, reviewing and scoring applications, recording the results of application reviews, resolving scoring variances, and ensuring consistent reviews across reviewers. We found some differences among the individual loan- and grant-application review processes as a result of the nature of their individual funding mechanisms. For example, the Infrastructure loan application process is not competitive and uses a first-come, first-served procedure as long as the applications meet eligibility requirements. The Broadband Program is a competitive program that currently holds two application windows per year. Priority is given to those applications with the highest percentage of unserved areas. The grant program is also competitive but selects applications based on a scoring process. In addition, RUS has procedures for training reviewers, ensuring relevant expertise and the appropriate application of criteria, and guarding against conflicts of interest.
For example, RUS uses a combination of guidance documents and on-the-job training to train reviewers, and helps ensure relevant expertise by hiring staff in particular job classifications for particular types of reviews (e.g., only engineers in the engineering job series conduct the engineering reviews of proposed projects). With regard to loans, the application review procedures for the Infrastructure Program start in the field, where the GFR conducts the first level of review (see fig. 3, which illustrates the loan review and approval process). For both loan programs, once an application is at RUS headquarters, staff from RUS’s engineering and financial-operations branches review the application for completeness. If complete, RUS’s financial and engineering analysts, managers, GFR, and GFR managers discuss the eligibility of the applicant. Once a loan application package is determined complete and eligible, it undergoes an engineering and financial review, followed by multi-level reviews and approvals by a number of committees and divisions. RUS officials noted a difference between the two loan programs in that the Infrastructure Program has a rolling application process while the Broadband Program holds two application submission periods each year. Like the loan applications, Community Connect grant applications go through a multilevel review process (see fig. 4). To confirm that the area in question is truly unserved, a GFR physically goes to the area of the proposed broadband project to test that existing broadband services are not present. After the engineering and financial review, the application is scored independently by two GFRs who do not oversee the applicant’s area, to avoid any conflict of interest. Each application is scored according to the criteria outlined in the Notice of Funding Availability or Notice of Solicitation of Applications. RUS has guidelines on how to score applications and how many points each criterion is worth. While the expectation is for the two scores to be similar, RUS officials said that occasionally there can be substantial differences. If such variance occurs, the officials review the application again and hold discussions with the Deputy Assistant Administrator and their branch chiefs to determine whether to move the application forward. All awards in the grant program must have been approved by the Administrator. Currently, RUS’s procedures and activities are partially consistent with the leading practice of mapping because RUS has two mapping systems in place, but its mapping information is not complete and the agency has efforts under way to improve its mapping activities. RUS officials told us that they currently use mapping data to determine if the service proposed by an applicant overlaps the service of an existing provider in the same area, and to determine and prioritize grant applications that propose to serve areas with the greatest need. Applicants requesting funding under the Infrastructure Program and Broadband Program for loans and the Community Connect grant program are required to submit maps of their service area and proposed service area. RUS uses a number of sources to collect mapping data, but has two distinct mapping tools. First, applicants upload digital maps of their proposed service areas in RDApply as they submit their applications. Second, applicants can also use the RUS mapping tool, which predates the RDApply mapping system. According to RUS, the mapping tool serves three purposes. 
First, it can be used by existing borrowers or those interested in applying for loan or grant funding to draw their existing or proposed service-area maps. Second, it can be used by RUS to post Public Notices of applicants’ proposed service areas or be used by existing providers to submit information regarding their service offerings. Third, it can be used by any state, local, or other entity that wishes to upload an authenticated map of existing broadband services. According to RUS officials, they intend for the RDApply system eventually to incorporate the mapping tool information, at which point they will no longer use the mapping tool and will instead rely on one system. They explained that they are building a mapping system based on recent and current application information because they did not previously require all applicants to submit mapping data. They began requiring submission of geospatial mapping data for the Broadband Program in 2009, for Community Connect in 2012, and for the Infrastructure Program in 2015. According to RUS officials, USDA’s Office of General Counsel ruled that they do not have the authority to require past recipients to provide them with mapping data, so they are unable to completely fill in historical mapping information for past projects. Presently, RUS has service-area and proposed service-area data for the loan and grant recipients in the RUS mapping tool, which it then overlays with decennial Census data. RUS also incorporates data from federal and state sources, including FCC’s National Broadband Map, into the RUS mapping tool. Currently, RUS uses information from the National Broadband Map as part of its review of an applicant’s proposed coverage area. However, according to RUS, it has found the National Broadband Map to have accuracy limitations. We testified before Congress in April 2016 that when a service provider reports any availability of high-speed Internet in a Census block, the entire block is counted as served in FCC’s National Broadband Map. As we testified, this reporting could overstate service in rural areas, which generally constitute large Census blocks. RUS officials told us that they do not have a mapping system that houses extensive broadband service-area data. RUS officials told us that they are in the process of improving the data and their broadband-mapping capabilities as they move to improve their RDApply mapping information and move to having one mapping system. If successful, this effort should lead to improved information about the location of rural broadband services. Going forward, improved mapping information can help RUS determine whether there are unserved rural areas where it should consider additional outreach.
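Checking whether a proposed service area overlaps an existing provider's footprint, as described above, is at bottom a geometric intersection test on digitized service-area maps. The sketch below shows one way such a check could be run using the open-source Shapely library; the polygons, coordinates, and reporting format are hypothetical examples and do not represent RUS's actual mapping data or review criteria.

```python
# Illustrative sketch of a service-area overlap check on digitized polygons.
# Uses the open-source Shapely library; the coordinates and reporting below are
# hypothetical and do not represent RUS's actual mapping data or review criteria.
from shapely.geometry import Polygon

# Hypothetical proposed service area and an existing provider's footprint.
proposed = Polygon([(0, 0), (0, 10), (10, 10), (10, 0)])
existing = Polygon([(8, 8), (8, 14), (14, 14), (14, 8)])

if proposed.intersects(existing):
    overlap_share = proposed.intersection(existing).area / proposed.area
    print(f"Proposed area overlaps an existing provider on {overlap_share:.0%} of its footprint.")
else:
    print("No overlap with existing service areas.")
```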
RUS has also provided technical training on system installation. In addition, RUS representatives have participated in conferences held by broadband trade associations and other groups, setting up information booths or holding information sessions on the RUS programs. For applicants, RUS has held webinars on the loan- and grant-application processes. While RUS does not have a formal process for identifying external training needs, the agency assesses training needs and makes training decisions at the beginning of each fiscal year or when funds are available. According to an RUS official, decisions are made after consulting with the GFRs, state officials, and RUS loan and grant recipients. Officials representing all six loan and grant recipients we spoke with had participated in external training provided by RUS; each had participated in at least one training course and felt that the RUS training was helpful. Moreover, all six recipients said that their GFR was critical to obtaining necessary program information by providing ongoing support and assistance, such as assisting applicants in filling out the forms for the loan programs and helping explain program policies or procedures. RUS’s procedures and activities are partially consistent with the leading practice of project monitoring because RUS has in place a number of monitoring and oversight activities for program recipients, but it does not currently evaluate Community Connect project results. According to the Green Book, effective project monitoring incorporates a process that helps ensure that project goals are identified, tracked, and met. Project monitoring should include corrective actions to address identified internal-control issues and penalties for serious and frequent offenses of program requirements. Project activities are to be evaluated and reported on a regular basis to help determine whether changes are needed to better meet project goals and detect fraud and abuse. A July 2014 RUS reorganization resulted in separate divisions for overseeing the different phases of RUS projects. Prior to the reorganization, analysts were responsible for overseeing each loan and grant project from beginning to end. The reorganization split the responsibilities by divisions—the Loan Origination and Approval Division (LOAD) oversees all project applications and approvals, while Portfolio Management and Risk Assessment (PMRA) is responsible for monitoring all broadband loans and grants and has procedures for tracking performance and monitoring projects. While there are some differences in the monitoring requirements for the three programs, PMRA tracks the following for all projects: Subscriber number and service area: Applicants are required to provide details on the service area and the number of subscribers intended to be served. The GFR is responsible for visiting the designated service area to ensure that these goals are met. Deliverables and time frames: PMRA reviews recipients’ contracts with construction firms and the related invoices to evaluate recipients’ progress toward established deliverables and project milestones. Progress reports: Project recipients are required to submit progress reports to RUS. According to RUS officials, Community Connect grant recipients submit annual progress reports to their GFRs while Infrastructure Program and Broadband Program loan recipients are required to submit quarterly progress reports as well as an annual report.
Progress report data are tracked in RUS’s Broadband Collection Analysis System and Data Collection System. Financial information for loans: The financial information required in the progress reports includes balance sheets, income, debt service ratios, cash flow, and long-term debt. PMRA analysts evaluate the financial data against broadband miles constructed and the number of subscribers to ensure compliance with the project goals and flag any potential issues. Monitoring duration: Community Connect grants are monitored for the duration of the grant project, typically up to the 3 years the program allows for construction and implementation; all recipients are required to submit a project performance-activity report and audit report annually. Infrastructure Program and Broadband Program loans are monitored for the life of the loans and audited for compliance every 2 to 5 years until all funds are disbursed. A final audit is conducted after all funds are disbursed. Audited financial statements are reviewed annually until the loans are paid in full. The two grant and four loan recipients we spoke with confirmed that their GFRs make site visits to ensure the eligibility of the area to receive service. While the number of visits may vary, the recipients said that work by their GFRs ranged from evaluating construction progress and ensuring compliance with contract goals and deliverables to validating billing statements. They also stated that actions that RUS can take for noncompliance are laid out in their loan or grant agreements. While RUS has established many project-monitoring activities, with regard to the key activity of evaluating project results, we found that RUS evaluates loan performance but does not review post-award grant program performance. For loans, RUS necessarily follows loan projects through the repayment process—which is often 20 years or more—and evaluates what happened when a loan recipient defaults. However, for grants, once a Community Connect grant is fully disbursed, RUS does not conduct any evaluation of whether the grant recipient is still providing broadband service at a later date or measure the effectiveness of the project in meeting its goals. RUS officials told us that they would like to go back and evaluate grant projects, but that staffing resource constraints have prevented them from doing so. However, not periodically evaluating grant project results affects RUS’s ability to measure the outcomes and success of its grant program. Without analysis of post-award project successes or failures, Community Connect program managers are missing information that could be used to determine if programmatic changes might improve the selection of grant recipients or the results of grant awards. RUS’s procedures and activities related to its communications with program applicants and recipients are consistent with the leading practice of external communication. RUS conducts outreach efforts to publicize its broadband loan and grant programs through workshops and seminars located around the country and to provide information on program requirements, key dates, funding availability, and the review processes, including how applications are scored. Both RUS headquarters employees and GFRs in the field conduct outreach efforts. According to an RUS official, it is not cost-effective to visit very remote areas to disseminate information about the programs, but RUS does try to hold its workshops in areas where there may be an interest in the programs.
For example, the workshops that RUS has held in recent years included a 2014 Telecommunications Workshop held in Clanton, Alabama, which has a population under 9,000, and a 2-day workshop in 2015 held in the Washington State Rural Development Office in Olympia, Washington, that covered broadband access and deployment. To reach a wider audience, RUS maintains a website with information on its programs, including information about eligibility criteria, corresponding regulations, time frames, and frequently asked questions. Applicants can get pre-application assistance from their GFRs and RUS headquarters staff. Applicants can also obtain program application guidance on the RUS website, including fact sheets, process information, and application instructions. We reviewed documentation and interviewed officials and found that RUS sent eligibility, acceptance, and rejection letters that explain how decisions were reached. RUS also publicizes information on loan and grant awards through press releases and announcements on its website. All six recipients we spoke with said that their GFR was their primary communication link, followed by someone in the RUS headquarters office. The recipients’ views about RUS’s external communication ranged from “adequate” to “excellent.” Typically, the recipients would first reach out to their GFRs and then to RUS headquarters if a GFR could not answer the question. Recipients also said that they would try to get information and answers on the RUS website; however, as one recipient noted, it is easier to ask the GFR. Overall, the recipients said that RUS’s external communication efforts kept them informed. Specifically, recipients praised their GFRs’ efforts to inform them about relevant events, such as available funding, upcoming application periods, and deadlines. RUS’s procedures and activities are partially consistent with the leading practice of internal communication, as it has established an organizational structure to permit the flow of information to assist agency staff and recipients, appropriate methods of communication throughout the organization, and mechanisms to obtain relevant data based on identified project information requirements. However, RUS falls short of this leading practice in that it does not have a centralized system to obtain relevant data to monitor grant awards and loans, including correspondence and deliverables. As noted above, RUS established a new organizational structure in 2014 to consolidate expertise and assist agency staff in fulfilling their responsibilities. For example, the new organizational structure has clearly defined reporting lines that include four divisions with respective Deputy Assistant Administrators. These divisions oversee operations, loan origination and approval, portfolio management and risk assessment, and policy and outreach. With regard to communication methods, internal meetings are held on an ad hoc basis throughout the year, and at the Rural Development mission area level, regular announcements and notices are sent to the various offices. As discussed earlier, RUS’s application review process includes numerous meetings and distribution of pertinent information throughout the process among field and headquarters staff. RUS has multiple software systems to monitor loan and grant data, and both division and field staff are responsible for collecting and monitoring data. However, the current loan and grant data are not aggregated or housed in a centralized database.
According to RUS officials, RUS has updated the application system with RDApply but otherwise uses a number of different databases that are mostly antiquated legacy systems. RUS headquarters and field staff can access pertinent data from the various software systems. However, RUS officials told us they cannot conduct complex analyses, but can produce basic data, such as spreadsheets showing how many obligations have been made since a particular year. RUS officials told us that they want to move to a modern, single, centralized database that would enable them to conduct analyses of all loan and grant applications. RUS officials said that USDA's information technology department is working on a new software system, but RUS was not able to provide us with a plan or implementation timelines for when the system would become operational. Moving toward a centralized system would allow RUS to more effectively monitor loans and grants and more fully analyze program performance. RUS's procedures and activities are partially consistent with the written documentation leading practice. According to the Green Book, an effective management framework for grants and loans includes developing and maintaining written documentation as a means to obtain and retain organizational knowledge and to ensure accountability for achieving agreed-upon results. Although RUS has effectively developed written documentation to communicate to program applicants and recipients, we found that RUS has not consistently updated its written policies and procedures to retain organizational knowledge and to communicate loan- and grant-management knowledge internally among its staff. For each of the three programs, RUS has updated application guides to assist applicants in the application process. For Community Connect, RUS created an application guide for fiscal year 2016. The current application guides for both the Broadband Program and the Infrastructure Program incorporate updates regarding the eligibility of equipment and facilities for funding. For the benefit of applicants and recipients, we found that RUS has periodically issued announcements and letters pertaining to the programs, frequently asked questions, and fact sheets. Furthermore, information about the three programs and their application documents are online. According to RUS officials, RUS documents its award decisions. RUS conducts an initial assessment of applications to determine whether an application meets eligibility requirements, is complete, or needs clarification. For applications that are not complete, RUS sends the applicant a letter identifying the deficiencies or posing questions. The applicant then has 30 days to address the concerns raised in the letter and amend the application in order for it to continue through the decision-making process. Once final application decisions are made, RUS notifies the applicants of the decisions and the reasons for any denial of a loan or grant. Recipients are required to sign agreements that lay out what is expected of them during the course of the awards. The six loan and grant recipients we spoke with told us that the application processes are lengthy and require a great deal of information. However, two recipients for the Community Connect and Infrastructure Programs told us that they found the written documentation for the processes to be relatively clear and straightforward. RUS has provided guidance and templates that have been helpful to some of the recipients.
While written guidance to assist applicants and recipients exists, since its reorganization in 2014, RUS has not fully updated written documentation for loan and grant management policies and procedures to communicate knowledge among its staff. For the Community Connect program, RUS has updated its staff instructions, templates, and worksheets for determining eligibility, conducting technical reviews, and scoring the applications. For the Infrastructure Program, RUS provided us with revised application checklists, loan checklists, and post-award project-visit checklists to be used by the GFR. For the Broadband Program, RUS provided us with the program's initial application review report. However, RUS does not have any formalized staff instructions for processing the Broadband Program's loan applications. According to one RUS official, employees use checklists and review packets similar to those for the Infrastructure Program. Moreover, the dates of other checklists and instructions RUS provided range from 1995 to 2011, and some of them make reference to agency offices that no longer exist. RUS officials told us that they have not been able to update employee manuals on the grant and loan programs due to resource constraints. While we were told that engineers (job series 855—Professional series for engineers) and financial analysts (job series 1101—Business-Industry Analyst) review the applications, the grant- and loan-review process steps are not written down. New employees are assigned a mentor and learn their responsibilities through on-the-job training. The GFR manual is dated March 2007 and has not been updated to reflect organizational changes within RUS. According to one of the loan recipients we spoke with, new RUS employees seemed to have a difficult time ensuring that they are passing correct information to program applicants. The Telecommunications Division responsible for overseeing the broadband grant and loan programs consists of employees located in its headquarters and GFRs located in 27 regional territories throughout the country. Since fiscal year 2000, the Telecommunications Division's approved full-time equivalents (FTE) have decreased about 25 percent, from 133 FTEs in fiscal year 2000 to 98 FTEs in fiscal year 2016. We found that much of the critical knowledge of these programs resides with one key official located at headquarters, who is close to retirement. Furthermore, several of the recipients we spoke with noted that their GFR has changed, either due to retirement or reassignment. These changes could negatively affect the agency in efficiently carrying out its tasks, unless the agency has documented detailed information on how the programs are to be managed for the next generation of RUS officials.
For example, documentation we reviewed showed RUS and FCC staff coordinating on issues regarding carriers that use both agencies’ programs, data resulting from carrier audits, and questions about program rules. RUS signed an interagency agreement with the Environmental Protection Agency (EPA) in September 2015 to provide technical assistance for the Cool & Connected project, which aims to support community development by leveraging investments in Internet access. The agreement states that the two agencies will provide technical assistance to the partner communities and conduct outreach for the Cool & Connected project. The technical assistance includes consultations, analysis, and workshops to help members of the partner communities develop action plans for improving broadband access and revitalizing downtowns and traditional neighborhoods in rural areas. According to the agreement, both RUS and EPA will invest financial resources into the project and both agencies will offer the services of headquarters and field staff to provide technical assistance. In addition, the Department of Agriculture co-chairs the Broadband Opportunity Council, an effort involving 25 federal agencies and departments with missions or programs with the potential to drive broadband infrastructure investment and adoption. The Broadband Opportunity Council seeks to foster increased collaboration among agencies, to identify regulatory barriers and additional opportunities to improve broadband access, and to elevate the importance of broadband as a cross-cutting policy objective across the federal government. The Broadband Council issued a report in 2015 with dozens of action items intended to improve broadband nationwide and, in January 2017, issued a progress report stating that more than one-third of the action items had been completed by the federal agencies involved. By following leading practices related to program management, federal funding, and broadband deployment, RUS can more effectively and efficiently use its resources to help promote the deployment of broadband infrastructure to rural areas that are currently unserved or underserved. RUS has established procedures and activities that are consistent with leading practices in the management of its broadband infrastructure loan and grant programs in several areas, including its application review process, its external training of and communication with program participants, and its collaboration with other federal entities on efforts related to broadband deployment. With regard to mapping, we believe that RUS’s plan to obtain mapping information from recipients going forward will help RUS to better align with the leading practice in this area. In other areas, RUS is consistent with some key activities of leading practices, but further incorporation of key activities could enhance its loan- and grant-program management. Most fundamentally, RUS needs to develop clear goals and performance measures for each of its three programs. With regard to risk assessments, RUS established procedures to conduct numerous risk-assessment activities at the application and project level. However, an assessment of each program could help RUS determine whether modifications to business practices and internal controls are necessary to cost-effectively address programmatic and portfolio-level risks. Similarly, RUS established procedures to actively monitor loan and grant projects in numerous ways, but has not evaluated the outcomes of its grant awards over time. 
Such an evaluation could better inform RUS as to whether changes to Community Connect are warranted to help improve the outcomes for the communities served by the grant projects. Internally, RUS could improve its practices by establishing a centralized data system and improving its written documentation for the benefit of its staff. These actions to improve RUS's consistency with leading practices can help the agency build a stronger foundation for successful program management. To improve RUS's management of the Infrastructure Program, Broadband Program, and Community Connect by more closely following leading practices for broadband loan- and grant-program management, we recommend that the Secretary of Agriculture direct RUS to take the following five actions.
Develop and document clear goals and performance measures linked to those goals, for each program.
Establish and implement procedures to conduct a risk assessment of each program, including an examination of risk at both the programmatic and portfolio level for each program.
Establish and implement procedures to conduct periodic evaluations of completed grant projects to determine the outcomes associated with these projects, and analyze the information gained to assess if any programmatic changes are needed to improve the Community Connect program.
Establish a timeline for implementing a centralized internal system for staff to obtain relevant and timely program data for use in managing and monitoring loans and grant awards.
Develop, update, and maintain complete written policies and procedures for RUS's programs as a way to retain and communicate organizational knowledge internally among agency staff. RUS should determine the critical documentation that should be created or updated, including considering documentation such as loan-application review guidance and employee manuals for each of the three programs.
We provided a draft of this report to USDA, FCC, and NTIA for review and comment. USDA agreed with our recommendations. USDA, FCC, and NTIA provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the United States Department of Agriculture, the USDA's Under Secretary for Rural Development, the Chairman of the Federal Communications Commission, the Secretary of the Department of Commerce, and the Assistant Secretary for Communications and Information at the National Telecommunications and Information Administration. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.
Appendix I: Description of Leading Practices and GAO's Assessment
The appendix pairs each leading practice with the key activities associated with it and GAO's assessment; the key activities include the following:
Establish a process that ensures program goals are identified, tracked, and fulfilled.
Develop performance measures that link directly to stated program goals.
Establish a process to evaluate and document the results of program measurement activities, including plans for corrective actions.
Define program objectives.
Define the program's risk tolerances.
Establish a process to conduct risk assessments to identify and analyze risks to achieving program objectives.
Conduct risk assessments to identify and analyze risks for credit programs.
Determine the program's fraud risk factors and the types of fraud for which the program is most at risk.
Determine and conduct the appropriate responses to identified risks.
Develop procedures for the review of grant and loan applications that describe the process (for example, the number of panels or reviewers, the methods for assigning applications to panels or reviewers, how the results of the review are recorded, how scoring variances across panels or reviewers are resolved, and how panels or reviewers ensure consistent reviews).
Use a panel or reviewers who hold relevant expertise, do not have conflicts of interest, apply the appropriate criteria, and are trained.
Map broadband availability to help both policymakers and service providers determine where to focus their efforts, and to reveal gaps in service to providers that might wish to expand their offerings.
Establish mapping systems that identify the necessary information requirements, obtain relevant data from reliable sources, and process mapping data into quality information.
Identify external training needs.
Develop a mechanism that allows grant and loan applicants and recipients to establish and maintain a level of subject-matter expertise and competence so that they can fulfill their responsibilities related to compliance with the terms and conditions of the program.
Develop training that helps grant and loan recipients obtain sufficient understanding of regulations, policies, and procedures governing their grants or loans.
Establish baselines, goals, and performance measures for projects.
Establish a system of internal control for monitoring projects.
Identify problematic issues and design and take corrective actions.
Require periodic reviews, including progress reports.
Establish corrective actions, including penalties, for serious and frequent offenses of program requirements.
Evaluate project results.
Establish procedures for outreach efforts to potential applicants.
Provide relevant information, prior to making award decisions, on program requirements, time frames, and review processes.
Establish a process to provide pre-application assistance.
Establish a process to notify successful and unsuccessful applicants of selection decisions in writing and provide feedback on applications.
Ensure transparency by making program documents, policies, procedures, and decisions publicly available.
Publish information on the number of grants and loans awarded annually.
Mark L. Goldstein, (202) 512-2834 or goldsteinm@gao.gov. In addition to the contact named above, Faye Morrison (Assistant Director), Martha Chow (Analyst in Charge), Melissa Bodeau, Richard Bulman, Russell Burnett, Marcia Carlsen, Carol Henn, Ken Rupar, Terence Lam, Hannah Laufe, Benjamin Licht, SaraAnn Moessbauer, Joshua Ormond, Cheryl Peterson, Mathew Scirè, and Sarah Veale made key contributions to this report.
RUS provides loans and grants to help finance the construction of broadband infrastructure in rural America. GAO was asked to review RUS's management of its programs to fund broadband deployment, including consistency with leading practices for federal funding, program management, and broadband deployment. This report examines the extent to which RUS's procedures and activities are consistent with leading practices and how, if at all, its management practices could be improved. GAO synthesized, from federal guidance and relevant literature, a set of 10 leading practices that would be appropriate for the management of broadband loan and grant programs. GAO validated its set of practices with states that have programs similar to the RUS programs. GAO then reviewed RUS documentation and interviewed RUS officials and six program recipients, selected for having geographically dispersed projects currently under construction. Based on this information, GAO determined whether RUS's procedures and activities were consistent, partially consistent, or not consistent with each leading practice. The Rural Utilities Service (RUS), an agency within the United States Department of Agriculture (USDA), has procedures and activities that are consistent with four leading practices and partially consistent with six leading practices in managing two loan programs and one grant program aimed at funding broadband infrastructure projects in rural communities. Consistent with Leading Practices: With regard to reviewing applications, RUS has procedures for training reviewers, guarding against conflicts of interest, and conducting multiple levels of review. For external training and external communication, RUS holds workshops and seminars to inform rural communities and applicants about its programs. RUS's website contains program information, including eligibility criteria, time frames, and frequently asked questions. Applicants can also seek assistance from the RUS general field representative (GFR) assigned to their area. Program recipients whom GAO interviewed often spoke positively of the help provided by GFRs. As to coordination mechanisms, RUS has worked with other federal agencies on rural broadband-deployment efforts, including having a memorandum of understanding with the Federal Communications Commission. Partially Consistent with Leading Practices: While USDA has a high-level goal and a performance metric for measuring the benefits to rural communities of the broadband loans and grants, RUS has not developed specific program-level goals or performance measures for its individual programs. Without specific measurable goals for each loan and grant program, RUS will have difficulty determining how well the programs are performing. Regarding risk assessment, RUS conducts a variety of risk assessment activities at the loan and grant application and project level, but has not conducted a risk assessment at the program level. A higher-level, programmatic risk assessment would provide a holistic look at the programs' core processes and internal controls. For broadband programs, another leading practice is establishing mapping systems that can provide program data and reveal areas that lack service. RUS has mapping tools and systems in place, but does not have complete mapping information. RUS has efforts under way to improve its mapping data going forward. These efforts should increase RUS's understanding of broadband coverage and help RUS begin to identify possible unserved areas for outreach.
For project monitoring, RUS currently oversees loan and grant recipients' projects through GFR site visits, progress reports, and audits. However, RUS does not evaluate its grant projects post-completion and is therefore missing information that could be used to improve the selection of grant recipients or the results of grant awards. RUS has established an organizational structure that supports internal communication, but does not have a centralized system to monitor loan and grant data. RUS officials said USDA is working toward such a system, but they did not have established deliverables or time frames. RUS generally has external written documentation for recipients, but internal written documentation is often outdated, affecting RUS's ability to share knowledge among its staff and retain institutional knowledge. GAO recommends that RUS develop program performance goals and measures, conduct program risk assessments, evaluate completed grant projects, establish a timeline for implementing a centralized internal data system, and update written policies and procedures for RUS staff. USDA agreed with the recommendations.
Federal organizations relocate their civilian employees to help them accomplish their many varied and unique missions. Organizations carry out their missions through a civilian workforce of nearly 2 million employees assigned to offices in locations throughout the United States, its territories and possessions, and various foreign countries. The Secretary of State determines the length of an overseas tour for Foreign Service Officers. The tour of duty overseas for Department of Defense (DOD) employees is prescribed by DOD's Joint Travel Regulations (2 JTR). When civilian employees are authorized to relocate in the interests of the government, they are to be authorized to relocate prior to the time they actually move, and they generally have up to 2 years, and can request a third year, from the date that they report to their new location to complete the relocation and receive reimbursement for the associated costs. Therefore, the actual relocation may not take place in the fiscal year that it is authorized. Also, expenses associated with the relocation may be paid to the employee over the 2- to 3-year period. Two federal laws provide government organizations with the primary authority to pay the travel and related expenses of relocating a civilian employee: the Administrative Expenses Act of 1946, as amended, 5 U.S.C. §§ 5701-5742, and the Foreign Service Act of 1980, 22 U.S.C. 4081. GSA's Federal Travel Regulation (FTR), 41 C.F.R., chapters 301 to 304, implements the provisions of the Administrative Expenses Act. FTR, chapter 302, governs the travel and relocation expenses of civilian employees, except those in the Foreign Service. Based on authority provided in the Foreign Service Act, travel and relocation expenses for Foreign Service Officers are prescribed by the Secretary of State in the Foreign Service Travel Regulations. These regulations are contained in volume 6 of the Foreign Affairs Manual (6 FAM). Once any civilian employee is located in a foreign area, his or her travel allowances and differentials are set by the Secretary of State in the Standardized Regulations (Government Civilians, Foreign Areas). Both the Department of State's and GSA's travel regulations authorize federal organizations to pay basically the same expenses for relocations within the United States. These expenses include transportation of individuals, per diem, subsistence, transportation and storage of household and personal effects, and real estate expenses. The key difference is that Foreign Service Officers are not entitled to relocation income tax allowances. Overseas, the Standardized Regulations apply to both Foreign Service Officers and other civilian employees, and generally provide them with the same allowances. These allowances include living quarters allowance, temporary quarters subsistence allowance, and cost of living allowance. One difference is that Foreign Service Officers are entitled to separation travel, which is relocation to anyplace in the United States that they choose upon retirement regardless of where they are located when they retire. On the other hand, other civilian employees returning from overseas are only entitled to reimbursement for travel and relocation expenses to their home of record. Another difference is that Foreign Service Officers may be authorized rest and recuperation travel when assigned to a hardship post.
In order to receive reimbursement for relocation expenses/allowances to which they are entitled, both civilian employees and Foreign Service Officers must sign a service agreement to remain with the government for 12 months after the date that they report to their new duty station, mission, or agency, unless they leave the government for reasons beyond their control and that are acceptable to the agency. An employee who violates the agreement must repay the government the amount it spent to relocate him or her. Neither FTR nor FAM specifically define the term relocation. For the purposes of this report, we define relocation as (1) the transfer, in the interest of the government, of an existing civilian employee or appointee from one office, mission, or agency to another for permanent duty; (2) the moving of a new eligible appointee from his or her actual residence in one location to his or her first office or mission in another location; (3) the return of an existing eligible civilian employee or appointee who is separated from an overseas office or mission to his or her actual residence; and (4) the return of an existing eligible career appointee on retirement from an office or mission to his or her elected residence within the United States, its territories, or possessions. Collecting exact cost information for relocation travel is difficult. Office of Management and Budget (OMB) Circular No. A-11, Preparation and Submission of Budget Estimates and Circular No. A-34 Instructions on Budget Execution require that federal organizations record obligations and expenditures by object class according to the nature of the services or articles procured. There are no object classes dedicated solely to recording relocation travel obligations and expenditures. Rather, relocation obligations and expenditures are captured in at least four different object classes, along with obligations and expenses that are not related to relocation travel. These object classes include (1) 12.1 civilian personnel benefits; (2) 21.0 travel and transportation of persons; (3) 22.0 transportation of things; and (4) 25.7 operation and maintenance of equipment (related to storage of household goods). As a result, relocation obligations and expenditures cannot be extracted from OMB budget/object class data. Instead, relocation obligations and expenditures must be obtained from each federal organization through queries of its automated systems or examination of its travel records. As the government’s travel manager, GSA’s Office of Governmentwide Policy is responsible for establishing governmentwide civilian travel and relocation policy, updating FTR, gathering travel and relocation costs, and providing leadership to develop sound travel and relocation policy. GSA was required by the Federal Civilian Employee and Contractor Travel Expenses Act of 1985, 5 U.S.C. § 5707(c), to periodically, but at least every 2 years, submit to the Director of OMB an analysis of, among other things, estimated total agency payments for employee relocation. GSA is to survey a sampling of agencies, each of which spent more than $5 million on travel and transportation payments in the prior fiscal year. This provision was to expire with the administrator’s submission of the analysis that included fiscal year 1991. The Treasury, Postal Service and General Government Appropriations Act of 1995, Pub. L. No. 103-329 (Sept. 30, 1994), reinstated this provision with no future expiration date. 
GSA collected the required travel information for fiscal years 1989, 1990, and 1991. However, GSA only analyzed the travel information for fiscal year 1989. GSA’s Office of Governmentwide Policy recently distributed its survey to collect travel information, including relocation travel, for fiscal year 1996. To provide the requested civilian employee relocation information, we developed and distributed a questionnaire to 120 federal organizations. We asked the organizations to report their total number of and cost for civilian employee relocations. We also asked whether they had a rotational policy that resulted in civilian employee relocations. We received responses from 119 (or 99 percent) of the 120 organizations surveyed. The Department of Commerce’s Economic Development Administration, which had a civilian workforce of less than 400 employees, did not provide a response. The names of the organizations that we surveyed are listed in appendix I. To develop the questionnaire and ensure its completeness, we researched FTR and OMB Circular No. A-11 to identify the allowances for relocation expenses and the object classes that federal organizations use to record relocation obligations, respectively. We drafted the questionnaire with the assistance of our staff knowledgeable in federal travel and relocation practices. We pretested the questionnaire with the following six organizations: the Bureau of the Census, Defense Educational Activity, Department of State, Drug Enforcement Administration, U.S. Marine Corps, and Office of Personnel Management. Using the pretest results, we revised the questionnaire to help ensure that our questions were interpreted correctly and that the requested relocation information was available. We did not independently verify the accuracy of the civilian employee relocation information that the federal organizations provided or assess the appropriateness of their relocations or the associated cost because of time constraints and the number of organizations surveyed. However, we reviewed each questionnaire for clarity and completeness and followed up with the organization’s contact person in those instances in which the responses were unclear or incomplete. To provide information on rotational policies that resulted in civilian employee relocations, we obtained copies of these policies from the pertinent federal organizations. We reviewed the policies to understand their purposes, their rotational requirements, and which employees were affected. Additionally, we interviewed cognizant officials to discuss the policies in greater detail, clarify specific issues, and determine current use of the policies. Appendix II contains a more detailed description of our objectives, scope, and methodology. We did our work in Washington, D.C., from June 1996 to June 1997 in accordance with generally accepted government auditing standards. Because it was impractical for us to obtain comments from all 119 federal organizations, we requested comments on a draft of this report from the Director of OMB and the Administrator of GSA. Their comments are discussed at the end of this letter. Most of the federal organizations that responded to our survey reported authorizing over 130,000 relocations and the other organizations reported making over 40,000 relocations during fiscal years 1991 through 1995. A small percentage of the organizations reported the majority of the relocations. 
Over half of the relocations authorized or made were reported by 7 percent and 9 percent of the organizations, respectively. In addition, while the total number of relocations authorized and the total number of relocations made fluctuated yearly across the organizations that reported data for all 5 fiscal years, there was moderate overall change between fiscal years 1991 and 1995. Ninety-seven federal organizations that responded to our survey reported that they authorized 132,837 civilian employees to relocate at the government's expense from fiscal year 1991 through fiscal year 1995. However, the total number of relocations authorized is probably understated because 7 of the 97 organizations did not, for various reasons, provide this relocation information for all 5 fiscal years. Also, one organization did not report relocations authorized by one of its components for fiscal year 1991. As shown in figure 1, seven organizations accounted for 52 percent (69,072) of the reported relocations authorized. Among the seven organizations, the number of relocations authorized ranged from 17,881 by the Department of State to 5,509 by the Forest Service. (Appendix III shows the number of relocations authorized for each fiscal year reported by the federal organizations.) Because not all of the 97 federal organizations that reported relocations authorized provided relocation information for all of their components for all 5 fiscal years, we were precluded from determining the total change in relocations authorized. However, 89 organizations did provide relocation information for all 5 fiscal years. Across these organizations, total relocations authorized fluctuated yearly. In fiscal year 1991, total relocations authorized were about 25,600; they continually declined to a low of about 20,080 in fiscal year 1993. Thereafter, total relocations authorized began to increase, and in fiscal year 1995 reached about 25,370. Overall, total relocations authorized decreased less than 1 percent from fiscal year 1991 to fiscal year 1995. The 23 other federal organizations that responded to our survey reported that they made 40,252 civilian employee relocations from fiscal year 1991 through fiscal year 1995. The total number of relocations reportedly made was probably understated because 4 of the 23 organizations—including the Departments of the Army, Energy, and the National Oceanic and Atmospheric Administration, which were among those that made the most relocations—did not provide complete relocation information for all 5 fiscal years. As shown in table 1, the Departments of the Army and the Navy accounted for 21,947 (about 55 percent) of the reported civilian employee relocations made. (Appendix IV shows the number of relocations made for each fiscal year reported by the federal organizations.) Nineteen federal organizations reported relocations made for all 5 fiscal years. Across these organizations, total relocations made varied yearly. Relocations made increased from 3,468 in fiscal year 1991 to 3,759 in fiscal year 1992. In fiscal year 1993, relocations made decreased to a low of 3,426. But, total relocations made increased to 3,622 in fiscal year 1994 and rose to 3,902 in fiscal year 1995. Overall, total relocations made increased about 12.5 percent from fiscal year 1991 to fiscal year 1995. Although relocations reportedly made increased over the 5-year period, this increase was not distributed evenly among the 19 federal organizations.
Two organizations—Defense Logistics Agency (DLA) and Tennessee Valley Authority (TVA)—reported the greatest changes in relocations made. DLA's reported relocations made rose about 221 percent, from 301 civilian relocations made in fiscal year 1991 to 965 in fiscal year 1995. According to a DLA official, the number of civilian relocations made increased substantially during this period due to base realignments and closures and Defense Management Review decisions. These decisions resulted in DLA acquiring control of all DOD supply depots and supporting civilian employees. DLA consolidated these depots, reducing the number from 31 to 23 and relocated employees from closing depots to gaining depots. DLA also consolidated its 9 contract management districts into 2 districts, which led to additional civilian relocations. TVA's reported relocations made decreased by 52 percent, from 1,026 in fiscal year 1991 to 490 in fiscal year 1995. TVA did not provide an explanation for this decrease. The changes in the number of relocations made reported by DLA and TVA generally offset each other. Collectively, the 17 remaining organizations displayed about a 14-percent overall increase in relocations made during this period. Most of the federal organizations that responded to our survey reported obligating over $3 billion for relocations and the other organizations reported expending over $350 million for relocations during fiscal years 1991 through 1995. Again, a small percentage of the organizations reported the majority of the costs. Over half of the total relocation obligations were reported by 8 percent of the organizations, and 70 percent of the total relocation expenditures were reported by 13 percent of the organizations. Across the organizations that provided data for all 5 fiscal years, total relocation obligations and total relocation expenditures varied yearly. When adjusted for inflation, there was a noticeable increase in the total reported relocation obligations and a larger increase in total relocation expenditures. However, the majority of the increase in total relocation expenditures was due to one organization. Ninety-seven federal organizations reported that they obligated about $3.4 billion for employee relocation expenses for fiscal years 1991 through 1995. Fourteen of the 97 organizations did not provide information for all 5 fiscal years, which probably resulted in an understatement of the funds reported obligated. Also, nine organizations did not provide obligations for certain relocation expense categories, and one organization did not provide fiscal year 1994 relocation obligations for its regional offices. As shown in figure 2, eight organizations accounted for over 53 percent (about $1.8 billion) of the total reported obligations for employee relocation expenses. (Each federal organization's reported relocation obligations are located in appendix V.) From fiscal year 1991 through fiscal year 1995, the total reported relocation obligations fluctuated yearly across the 83 federal organizations that provided relocation obligations for all 5 fiscal years. In constant 1995 dollars, total relocation obligations continually decreased from $652.1 million in fiscal year 1991 to $546.9 million in fiscal year 1993. But, total relocation obligations increased in fiscal year 1994 and rose to $759.6 million in fiscal year 1995. Overall, total relocation obligations increased about 16 percent from fiscal year 1991 to fiscal year 1995.
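The percentage changes cited above follow directly from the reported figures; a worked check of the arithmetic, using only the numbers reported in this section, is shown below.

\[
\frac{965 - 301}{301} \approx 2.21 \quad \text{(about a 221-percent increase in DLA's reported relocations made)}
\]

\[
\frac{\$759.6\ \text{million} - \$652.1\ \text{million}}{\$652.1\ \text{million}} \approx 0.16 \quad \text{(about a 16-percent increase in total relocation obligations, in constant 1995 dollars)}
\]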
This increase was not greatly influenced by one organization or a small group of organizations. The 23 other federal organizations reported that for fiscal years 1991 through 1995 they expended over $362.8 million to relocate their civilian employees. Reported relocation expenditures were probably understated because one organization did not report fiscal year 1992 relocation expenditures for one of its components. In addition, 2 of the 23 organizations did not provide expenditures for all expense categories. As shown in table 2, the Departments of Energy and the Navy and the U.S. Information Agency accounted for over $254 million (70 percent) of the total reported expenditures to relocate civilian employees during this period. (Appendix VI shows the reported relocation expenditures for each fiscal year by federal organization.) Annually, total reported relocation expenditures increased across the 22 federal organizations that provided relocation expenditures for all 5 fiscal years and included each of their components. In constant 1995 dollars, total relocation expenditures increased from $45.7 million in fiscal year 1991 to $46 million in fiscal year 1992. In fiscal year 1993, total relocation expenditures increased to $50.3 million; in fiscal year 1994, increased to $81 million; and in fiscal year 1995, rose to $86 million. Overall, total relocation expenditures increased about 88 percent from fiscal year 1991 to fiscal year 1995. The Navy accounted for this increase because its reported relocation expenditures more than quadrupled during this period. Navy’s relocation expenditures reportedly rose about 367 percent, from $11 million in fiscal year 1991 to $51.4 million in fiscal year 1995, in constant 1995 dollars. According to a Navy official, expenditures for civilian relocations increased substantially during this period due to the increase in the number of relocations caused by base realignment and closure decisions. During this period Navy closed or began closing and realigning 114 bases. Excluding the Navy from the total expenditures, the 21 remaining organizations’ total reported relocation expenditures decreased by less than 1 percent, from $34.7 million in fiscal year 1991 to $34.6 million in fiscal year 1995. Fifteen federal organizations reported that they had rotational policies that required some of their civilian employees to relocate on a prescribed schedule. Nine of the 15 organizations reported that they had these policies because they assign their civilian employees to overseas locations and must comply with federal regulations or a treaty that limits such employees’ tours of duty. The six remaining organizations reported that they had these policies either to (1) maintain the safety and security of their civilian employees who may be assigned to dangerous/hazardous locations, (2) maintain their civilian employees’ objectivity when inspecting or auditing specific locations, or (3) enhance the job-related knowledge and experiences of their civilian employees, regardless of where they are assigned. In addition, the 15 federal organizations estimated the annual percentage of their civilian employee relocations that were due to their rotational policies. Among these organizations, their estimated annual percentages ranged from 100 to less than 1. Using the organizations’ estimated percentages, we calculated the estimated impact these policies had on the number of civilian employee relocations authorized and made by the 15 organizations. 
Specifically, we multiplied each organization's percentage by either its reported number of relocations authorized or made. As shown in table 3, 11 organizations' (including some Navy components') rotational policies led to an estimated 24,671 civilian employees being authorized to relocate during fiscal years 1991 through 1995. These relocations authorized—triggered by rotational policies—accounted for about 18.6 percent of the total reported relocations authorized. As shown in table 4, five federal organizations' (including some Navy components') rotational policies resulted in an estimated 2,792 civilian employees being relocated during the same 5-year period. These relocations made—triggered by rotational policies—accounted for about 6.9 percent of the total relocations made that were reported by the organizations we surveyed. On June 5, 1997, we requested comments on a draft of this report from the Director of OMB and the Administrator of GSA. On June 11, 1997, GSA officials, including the Acting Director, Travel & Transportation Management Policy Division, provided oral comments. In general, GSA officials characterized the report as a useful resource that will assist them in fulfilling GSA's legislative requirement to biennially survey agencies and report on, among other things, the estimated cost of civilian employee relocations. GSA officials also provided updated information on the status of GSA's biennial survey and technical comments. On June 11, 1997, OMB staff within the Justice and GSA Branch provided their views on the draft report, which were technical in nature and involved clarification issues. GSA's and OMB's technical comments were incorporated in the report where appropriate. Copies of this report will be sent to the Ranking Minority Members of your Committees; the Administrator of the General Services Administration; the Director of the Office of Management and Budget; all federal organizations included in this report; and other interested parties. Copies will also be made available to others upon request. If you have any questions concerning this report, please call me on (202) 512-4232 or Gerald P. Barnes, Assistant Director, on (202) 512-4228. Major contributors are listed in appendix VII.
Animal and Plant Health Inspection Service
Cooperative, State, Research, Education & Extension Service
Food Safety and Inspection Service
Grain Inspection, Packers, and Stockyards Administration
Office of the Chief Financial Officer
Office of the Inspector General
National Institute of Standards and Technology
National Oceanic and Atmospheric Administration
National Telecommunications and Information Administration
Administration for Children and Families
Agency for Health Care Policy & Research
Centers for Disease Control and Prevention
Health Resources & Services Administration
Substance Abuse & Mental Health Services Administration
Office of Surface Mining Reclamation and Enforcement
U.S. Fish and Wildlife Service
Offices, boards, and divisions
National Highway Traffic Safety Administration
Research and Special Programs Administration
Bureau of Alcohol, Tobacco, and Firearms
Bureau of Engraving and Printing
Bureau of the Public Debt
Federal Law Enforcement Training Center
Office of the Comptroller of the Currency
Departmental Offices and Office of the Inspector General
Department of Housing and Urban Development
As agreed, our objectives were to provide information for the executive branch departments and largest independent agencies on (1) the total number of civilian employees who were relocated at the federal government's expense, (2) the total cost of these relocations to the government, and (3) the agencies that had rotational policies requiring their civilian employees to relocate. To provide the requested relocation information, we developed and distributed a questionnaire to the 14 executive branch departments and the 18 largest independent agencies. Relocation travel at most of the 14 executive branch departments is decentralized, and subordinate agencies/bureaus/administrations controlled their own relocations. Thus, we requested that a separate questionnaire be completed by each federal organization that had control over its relocations. As a result, the questionnaire was distributed to a total of 120 federal organizations. These federal organizations employed about 1.9 million civilian employees, representing 96 percent of the federal civilian workforce as of September 1995. We received responses from 119 of the 120 federal organizations. The Department of Commerce's Economic Development Administration, which had a workforce of less than 400 employees, did not provide a response. Appendix I lists the federal organizations we surveyed. To develop the questionnaire and ensure its completeness, we researched FTR and OMB Circular No. A-11, Preparation and Submission of Budget Estimates, to identify the allowances for relocation expenses and the object classes that federal organizations are to use in reporting relocation obligations. We drafted the questionnaire with the assistance of our staff knowledgeable of federal travel and relocation practices. We pretested the questionnaire with six federal organizations: the Bureau of the Census, Defense Educational Activity, Department of State, Drug Enforcement Administration, U.S. Marine Corps, and the Office of Personnel Management. Using the pretest results, we revised the questionnaire to help ensure that our questions were interpreted correctly and that the requested relocation information was available. Federal organizations are not required to track or keep relocation information in any specific way. During pretesting, we found that organizations maintained relocation travel information at different organizational levels and used different categories to track the information. Organizations had to go through varying levels of effort to provide the information that we requested. Some organizations had centralized automated systems that required them to write special programs to extract the information. Some of the organizations with automated systems had to retrieve the earlier years of information from archives and then run special programs to extract the information we requested.
Other organizations did not have centralized systems or reporting requirements for this type of information and had to query a number of local offices, which in turn had to go through automated or paper records to obtain the information. We also know of at least one organization that had to go through paper records and manually tabulate the number and cost of its relocations. The organizations generally took from 1 to 3 months to complete the questionnaire. Since federal organizations maintained relocation information at different levels and used different categories for tracking purposes, our questionnaire was carefully designed to collect the best and most complete information possible from each federal organization on its number and cost of relocations. The questionnaire allowed organizations to report their relocation information based on the categories they used. As a result, for the number of relocations, 97 organizations reported relocations that they authorized and the other 23 organizations reported the relocations that they made. Similarly, the cost of relocations was reported by 97 organizations as obligations, while the other 23 organizations reported expenditures. To help the organizations report complete cost data, we developed a list of the expense categories related to relocation travel. We developed this list based on our research of FTR and discussions with knowledgeable officials in several federal organizations. Our survey asked the federal organizations to include costs incurred in all of these expense categories and to indicate if there were categories of expenses for which they could not provide cost data. While federal organizations are not required to track or keep relocation data in a specific way, they are required to maintain travel records for 6 years that contain information on reimbursements for individuals. Based on your request for relocation information over the last several years, our questionnaire was designed to collect relocation information for fiscal years 1990 through 1995. However, at the time we sent the questionnaires to the federal organizations, they were required to have data for fiscal years 1991 through 1996, and many organizations could not provide the data for 1990. Therefore, our report presents information for fiscal years 1991 through 1995. Although most federal organizations were able to provide the requested information for fiscal years 1991 through 1995, the total numbers and costs of relocations are understated in three respects. First, 15 organizations were not able to provide any information for 1 or more years for one or two of the four reporting categories. Second, 10 federal organizations reported that they could not provide any cost information for one or more of the expense categories. Lastly, 6 organizations said that the information they reported did not include data from all components for at least 1 year. Federal organizations' reasons for not being able to provide the requested information included (1) records were inaccessible due to asbestos contamination; (2) records were incomplete due to office or base closures or realignments; (3) records had been sent to off-site storage; (4) accounting systems had changed during the period; and (5) the inability to separate relocation-related travel expenses from other travel expenses. We did not independently verify the accuracy of the relocation information that the federal organizations provided because of time constraints and the number of federal organizations surveyed.
However, we reviewed each questionnaire for clarity and completeness and followed up with the federal organization's contact personnel in those instances in which the response(s) was unclear or incomplete. To provide information on rotational policies that resulted in civilian employee relocations, we obtained copies of these policies from the pertinent federal organizations. We reviewed the policies to understand their purposes, their rotational requirements, and which employees were affected. Additionally, we interviewed cognizant officials to discuss the policies in greater detail, clarify specific issues, and determine current use of the policies. We did our work in Washington, D.C., from June 1996 to June 1997 in accordance with generally accepted government auditing standards. We did not request comments on this report from the heads of the 119 federal organizations that responded to our survey because it was impractical. We requested comments on a draft of this report from the Director of OMB and the Administrator of GSA. GSA provided oral comments, which are discussed in this report. In addition, GSA and OMB provided technical comments, which are incorporated in the report where appropriate.
Appendixes III through VI present, for each federal organization, the reported numbers of relocations authorized and made and the reported relocation obligations and expenditures by fiscal year (nominal dollars). Table notes for these appendixes include the following:
UA: data were not available.
Fiscal year 1991 does not include data from Europe.
Not all components of NOAA reported for each fiscal year, most notably, the National Weather Service.
According to an Army official, relocations made reported for fiscal year 1991 are underreported.
Fiscal year 1992 data were not available from the Bonneville Power Administration.
Information reported on a calendar-year basis.
Obligations reported do not include nontemporary storage of household goods expenses.
Obligations reported for fiscal years 1994 and 1995 do not include enroute travel expenses.
Obligations reported do not include overseas renewal agreement expenses.
Questionnaire was sent to multiple installations for completion, but not all installations were able to report obligations for all categories of expenses.
Obligations for fiscal year 1994 do not include regional data.
Obligations reported for fiscal years 1991 to 1995 do not include relocation service contract expenses.
Obligations reported do not include miscellaneous moving expenses.
Obligations reported do not include transportation and storage of household goods, mobile homes, and vehicle expenses.
Expenditures reported do not include overseas renewal agreement expenses.
Expenditures reported do not include enroute travel expenses.
Gerald P. Barnes, Assistant Director; Maria Edelstein, Evaluator-in-Charge; Shirley Bates, Evaluator; Martin DeAlteriis, Social Science Analyst; Stuart Kaufman, Social Science Analyst; Hazel Bailey, Evaluator (Communications Analyst); Robert Heitzman, Senior Attorney.
The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.
U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015
Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC
Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the number of civilian employees relocated during fiscal years (FY) 1991 through 1995 and the associated costs of these relocations, focusing on: (1) the total number of civilian employees who were relocated at the federal government's expense; (2) the total cost of these relocations to the government; (3) the agencies that had rotational policies requiring their civilian employees to relocate; and (4) trends in the number and cost of civilian employee relocations during this period. GAO noted that, for FY 1991 through 1995: (1) 97 federal organizations reported authorizing about 132,800 relocations, and 23 other organizations reported making about 40,200 relocations; (2) a small number of organizations accounted for the bulk of the relocations authorized or made; (3) while the total numbers of relocations authorized and made fluctuated yearly across the organizations that provided data for all 5 fiscal years, there was moderate change in these totals between FY 1991 and 1995; (4) across the organizations that provided data for all 5 fiscal years, the total number of relocations authorized decreased by less than 1 percent (89 organizations) and the total number of relocations made increased by about 12.5 percent (19 organizations) from FY 1991 to 1995; (5) 97 federal organizations reported obligating about $3.4 billion for relocations, and 23 other organizations reported expending about $363 million for relocations; (6) a small number of organizations accounted for the bulk of the relocation obligations or expenditures; (7) across the organizations that provided data for all 5 fiscal years, total relocation obligations varied and total relocation expenditures increased yearly; (8) there was noticeable change in these totals between FY 1991 and 1995; (9) in constant 1995 dollars, total relocation obligations increased about 16 percent (83 organizations) and total relocation expenditures increased about 88 percent (22 organizations) from FY 1991 to 1995; (10) for the 22 organizations, this increase was due to the Department of the Navy's expenditures; (11) excluding the Navy's expenditures, the 21 remaining organizations' total expenditures decreased by less than 1 percent during the period; (12) 15 federal organizations reported that they had mandatory rotational policies requiring some of their employees to rotate on a prescribed schedule; (13) most of these organizations attributed their policies to federal regulations that limit overseas tours of duty; and (14) based on data provided by these 15 organizations, GAO estimated that these rotational policies accounted for about 19 percent of the total relocations reported as authorized and about 7 percent of the total relocations reported as made during this period.
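Statements (9) through (11) above compare spending over time in constant 1995 dollars, that is, nominal obligations and expenditures adjusted for price-level changes before percent changes are computed. The following minimal Python sketch illustrates that kind of adjustment; the deflator values and dollar amounts are hypothetical and are not taken from the report.

    # Illustrative only: hypothetical deflators (1995 = 1.00) and obligation totals.
    deflator = {1991: 0.88, 1992: 0.90, 1993: 0.93, 1994: 0.96, 1995: 1.00}

    def to_constant_1995(nominal_dollars, fiscal_year):
        """Express a nominal-dollar amount in constant 1995 dollars."""
        return nominal_dollars / deflator[fiscal_year]

    def percent_change(first, last):
        """Percent change from the first value to the last."""
        return (last - first) / first * 100

    nominal_obligations = {1991: 600e6, 1995: 790e6}  # hypothetical totals
    real_1991 = to_constant_1995(nominal_obligations[1991], 1991)
    real_1995 = to_constant_1995(nominal_obligations[1995], 1995)
    print(f"Change in constant 1995 dollars: {percent_change(real_1991, real_1995):.1f} percent")

Dividing each year's nominal total by that year's deflator removes the effect of inflation, so the resulting percent change reflects real growth in relocation spending rather than price increases.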
We found weaknesses in the implementation of NASA's export control policy and procedures concerning the CEA function and foreign national access, which increase the risk of unauthorized access to export-controlled technology. Variations in CEA Position, Function, and Resources: NASA's export control policy assigns the CEA responsibility for ensuring that all Center program activities comply with U.S. export control laws and regulations and states that the position should be "senior-level," but it does not define what "senior-level" means. NASA headquarters export control officials define senior-level as a person at the GS-15 level or in the senior executive service; however, we found that no CEAs were at the senior executive service level, three were GS-15s, and the CEAs at the remaining seven centers were at the GS-14 and GS-13 levels. In addition, NASA's export control NPR does not contain a provision on the placement of the export control function and CEA within the center's organizational structure. At some centers, where the CEAs were several levels removed from the Center Director, the CEAs stated that this placement makes it difficult to maintain authority and visibility to staff, to communicate concerns to center management, and to obtain the resources necessary to carry out their export control responsibilities. Conversely, a CEA at another center stated that his placement as Special Assistant to the Center Director creates a supportive environment to incorporate export controls into the project management processes and to require and provide export control training for the majority of center staff. NASA headquarters' export control officials, as well as several CEAs, noted that limitations in staff resources and time spent on export control functions make it difficult to carry out the full range of export control duties, such as improving center export control procedures or providing a more robust export control training program. However, NASA's export control NPR does not discuss the allocation of resources for the export control function or for the CEA within the center, and, according to NASA headquarters' export control officials, each Center Director has discretion over how to allocate resources to the export control function. As a result, we found variation among the centers in the staff resources assigned to the export control function, as shown in figure 1. Moreover, we found indications that the resources assigned to export controls at the centers were not always commensurate with the export control workload. Specifically, 8 of the 10 centers had two or fewer civil servant staff to carry out export control activities for hundreds to thousands of foreign national visits, Scientific and Technical Information (STI) reviews, international agreements, and technical assistance agreements. For example, at one center in 2013, two civilian export control officials working less than full time on export control activities were responsible for reviewing and providing any needed export control access restrictions for over 3,000 foreign national visitors and conducting STI reviews for over 2,000 publications. NASA's procedural requirements for STI require that all STI intended for release outside of NASA or presented at internal meetings where foreign persons may be present undergo technical, legal, and export control reviews, among others, to ensure that information is not unintentionally released through publication.
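The staffing example above can be restated as a simple workload ratio. The short Python sketch below is illustrative only; the visit and review counts come from the example above, while the combined full-time-equivalent figure is an assumption, since the report states only that the two officials worked less than full time on export controls.

    # Illustrative workload-to-staff ratio; the 1.5 FTE figure is an assumption.
    foreign_national_visitors = 3000   # visitor access reviews, per the example above
    sti_publication_reviews = 2000     # STI publication reviews, per the example above
    staff_fte = 1.5                    # assumed combined effort of two part-time officials

    reviews_per_fte = (foreign_national_visitors + sti_publication_reviews) / staff_fte
    print(f"Approximate reviews per full-time equivalent per year: {reviews_per_fte:,.0f}")

At these assumed levels, each full-time equivalent would handle over 3,000 reviews a year, which illustrates why resources at some centers did not appear commensurate with the workload.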
See figure 2 for export control workload by center for fiscal year 2013. The CEA at one of the centers stated that the time to complete required review activities leaves little time to improve procedures or provide more robust training. To address the variations in authority, placement, and resources of the CEAs, we recommended NASA establish guidance defining the appropriate level and placement for the CEA function and assess the CEA workload to determine the appropriate resources needed at each center. NASA concurred, indicating plans to update existing guidance and to explore strategies to enhance support for the export control function. Weaknesses in Foreign National Access: Throughout fiscal year 2013, NASA centers and headquarters approved over 11,000 foreign national visits for periods ranging from less than 30 days to greater than 6 months. NASA's security procedures require that all foreign national visitors be screened before they are approved for access to any NASA facility. However, we identified instances in which NASA security procedures for foreign national access were not followed; these lapses were significant given the potential impact on national security or foreign policy of unauthorized access to NASA technologies. Specifically, at one center, export control officials' statements and our review of documentation identified instances between March and July 2013 in which foreign nationals on one NASA program fulfilled the role of sponsors for other foreign nationals, identifying the access rights to NASA technology for themselves and for other foreign nationals. This did not comply with NASA's security procedures, which provide that only NASA civil servants or JPL employees who are U.S. citizens can act as sponsors for foreign nationals; sponsorship is one step in NASA's process of approving and activating foreign national access. This center is taking action to address the issue and, as of December 2013, it had developed a new approval process and criteria for foreign nationals requesting access to center automated databases and had revised center policies for information systems and foreign national access. We identified planned corrective actions at this and other centers related to the management of foreign national access and, in our April report, we recommended that NASA develop plans with specific time frames to monitor these corrective actions to ensure their effectiveness. NASA concurred and indicated that it plans to take action to increase the effectiveness of its existing procedures and implement improvements. We found that NASA headquarters export control officials and some CEAs faced challenges in providing effective oversight. In particular, the lack of a comprehensive inventory of export-controlled technologies and the underutilization of available oversight tools limit their ability to identify and address risks. Lack of a Comprehensive Inventory of Export-Controlled Technologies: NASA headquarters export control officials and CEAs lack a comprehensive inventory of the types and locations of export-controlled technologies at the centers, limiting their ability to identify internal and external risks to export control compliance. Five CEAs told us that they do not know the types and locations of export-controlled technologies, but rather rely on NASA program and project managers to have knowledge of this information.
NASA's export control NPR provides that NASA Center Program and Project Managers, in collaboration with CEAs, are to identify and assess export-controlled technical data. Additionally, NASA Center Project Managers are required by NASA's export control NPR to provide appropriate safeguards to ensure that export-controlled items and technical data are marked or identified prior to authorized transfer to foreign parties, consistent with export control requirements. The CEA and security chief at one center told us that they requested a plan identifying where export-controlled and sensitive technologies are located within a research branch in order to facilitate foreign national visit requests. The branch manager said he was unable to provide this information, stating that it would be too cumbersome to map out all of that information and try to restrict access to the areas with sensitive technologies. Assessing areas of vulnerability, including identifying and assessing export-controlled items, could better ensure that consistent procedures are followed. NASA's lack of a comprehensive inventory of its export-controlled technologies is a longstanding issue that the NASA Inspector General identified as early as 1999. Three centers recently began efforts to identify export-controlled technologies—one of which involves coordination with the center counterintelligence officer. Specifically, at this center, the counterintelligence office collaborated with the CEA to conduct a sensitive technology survey—designed to identify the most sensitive technologies at the center—to better manage risks by developing protective measures for these technologies in the areas of counterintelligence, information technology security, and export controls. Such approaches, implemented NASA-wide, could enable the agency to take a more risk-based approach to oversight by targeting existing resources to identify the most sensitive technologies and then ensuring that the locations of such technologies are known and protected. To implement a risk-based approach, we recommended NASA build on existing information sources, such as assessments by NASA's counterintelligence office, to identify targeted technologies. In its response, NASA highlighted plans to implement a risk-based approach that would include CEAs, program managers, and counterintelligence officials. Underutilization of Oversight Tools: NASA's oversight tools, including annual audits, export control conferences with CEAs, and voluntary disclosures, have identified deficiencies, but NASA headquarters has not addressed them. Specifically, we found that seven centers have unresolved findings, recommendations, or observations spanning the period from 2005 to 2012 in areas including export control awareness, management commitment, resources, training, foreign national visitor processes, and disposal of property. At five centers, responding to audit findings and implementing recommendations required that the CEA coordinate with other offices and programs across the center beyond the CEA's control. The remaining two centers cited resource constraints, organizational priorities, and insufficient coordination with center management as barriers to implementing corrective actions and resolving recommendations. NASA's current procedures do not address coordination among offices at a center to address findings from annual audits.
Further, NASA headquarters export control officials hold annual export control program reviews with the CEAs to discuss export control changes and CEA concerns and recommendations for the program. At NASA’s 2013 annual review, the CEAs presented NASA headquarters export control officials with a list of comments regarding the export control program, many of which echo the issues raised in our April 2014 report, such as CEA position and resources, foreign national access, and awareness of export-controlled technologies. NASA headquarters’ export control officials stated that they agree with the issues raised by the CEAs but acknowledged that they have not fully addressed the CEA concerns from the most recent program review in March 2013 and have not developed specific plans to do so. In fact, we found that over the last 3 years, NASA headquarters export control officials provided only one policy update or other direction to address export control concerns raised by the CEAs. In our April report, we made two recommendations to address underutilization of the audit and program review tools. To ensure implementation of audit findings, we recommended that NASA direct Center Directors to oversee implementation of the audit findings. Similarly, we recommended that NASA develop a plan, including timeframes, to ensure CEA issues and suggestions for improvement are addressed. NASA concurred and plans to revise existing guidance. NASA may also be missing an opportunity to use voluntary disclosures to help improve export control compliance. NASA’s export control NPR provides that it is every NASA employee’s personal responsibility to comply with U.S. export control laws and regulations; and further provides the Departments of State and Commerce’s regulatory requirements for voluntary self disclosure of noncompliance in export activities, even if the errors were inadvertent. NASA’s headquarters’ export control program officials told us that few or no voluntary disclosures might indicate a weakness in a center’s export control program. We found little usage of the voluntary disclosure process at the NASA centers: a total of 13 voluntary disclosures divided among four of the NASA centers since 2011, and potential noncompliance ranged from failure to file a record of shipment to Germany to potential foreign national exposure to a program’s technical data. The remaining six NASA centers have not submitted voluntary disclosures since 2011. We found that a similar event may lead to a voluntary disclosure at one center but not another and that CEA approaches toward voluntary disclosures at some centers may affect NASA’s ability to identify and report potential violations of export control regulations. To ensure consistency in reporting potential export control violations, in our April 2014 report, we recommended that NASA re-emphasize to CEAs the requirements on how and when to notify headquarters. NASA concurred and plans to revise and develop additional guidance. As stated above, NASA concurred with all of our recommendations and stated that our findings and recommendations complement results from the recent reviews by the NASA’s Inspector General and the National Academy of Public Administration. Further, NASA stated in its response to each of these reviews that it plans to adopt a more comprehensive, risk-based approach to enhance its export control program. 
Subsequent to our report, the NASA Administrator issued an email to all employees reiterating the importance of the export control program and announcing plans to expand the online and in-person export control training. This is an important step, as it sets a tone from the top and could help ensure that the centers apply consistent approaches. However, it will be important for NASA to be vigilant in assessing actions taken to help ensure effective implementation and to avoid a relapse into former practices. Collectively, improvements in all of these areas can help NASA strike an effective balance between protecting the sensitive export-controlled technologies and information it creates and uses and supporting international partners and disseminating important scientific information as broadly as possible. Chairmen, Ranking Members, and members of the subcommittees, this concludes my prepared remarks. I would be happy to answer any questions that you may have. For questions about this statement, please contact Belva Martin at (202) 512-4841 or martinb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include William Russell, Assistant Director; Caryn Kuebler, Analyst-in-Charge; Marie Ahearn; Lisa Gardner; Laura Greifner; Amanda Parker; and Roxanna Sun.
NASA develops sophisticated technologies and shares them with its international partners and others. U.S. export control regulations require NASA to identify and protect its sensitive technology; NASA delegates implementation of export controls to its 10 research and space centers. Recent allegations of export control violations at two NASA centers have raised questions about NASA's ability to protect its sensitive technologies. GAO was asked to review NASA's export control program. This report assessed (1) NASA's export control policies and how centers implement them, and (2) the extent to which NASA Headquarters and CEAs apply oversight of center compliance with its export control policies. To do this, GAO reviewed export control laws and regulations, NASA export control policies, and State and Commerce export control compliance guidance. GAO also reviewed NASA information on foreign national visits and technical papers and interviewed officials from NASA and its 10 centers as well as from other agencies. Weaknesses in the National Aeronautics and Space Administration (NASA) export control policy and implementation of foreign national access procedures at some centers increase the risk of unauthorized access to export-controlled technologies. NASA policies provide Center Directors wide latitude in implementing export controls at their centers. Federal internal control standards call for clearly defined areas of authority and establishment of appropriate lines of reporting. However, NASA procedures do not clearly define the level of center Export Administrator (CEA) authority and organizational placement, leaving it to the discretion of the Center Director. GAO found that 7 of the 10 CEAs are at least three levels removed from the Center Director. Three of these 7 stated that their placement detracted from their ability to implement export control policies by making it difficult to maintain visibility to staff, communicate concerns to the Center Director, and obtain resources; the other four did not express concerns about their placement. However, in a 2013 meeting of export control officials, the CEAs recommended placing the CEA function at the same organizational level at each center for uniformity, visibility, and authority. GAO identified and the NASA Inspector General also reported instances in which two centers did not comply with NASA policy on foreign national access to NASA technologies. For example, during a 4-month period in 2013, one center allowed foreign nationals on a major program to fulfill the role of sponsors for other foreign nationals, including determining access rights for themselves and others. Each instance risks damage to national security. Due to access concerns, the NASA Administrator restricted foreign national visits in March 2013, and directed each center to assess compliance with foreign national access and develop corrective plans. By June 2013, six centers identified corrective actions, but only two set time frames for completion and only one planned to assess the effectiveness of actions taken. Without plans and time frames to monitor corrective actions, it will be difficult for NASA to ensure that actions are effective. NASA headquarters export control officials and CEAs lack a comprehensive inventory of the types and location of export-controlled technologies and NASA headquarters officials have not addressed deficiencies raised in oversight tools, limiting their ability to take a risk-based approach to compliance. 
Export compliance guidance from the regulatory agencies of State and Commerce states the importance of identifying controlled items and continuously assessing risks. NASA headquarters officials acknowledge the benefits of identifying controlled technologies, but stated that current practices, such as foreign national screening, are sufficient to manage risk and that they lack resources to do more. Recently identified deficiencies in foreign national visitor access discussed above suggest otherwise. Three CEAs have early efforts under way to better identify technologies which could help focus compliance on areas of greatest risk. For example, one CEA is working with NASA's Office of Protective Services Counterintelligence Division to identify the most sensitive technologies at the center to help tailor oversight efforts. Such approaches, implemented NASA-wide, could enable the agency to better target existing resources to protect sensitive technologies. In April 2014, GAO recommended that the NASA Administrator establish guidance to better define the CEA function, establish time frames to implement foreign national access corrective actions and assess results, and establish a more risk-based approach to oversight, among other actions. NASA concurred with all of our recommendations and provided information on actions taken or planned to address them.
“Offshoring” generally refers to an organization’s replacement of goods and services produced domestically with imports from foreign sources. For example, if a U.S.-based company decides to move its computer programming activities to an overseas supplier, this would be considered offshoring. The overseas supplier may be an affiliate of the company, in which case the company has also invested overseas. In contrast, the supplier may be unrelated to the domestic company, in which case the company has outsourced its computer programming activities, as well as offshored them. Semiconductors are devices that enable computers and other products such as telecommunication systems to store and process information. Semiconductor device fabrication is the process used to create “chips,” the integrated circuits that are present in everyday electrical and electronic products. It is a multiple-step sequence of photographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of pure semiconducting material, most commonly silicon. Improvement in the performance of increasingly sophisticated electronics products depends on more powerful semiconductors that can store more information and process it faster. Demand for semiconductors is driven by the demand for computers and communications products that use them. The semiconductor manufacturing process can be divided into three distinct stages: (1) design of the semiconductor integrated circuit, (2) fabrication of the semiconductor wafer, and (3) assembly and testing of the finished integrated circuit. The design and fabrication processes are the most capital-intensive, while the assembly and testing process tends to be more labor-intensive, although still relatively technologically sophisticated. For example, semiconductors are designed by computer engineers with the assistance of advanced software. They are then fabricated using chemicals, gases, and materials combined in an intricate series of operations using complex manufacturing equipment to produce wafers containing a large number of chips. During assembly, the chips are assembled into the finished semiconductor components and tested for defects. The finished semiconductor consists of millions of transistors and other microscopic components. The technological complexity of semiconductors is indicated by the diameter of the wafer and the density of the etched lines (feature size) on the wafer. The size of the wafer is an important element because the number of chips per wafer increases dramatically as the wafer size increases. The current leading-edge manufacturers produce 12-inch (300 millimeters) wafers. Smaller feature size measured in microns allows for more components to be integrated on a single semiconductor, thus creating more powerful semiconductors. Each reduction in feature size—from 0.35 micron to 0.25 micron, for example—is considered a move to greater technological sophistication. The software services industry also includes several types of services and levels of technological sophistication. Software services include writing individual software programs or combined “modules;” supporting these programs and modules once they are installed on computers; designing software networks, which might include various software programs, as well as systems of networks; integrating and maintaining these networks and systems as they are applied to clients’ tasks; and managing and operating clients’ overall computer systems. 
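The wafer-size and feature-size relationships described above can be illustrated with rough geometry: chip count scales with wafer area, and component density scales roughly with the inverse square of feature size. The Python sketch below is a simplification that ignores edge losses, scribe lines, and defect yield; the die size and feature sizes used are hypothetical.

    import math

    def approx_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
        """Approximate gross die count as wafer area divided by die area (simplified)."""
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        return int(wafer_area / die_area_mm2)

    die_area = 100.0  # hypothetical 10 mm x 10 mm die
    for diameter_mm in (200, 300):  # roughly 8-inch and 12-inch wafers
        print(f"{diameter_mm} mm wafer: about {approx_dies_per_wafer(diameter_mm, die_area)} dies")

    # Component density scales roughly with 1 / (feature size)^2, so moving from
    # 0.35 micron to 0.25 micron allows roughly (0.35 / 0.25)**2, or about twice,
    # as many components in the same chip area.
    print(f"Approximate density gain, 0.35 to 0.25 micron: {(0.35 / 0.25) ** 2:.1f}x")

Under these simplifying assumptions, moving from a 200 millimeter to a 300 millimeter wafer yields roughly 2.25 times as many dies per wafer, which is why larger wafers and smaller feature sizes are both treated as markers of greater technological sophistication.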
Software services are now broadly diffused throughout the U.S. economy. Firms across most industries now use some form of software services— whether it is basic accounting software, inventory control software, or a much more complex software product applied to manufacturing operations. Automobile companies, for example, use advanced computer software in the design of new car models, on production lines that manufacture these cars, and in the cars themselves that now contain electronic components. Software services generally range in complexity from routine software programming and testing to complex software programming, software project management, and higher-end software systems integration, architecture, and research. In general, software programs and modules can be produced in various locations; integrating these requires some focal points capable of working closely with the various locations. Both semiconductor manufacturing and software services are key industries within the broader information and communications technology (ICT) sector; they have contributed significantly to overall U.S. growth and productivity. For example, semiconductor and related device manufacturing in the United States represents about 24 percent of the total value of the ICT sector’s computer and electronic products manufacturing. Software services comprise about 48 percent of the total production of the categories of services industries included in the broader ICT sector— publishing industries (includes software), information and data processing services, and computer systems design and related services (averaged over 1990 to 2004). Although the ICT sector represents a small share of the overall U.S. economy (about 4 percent), it has contributed significantly to U.S. economic expansion. According to the Department of Commerce’s Bureau of Economic Analysis (BEA), the ICT sector accounted for about 11 percent of total economywide value-added growth in 2004. Examining value-added growth is a useful way to compare growth rates across industries because it measures only the increase in output due to that industry, excluding any inputs or materials from other industries. Therefore, value-added growth measures the changes in output due to increases in factors such as labor and capital and to improvements in the productivity of those factors. Figure 1 shows that, from 2002 to 2004, the ICT sector’s growth in real value added accelerated more than any other industry group. Although the ICT sector’s growth slowed in 2001 during the recession, annual real growth has recently accelerated from 2.0 percent in 2002, to 6.7 percent in 2003, and to 12.9 percent in 2004. The ICT sector also contributes to productivity in the rest of the economy. For example, other manufacturing and services sectors, such as automobiles and banking, have become more productive as they have used the latest products and advances from the ICT sector. Economic research has generally found that the investments made in ICT sector products by other industries contributed to a rapid economywide increase in productivity during the 1990s. In addition, the technological advances and competition within the sector have resulted in declining prices and rising performance in ICT products. This, in turn, has contributed to lower rates of inflation throughout the economy as other sectors benefit from these improvements. We present information on multinational companies’ global operations in semiconductor and software services in appendix II. The U.S. 
semiconductor industry has foreign operations in several locations, notably in Taiwan and China. The U.S. software services industry has turned to India for a significant share of its offshoring operations. The types of semiconductor manufacturing and software services that U.S. firms have offshored to Taiwan, China, and India have become more complex over time. U.S. semiconductor firms first offshored labor- intensive assembly operations in the 1960s, then wafer fabrication, and more recently, higher value-added activities, such as advanced fabrication and design. The offshoring of software services largely began in the 1990s in preparation for the year 2000 transition. Much like semiconductor products, the types of software services that firms have offshored have become progressively more complex as firms expanded their offshore operations to customized applications requiring highly skilled workers. Offshoring in semiconductor manufacturing began in the 1960s with labor- intensive manufacturing activities, such as assembly. U.S. firms invested in overseas manufacturing facilities to perform the labor-intensive assembly of semiconductors for export to the United States. Firms domestically sourced the design and fabrication of higher-skilled, more capital-intensive semiconductor manufacturing activities and then shipped the semiconductors to Asia for assembly. The finished semiconductors were returned to the United States for final testing and shipment to the customer. According to some industry experts, offshoring of assembly work kept the U.S. semiconductor industry cost-competitive as new foreign rivals emerged in countries such as Japan. The overall U.S. business models for semiconductor manufacturing changed in the 1980s. Two types of company models developed for semiconductor production. Some companies, known as Integrated Device Manufacturers (IDMs), conduct their own research, produce their own designs, and operate their own fabrication plants to produce semiconductor wafers. Other companies, known as fabless design firms, develop their own designs and contract with independent fabrication plants, known as foundries, to produce their wafers. Foundries emerged during the 1980s as firms in Asia, particularly Taiwan, began to specialize in wafer fabrication. With the emergence of overseas foundries, U.S. firms developed global supply chains for sourcing different parts of the semiconductor production process over multiple global locations. They continued to design in the United States and other developed countries, while contracting with foundries in Taiwan to perform capital-intensive wafer fabrication. They also continued domestic fabrication, but Asian countries increased their share of overall production—with Taiwan expanding as a major supplier of fabrication services and China emerging as a new source of fabrication services in the late 1990s. In recent years, some U.S. firms have offshored increasingly complex semiconductor fabrication and design activities—essentially going up the value chain (see fig. 2). As firms in other countries, notably Taiwan, became more adept at producing more complex semiconductors, U.S. firms increasingly turned to offshore manufacturers to produce these semiconductors. The most complex semiconductors now manufactured in fabrication plants (commonly called fabs) are 12-inch (300 millimeter) wafers with submicron feature size. U.S. firms were leaders in developing 12-inch wafers. 
According to industry experts, firms have offshored design services to Taiwan in part to maintain close contact with Asian customers and meet their specific requirements. Also, some experts have noted that, as semiconductor manufacturing becomes more complex, it becomes all the more important to develop close relationships between design and manufacturing activities so as to enable feedback discussions. The gap in semiconductor manufacturing capabilities between the United States and both Taiwan and China has narrowed. Currently, Taiwanese and Chinese foundries are capable of producing technologically sophisticated semiconductors. For example, Taiwanese foundries are now capable of producing integrated circuits as small as 0.09 microns, and some Taiwanese firms provide design services to support this level of semiconductor technology. In addition, according to industry experts, the newest semiconductor manufacturing facilities in China are capable of producing integrated circuits with feature sizes as small as 0.13 microns, with one Chinese foundry known to be producing circuits at the 0.09 micron size. Thus, the most advanced manufacturing facilities in Taiwan and China currently manufacture integrated circuits that are only one generation or less behind the state of the art. The software services industry was one of the first services industries to offshore significant activities as U.S. firms recruited foreign software programmers, particularly in India. Before the widespread use of the Internet, it was not economical to export software. U.S. firms either invested in overseas affiliates in India to directly provide software services for the firm or hired Indian programmers to work temporarily on-site at firms' U.S. locations. Beginning in the 1990s, Internet communications combined with the availability of satellite connections and reduced telecommunication costs made it possible for foreign software programmers to remain abroad while working for U.S. clients. Many types of U.S. firms began re-engineering their business processes to concentrate on core competencies and outsource or offshore other activities, such as writing software programs. The offshored activities were those that could be reduced to step-by-step instructions, digitized, and performed at a distance. In the late 1990s, preparations for the year 2000 changeover contributed to U.S. firms' further use of foreign software programmers who were knowledgeable in certain programming languages. U.S. firms turned not only to foreign software programmers who were temporarily employed in the United States but also to programmers overseas, particularly in India, who provided work directly to U.S. clients.
For example, one firm in India stated that a firm might begin a high- end software development project in India and then transfer the work to a team in Ireland for further development before delivery to a U.S. client. Firms also use global teams to better serve local markets worldwide by providing customized programming services to local clients. Currently, the types of offshored software services activities now include advanced software engineering and research and development. For example, in recent years Indian and multinational firms, including U.S. affiliates, have established high-technology research and design facilities in India to perform such high-end software services as software engineering and software product development. According to software services industry experts in India, many of these facilities employ hundreds of software engineers to develop and test a wide range of new high-end software designs and products for export to global customers. Some firms in India stated that the quality of high-end software design and development activities in India, combined with firms' need to introduce new products and new technologies, have attracted increasing interest in offshoring software development to India. Nevertheless, the bulk of offshored software services in India can be characterized as lower-level work, mostly in the applications development segment of the industry. Applications development primarily requires programming skills and has limited face-to-face interaction. Moreover, applications development can easily be segmented and standardized, features that characterize offshoring software services. The combination of technological advances, available human capital, and foreign government policies has created a favorable environment for offshoring. Many firms in semiconductor manufacturing and software services use offshoring in their business models to increase their global competitiveness by lowering costs and gaining access to foreign markets. Advances in telecommunications enabled semiconductor firms to improve their logistics and inventory controls; they also were particularly important to the offshoring of software services. Firms in both sectors initially sought low-cost labor, but they expanded the scope of their offshoring activities as they discovered and helped develop highly educated workforces in Taiwan, China, and India. Foreign government policies played different roles in the countries we visited. In Taiwan and China, the national governments pursued various industrial policies to promote semiconductor manufacturing and, in India, the loosening of regulations and the availability of government-supported software technology parks afforded the software industry opportunities to grow relatively unregulated. Although offshoring conveys benefits to firms that choose to locate operations overseas, it also encompasses business risks that challenge management skill. See table 1 for an overview of the factors that have contributed to increased offshoring. Improvements in telecommunication technology helped to expand the degree of offshoring in both semiconductor manufacturing and software services. With improved communications, U.S. semiconductor firms were able to create tighter linkages with overseas suppliers, and software services firms developed global teams that could transfer digitized information over the Internet. 
Semiconductor manufacturing firms improved their management of supply chains through better telecommunications, logistics management, and modern transportation. Telecommunications has allowed better monitoring of the movement of products. For example, foundries in Taiwan use Internet-enabled software that allows real-time communication between engineering teams in different locations. Some U.S. companies use radio- frequency identification tags in Taiwan and China to track products shipped from these manufacturing locations to distribution centers in other countries. According to a representative of one U.S. firm, this technology has reduced the need for inventory sourcing redundancy, thus reducing inventory cost and the associated employment costs. Logistics management is an important part of global business. Taiwan’s competitive logistics industry has offered advanced computerized systems that assist in the management of purchasing, storage, delivery, and distribution of products. According to a Taiwan government official, Taiwanese companies can provide production orders to their clients in 2 days. According to an industry researcher, the automation of the semiconductor assembly process also has improved efficiency in the overall semiconductor infrastructure, such as packaging facilities. Modern transportation options using more powerful computer systems, advanced software, and telecommunications make faster delivery possible. Countries are upgrading all elements of their transportation infrastructure—airports, seaports, modern roads, and trucking. Because a product may travel around the world more than once during the production process, efficient transportation systems are essential. For example, China has made numerous improvements to its transportation infrastructure to permit more efficient distribution. According to one Internet firm operating in China, the transportation infrastructure within China for delivering the physical products to customers—an essential component for online auction sites—did not exist before the year 2000. China reportedly invested $30 billion in 2004 alone to improve its network of roadways. In the software services sector, telecommunications improvements have changed the types of software services traded, the way the work is done, and the telecommunications investments made. First, the essential advance in IT—the introduction of Internet communications—made it possible to trade some services that were previously not tradable. For example, software programs written in standardized programming languages could be digitized and transferred worldwide over the Internet. Second, global teams have become common elements of firms’ business strategies. The ability to transmit data electronically made it possible to specify an application in one firm and develop it in another. Because of the availability of the Internet, teams can work 7 days a week, 24 hours per day to meet customer needs worldwide. These teams’ operations could be set up relatively quickly with office space, utilities, and communication tools, such as personal computers with broadband access. The ease of undertaking this type of offshoring has led to an escalating use of offshored IT services, including but not limited to software programming. According to one research firm, the value of IT offshoring and business process offshoring totaled $34 billion in 2005 and could double by 2007. 
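The around-the-clock global teams described above rely on staggered working hours across time zones. The Python sketch below is purely illustrative; the site names and working windows are hypothetical and are chosen only to show how handoffs between offices can extend coverage of the working day.

    # Hypothetical sites and working windows, expressed as whole hours in UTC.
    shifts_utc = {
        "San Jose": (16, 24),    # roughly 9 a.m. to 5 p.m. Pacific time
        "Bangalore": (3, 11),    # roughly 8:30 a.m. to 4:30 p.m. India time
        "Dublin": (9, 17),       # roughly 9 a.m. to 5 p.m. Irish time
    }

    covered_hours = set()
    for start, end in shifts_utc.values():
        covered_hours.update(range(start, end))
    print(f"Hours of each day covered by at least one site: {len(covered_hours)} of 24")

With these assumed schedules, work can be handed from one site to the next so that roughly 21 of every 24 hours are covered, which approximates the continuous, customer-driven coverage described above.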
Finally, the services offshoring model has required investments in global telecommunications infrastructure, such as wired landline and satellite communication services. India has made the investments to facilitate the telecommunications industry. According to the government of India, in 2005, 47 million landline connections and 65 million satellite connections existed in India. Moreover, in 2004, after the telecommunication sectors declined due to overcapacity, one major Indian telecommunications services firm, partly owned by the government of India, purchased a large, privately owned U.S. undersea fiber-optic network linking Asia, Europe, and North America after receiving national security approval from the U.S. government. This acquisition strengthened India’s control of low-cost telecommunications infrastructure. According to an Indian government official and several U.S. companies operating in India, the growth in telecommunications infrastructure has also enabled firms to move from India’s major cities to smaller, lower-cost surrounding cities. The availability of high-quality workers overseas has been an essential component of the increased use of offshoring for firms in the semiconductor manufacturing and software services sectors. Through experience and training, the talent pool in several countries demonstrated their value to firms seeking skilled workers to perform tasks with various degrees of complexity. Access to human capital played an important role in the relocation of semiconductor manufacturing firms to Taiwan and China, especially as the need for skilled labor arose, and a quality workforce emerged in these countries. During the earlier phase of semiconductor offshoring in Taiwan, workers did not need advanced training. Taiwan emphasized vocational training during this period. Industry experts stated that, although lower- cost labor was initially attractive for assembly, the labor costs component in semiconductor manufacturing is not a decisive factor for companies’ location decisions overseas. New technology has computerized the entire production process, leading to a reduced need for labor and an increased need for skilled workers and managers. According to the representative of one research firm, the quality of the Chinese and Taiwanese workforce makes it easy to train and retain workers in semiconductor assembly and manufacturing. Taiwan, China, and India are each able to provide a quality workforce, with a plentiful supply of engineers including emigrants who have returned to work in their home countries. Highly trained professionals with experience in U.S. firms assisted the development in each of these three countries of their semiconductor and software industries. According to one research firm, more than 5,000 overseas students and professionals return to China each year, bringing with them Western knowledge and skills. For example, several firms operating in China told us that Chinese returnees who have studied or worked abroad are an important part of their staffs. India, Taiwan, and China are each graduating IT and other engineers in large numbers. For example, China’s potential supply of engineers is large; according to one U.S. study, the number of Chinese engineering graduates with bachelor’s degrees in 2004 numbered 351,537, as compared with 137,437 in the United States. Moreover, engineers in Taiwan, China, and India typically earn less than their counterparts in the United States. 
For example, engineers from Taiwan's domestic supply can be hired at approximately half the cost of engineers in the United States. We reported in 2004 that access to human capital, particularly lower-wage skilled labor, an educated workforce, and quality local vendors, facilitated software services offshoring. India is the leading example of this trend. For example, Indian wages represent a fraction of the cost of hiring U.S. counterparts, with the salaries for Indian IT engineers starting at $5,000. According to industry experts, the increasing demand for these workers is causing salary rates to increase somewhat. Yet lower wages do not tell the entire story, because India also provides a skilled workforce. India's leading software services association reports that 44 percent of India's services professionals possess at least 3 years of work experience. Moreover, many Indian nationals who studied computer technology in the United States and gained experience with U.S. IT firms have begun to return to India to pursue career opportunities in their native country. Some of these individuals have gone on to create or lead successful firms in India. India has a strong national emphasis on advanced technical education, and its scientific and educational institutions produce well-trained scientists and engineers. The highly competitive Indian Institutes of Technology train the upper echelon of talented students and, according to one industry researcher, produce highly skilled engineers with capabilities that match or exceed U.S. talent. In addition, an industry researcher in India stated that nontechnical programs are beginning to offer computer science and software programming courses to prepare students to meet the market demand of the software services sector. According to India's software services association, of the 215,000 engineering graduates in 2003 to 2004, 141,000 specialized in IT (e.g., computer science, electronics, and telecommunications). India's use of the English language gives it a further advantage, making India a prime destination for services offshoring. Finally, the quality of the firms in India is another factor that is considered when firms decide to offshore services. The quality of local vendors, many with Capability Maturity Model (CMM) certifications, provides a sense of security to firms seeking to offshore software services to India. According to a business association in India, Indian companies work to attain these certifications to demonstrate the high quality of their work. For example, a business representative told us that more than 50 percent of the companies that have CMM Level 5 certifications are located in India. With the update of the CMM to the Capability Maturity Model-Integration (CMMI), the Software Engineering Institute reports 93 Indian and 74 U.S. entities (41 percent and 32 percent, respectively, of the world total) with CMMI certifications as of March 2006. Foreign government policies contributed to the development of dynamic semiconductor and software services sectors with opportunities for U.S. firms to offshore. The governments of Taiwan and China developed a broad range of policies to promote their respective indigenous semiconductor industries and to attract investment, technology, and talent from abroad. India, in its transition from a socialist government to a market-based economy, has liberalized its software services market, thus permitting U.S. firms to access India's low-cost, high-quality workforce.
Taiwan has long pursued industrial policy to encourage the domestic development of science and technology. In 1972, it established a national research institute and within that organization an office to develop its semiconductor industry. Drawing upon the expertise of a U.S. advisory group, Taiwan successfully duplicated elements of the Silicon Valley technology cluster by establishing science-based industrial parks that brought together major universities, research labs, and a dynamic venture capital industry. Its universities feature programs sponsoring research specific to semiconductors, and the government targeted financial and tax incentives to the semiconductor industry. The government also emphasizes vocational training to develop quality resources. As a result, the government of Taiwan helped position its semiconductor industry as an effective contract supplier integral to the U.S. semiconductor supply chain. Its industrial strategy, which has been characterized as “close followership,” integrated Taiwan’s industry operations with those of U.S. companies. Although this strategy means that Taiwan’s industry may be a step behind the U.S. industry, firms in Taiwan capture high-technology industrial and research functions. As a result of its efforts, Taiwan is now a leading semiconductor producer with top-level manufacturing expertise. Taiwan’s support of a strong semiconductor sector continues to evolve with a project that focuses on integrated circuits manufacturing infrastructure. The government is providing partial financial support to this project, which includes the expansion of university-based training, investments in new technologies, and a design park to focus on system-on- a-chip design. With added pressure from the opening of China’s market and the competition from Chinese firms, Taiwan is revisiting its restrictions on the level of technology that firms may transfer to mainland China. In April 2006, Taiwan announced it was removing restriction of the export of low-end semiconductor packaging and testing technology to China. China’s current policies have helped its semiconductor sector to grow dramatically since 2000, but its wafer production represents a relatively small percentage of worldwide production. Nevertheless, China is considered a rising player in the field of advanced technology. Prior to 2004, China’s differential value-added tax, since normalized, was a notable policy that led to an influx of semiconductor firms into China —notably from Taiwan—that sought to avoid the impact of the tax. Following Taiwan’s strategy, China is creating a modern infrastructure to support semiconductor operations. For example, the government provides tax incentives, preferential loans, and opportunities to locate in special economic zones and science parks. China announced, in 2006, the adoption of a 15-year national technology strategy to develop, among other things, a world-class information sector and to focus on developing independent innovation. The result of China’s policies is an expanding semiconductor sector that relies heavily on the expertise of Taiwan’s managers and other expatriates whom China is actively recruiting to return to the mainland. India’s policy for software services differed from the deliberate industrial policy undertaken by Taiwan and China. India’s government policy shifted from protection of domestic industries to a gradual liberalization of some regulations. 
Although India maintains significant controls on some industries, the software services sector was not affected by some of the most restrictive policies, given the small size of its enterprises. Entrepreneurs in the software services sector were able to build the industry based on the special attributes of India —its English-speaking population, its supply of IT professionals, and its favorable telecommunication infrastructure. Between the 1950s and 1980s, India generally protected domestic firms from foreign competition and undertook a policy of import substitution. India pursued policies that sought to support state-owned enterprises. Where private firms were permitted to operate, a cumbersome licensing bureaucracy controlled their operations. Initially prevented from expanding into higher value-added segments of the industry in the 1980s, software services firms nevertheless found areas of specialization that the government did not restrict. In 1991, India experienced a shortage of foreign exchange, which required liberalization of its economy as a condition to gain support of the International Monetary Fund. This led to further deregulation, which enabled software services to expand. Moreover, in the 1990s, India introduced software technology parks, which are similar to export processing zones. Firms in these parks were given tax exemptions, access to high-speed satellite links, and reliable electric power. India’s technical universities trained large numbers of engineers and specialists in their highly selective IT programs. Later reforms of foreign ownership rules, intellectual property protections, and venture capital policy further opened the way for trade in services. Firms seeking to offshore also encounter risks, including unforeseen costs, geopolitical concerns, cultural differences, infrastructure adequacy, and foreign government requirements. The destination country’s legal system and contract enforcement affect firms’ decisions to offshore. Both the semiconductor and software services industries have specific concerns about countries’ intellectual property protection for their products and make location decisions accordingly. It should also be noted that offshoring places higher demands on firms’ internal management skills. Managers must be able to lead teams with cultural differences, establish metrics to assess contract performance, and manage teams located around the world, using telecommunications as a primary tool. Although firms have found some cost savings in labor, nevertheless, they have also found other management challenges that tend to moderate the overall cost savings. One recurrent concern of U.S. firms operating in China is the lack of middle managers with the combination of business training, business acumen, management skill, and creative thinking. While offshore suppliers are playing a larger and more sophisticated role as the industries globalize, the U.S. semiconductor and software industries have remained technological leaders in the most advanced research and development (R&D) and design work, and the United States remains one of the largest producers globally of products in both industries. Available indicators on production, employment, and trade show that both of these industries have generally rebounded since the 2001 recession and continue to grow. Traditionally, the U.S. 
economy has had several advantages that fostered strong semiconductor and software industries, including its highly competitive university system, talented labor pool, large domestic market for products, high levels of spending on R&D, and competitive business environment. Despite having offshored some semiconductor operations, the U.S. semiconductor industry remains a global leader in cutting-edge semiconductor chip design and fabrication. U.S. semiconductor production has begun to rise again after a sharp decline during the 2001 recession. However, U.S. semiconductor employment, which also fell during this period, has remained relatively flat since 2003. U.S. exports have also remained flat, but imports declined more sharply, creating a U.S. trade surplus in semiconductors. The United States generally exports high-value fabricated chips and wafers to lower-cost locations for assembly and testing. It imports integrated circuits (semiconductor wafers that have been assembled and tested) for use in a variety of industries. However, global demand for finished semiconductors has increasingly shifted to Asia, where final assembly of electronic consumer products takes place. Semiconductor fabrication and design capabilities are spread among traditional producers, such as the United States, Japan, and the European Union, and newer producers, such as South Korea, Taiwan, and China. According to industry experts and data, however, the United States remains one of the largest producers of semiconductors and, in particular, maintains cutting-edge development of both design and fabrication of new semiconductors. Industry estimates of semiconductor capacity vary, but the United States and Japan remain the two largest producers of semiconductors. Although a significant share of new high-end fabrication facilities is being built outside the United States for mass production, the United States is a key location for the fabrication facilities used for development of new semiconductor chips. Because the industry is global, U.S. production includes both U.S. companies and affiliates of foreign companies operating in the United States. Foreign companies have established operations in the United States to take advantage of U.S. technology, skilled labor, and the large domestic market, according to industry experts. One estimate suggests that about one-fifth of U.S.-based fabrication capacity was owned by foreign companies in 2001. Foreign companies also take advantage of experienced design teams in the United States. Companies can potentially benefit from having operations in key areas around the globe where innovation is occurring. These operations are able to access the experienced labor pool and new innovations occurring in a particular region and transfer those developments to their global operations. Silicon Valley, California, for instance, is widely known as a key center of innovation in the semiconductor industry. Similarly, U.S. firms have invested in production capacity in Europe and Asia. However, according to industry experts, U.S. firms have generally not moved their R&D operations offshore. Data on patents and expenditures on R&D also indicate that U.S. semiconductor companies continue to locate their R&D work in the United States. Some industry analysts, though, are concerned that as production increasingly moves offshore to Taiwan and China, it will begin to draw more and more research activities with it. Industry experts also believe that most U.S.
company design work is still conducted in the United States rather than offshore. According to these experts, U.S. companies are significant technology leaders in both the IDM and fabless design models. Although U.S. IDMs and fabless design companies operate globally, a larger share of their R&D and design work is conducted in the United States. Most of the fabless design firms are based in the United States, and many of the largest IDMs are also U.S.-based. Also, the development of foundries, particularly in Taiwan, likely allowed a wider range of fabless companies to develop in the United States than may have been possible otherwise. This is because the high cost of fabrication plants acts as an entry barrier to smaller firms. At the same time, there are a growing number of fabless design firms in Canada, Israel, and Taiwan, and U.S. companies are also operating design offices in these countries. Thus, the global share of design work by fabless companies is becoming less concentrated in the United States. U.S. production statistics show that the value of semiconductor production in the United States grew steadily during the 1990s even while offshoring expanded. U.S. production of semiconductors and related devices (measured by value-added) peaked in 1999 at about $68 billion, then declined steeply during the 2001 recession. It has since rebounded somewhat to $56 billion in 2004 (see fig. 3). U.S. employment in the semiconductor industry did not rebound after the 2001 recession as production did. After a long decline from the mid-1980s through the early 1990s, U.S. semiconductor employment grew strongly through 2001 (see fig. 4). However, employment dropped sharply from a peak of about 292,000 in 2001 to around 226,000 employees in 2003. After hitting a trough in 2003, employment in the semiconductor industry has been stagnant, although overall U.S. employment across all industries resumed growth in 2004. Employment in the semiconductor industry highlights the broader relationship between productivity growth and job declines in the U.S. manufacturing sector. Figure 5 shows an increase in productivity in the semiconductor and electronic components industry (a broader category than used in fig. 4) over the 15-year period beginning in 1987. The pace of productivity growth sharply increased starting in the late 1990s. Because output equals employment multiplied by output per employee, industry output continued to grow even as employment declined: productivity gains more than offset the smaller workforce. Since 2001, the United States has had a trade surplus in semiconductors, exporting more semiconductors and semiconductor components than it imported (see fig. 6). Both imports and exports grew rapidly from 1985 to 1995. From 1995 to 1998, exports continued to grow while imports remained flat. From 1998 to 2000, both imports and exports increased again rapidly, peaking in 2000 at about $48 billion (imports) and $45 billion (exports). From 2001 to 2005, imports declined sharply to about $26 billion, while exports also declined but leveled out in 2003 at about $34 billion. The majority of U.S. exports of semiconductors consist of chips and wafers, which are used to produce finished integrated circuits in other countries. The top five destinations for U.S. semiconductor exports were all Asian locations: Malaysia (13 percent), Korea (12 percent), the Philippines (11 percent), Taiwan (9 percent), and China (8 percent). Exports of U.S.
chips and wafers are the result of the fabrication process, which involves some of the most technologically advanced manufacturing processes. The majority of U.S. imports of semiconductors are finished integrated circuits (such as memory and logic integrated circuits), which are then used in other finished electronic goods, such as computers and cell phones. Finished integrated circuits are the result of chips and wafers being tested, cut, and packaged by separate manufacturing plants usually located abroad. This process, although technologically sophisticated (and less labor-intensive than in the past), is still significantly less advanced than the fabrication process. In 2005, only 13 percent of U.S. imports were chips and wafers, whereas chips and wafers made up 71 percent of U.S. exports. The decline in U.S. semiconductor imports since 2000 reflects the movement from the United States to Asia of manufacturing production of electronics products that use integrated circuits. Finished integrated circuits are moving to other countries in Asia, particularly China, for assembly into electronics products, rather than returning to the United States. As a result, U.S. exports surpassed imports for the first time in 2001. Chinese trade statistics demonstrate the other end of this movement, with Chinese imports of integrated circuits soaring over the last 10 years, making China one of the largest markets for integrated circuits in the world. Much of this increase has been supplied by Taiwan, Korea, Malaysia, Japan, the Philippines, and the United States (see fig. 7). Although the United States is only the sixth-largest direct exporter to China, some portion of U.S. exports of chips and wafers passes through other Asian countries for assembly and testing (including China) before use in China's booming electronics industry. As mentioned above, the top destinations for U.S. wafer exports are Malaysia, Korea, Taiwan, the Philippines, and China. Those wafers are assembled and tested before being sent to electronics manufacturers for use in their products. These trade flows show the complex production chains that have developed across multiple countries. The shift in production and trade flows toward Asia has two consequences. First, because final production increasingly takes place in Asia, the United States imports an increasing share of electronics and telecommunications products (that use semiconductors). Appendix III shows that this is reflected in the growing U.S. trade deficit with Asia, and with China in particular, including in advanced technology products. Second, as electronics and telecommunications production chains increasingly locate in Asia, there are benefits for U.S. semiconductor producers in locating abroad near their customers and taking advantage of the production clusters developing there. Therefore, this trend creates an incentive for U.S. companies to offshore some activities. Although the industry is globalizing, the United States has maintained its leadership in the development and expansion of the software services industry. U.S. companies are global leaders in the packaged software and custom software services segments of the industry. Although statistics on software services are more limited than those for semiconductor manufacturing, indicators show that the United States is a leading developer and consumer of software globally. U.S. production and employment data show that the industry has generally rebounded after declining during the 2001 recession.
Also, while both imports and exports have grown rapidly, the United States maintains a trade surplus in software services. The U.S. software industry is the largest in the world and plays a leadership role in the global market for software services. U.S. companies are disproportionately ranked among the largest in the world, both in terms of revenues and numbers of top firms. U.S. companies also benefit from the large U.S. domestic market, which by one industry estimate accounts for about 50 percent of global demand for packaged software and about 40 percent of global demand for custom software services. U.S. software companies are also widely considered leaders in the development and delivery of leading-edge software services. According to industry experts, much of the development of these services takes place in the United States, although larger companies also employ teams of developers worldwide. Although the industry experienced a downturn during the 2001 recession, it has since begun to recover. As figure 8 shows, the U.S. software industry grew rapidly through the late 1990s, declined during the 2001 recession and, as of 2004, had rebounded to its 2000 peak based on industry revenue. Packaged software appears to be leading the rebound, while custom software revenues have remained flat since 2002. U.S. software industry employment is the largest in the world. According to one industry estimate, U.S. software employment makes up roughly half of the global workforce in packaged software and about a third of the workforce employed in the IT services industry, which includes custom software services. As a group, software occupations, or computer specialists as designated by the Department of Labor's Bureau of Labor Statistics (BLS), experienced relatively large gains in both employment and hourly wages from 2001 to May 2005 (the most recent time period for which comparable occupation-based data are available). This period largely coincided with an economic recovery following the 2001 recession. Table 2 compares changes in employment and hourly wages for nine computer specialist occupations with those for all U.S. occupations. Seven of the occupations saw employment growth ranging from 1.1 percent to 46.9 percent, compared with 1.8 percent for all U.S. occupations. Employment for two occupations (computer programmers and database administrators) declined by 22.4 percent and 4.7 percent, respectively, from 2001 to May 2005. The wages for these occupations also increased more slowly than the wages for all U.S. occupations. Hourly wages for five occupations increased more slowly than the wages for all U.S. occupations, increasing by 3.5 percent to 10.5 percent compared with 11.4 percent for all U.S. occupations. Wages for four occupations, however, increased faster than the wages for all U.S. occupations, rising by 12 percent to 22.2 percent. Computer software engineers (including systems software and applications engineers, two high-wage occupations) saw modest increases in wages but relatively large increases in employment, growing by 22.6 and 26.1 percent, respectively. Computer software engineers design, develop, and test software and computer systems, applying computer science, mathematics, and engineering expertise. The integration of Internet technologies and the rapid growth in e-commerce have led to a rising demand for computer software engineers.
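The occupational comparisons above are straightforward percent changes between the 2001 and May 2005 estimates. The short Python sketch below shows that arithmetic; the employment and wage levels used are hypothetical placeholders rather than actual BLS figures, and the function name is ours.

```python
# Illustrative only: the percent-change arithmetic behind the occupation comparisons.
# The levels below are hypothetical placeholders, not actual BLS estimates.

def percent_change(start: float, end: float) -> float:
    """Percent change from a start value to an end value."""
    return (end - start) / start * 100.0

# Hypothetical employment levels (in thousands) for one occupation, 2001 vs. May 2005.
employment_2001 = 530.0
employment_2005 = 411.0

# Hypothetical mean hourly wages for the same occupation.
wage_2001 = 29.50
wage_2005 = 31.20

print(f"Employment change: {percent_change(employment_2001, employment_2005):+.1f} percent")
print(f"Hourly wage change: {percent_change(wage_2001, wage_2005):+.1f} percent")
```

Because each change is measured relative to the 2001 base, the same absolute decline represents a larger percent change for a smaller occupation.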
Although hourly wages of network systems and data communications analysts increased by a relatively low 7.7 percent, their job growth was the largest of all computer specialist occupations at 46.9 percent. This group of computer specialists designs, tests, and evaluates network systems and other data communications systems. According to BLS, employment in computer specialist occupations, apart from computer programmers, is projected to grow much faster than overall U.S. employment. Although total U.S. employment is projected to grow 13 percent over the 2004 to 2014 period, employment of computer specialists is projected to grow 31.4 percent (see fig. 9). BLS projects that the demand for computer-related jobs is likely to increase as employers continue to adopt and integrate increasingly sophisticated and complex technologies. Growth, however, will not be as fast as in the previous decade, as the software industry matures and as routine work is increasingly offshored. Projected job growth for computer software engineers and network systems and data communications analysts is especially robust. BLS's Occupational Outlook Handbook suggests that demand for workers with specialized technological skills is expected to increase sharply as employers use and improve the efficiency of new technologies. As the race for increasingly sophisticated technological innovations continues, so will the need for more highly skilled workers to implement these innovations. More highly skilled computer specialists will be needed as businesses and other organizations try to manage, upgrade, and customize their increasingly complicated computer systems. Computer specialists who have a combination of strong technical and good interpersonal and business skills will be in demand. Unlike employment of other computer specialists, employment of computer programmers is expected to lag significantly behind the growth in overall U.S. occupations. Programmer employment is projected to grow by only 2 percent from 2004 to 2014. Because computer programming requires little localized or specialized knowledge, it can be performed anywhere in the world and transmitted electronically. Consequently, programmers potentially face a higher risk of having their jobs offshored than other computer specialists such as software engineers, who are involved in more complex information technology functions. Another factor limiting job growth in computer programming is progress in programming technology. Computer software has become increasingly sophisticated, enabling users to write basic code for routine tasks without programmers' involvement. The United States is a net exporter of software services and has maintained this trade surplus for several decades. Although U.S. exports are rising rapidly, imports are also increasing in this category. Canada is the largest supplier of imported computer and data processing services to the U.S. market but, as we have previously reported, India is rapidly growing as a supplier of these services. Figure 10 shows U.S. exports and imports of computer and data processing services, the category that includes both custom and packaged software services (as defined by BEA), since 1986. U.S. exports of software services make up about 13 percent of overall U.S. software revenues, according to the U.S. Census Bureau (Census). However, most export revenue is derived from packaged software exports. These Census statistics show a much larger value of exports than the BEA trade in services statistics.
As shown in figure 11, U.S. companies report nearly $22 billion in exports of software services, primarily comprising about $20 billion in U.S. packaged software exports. Information on trade in software services is significantly more limited than information on trade in semiconductors. Although both BEA and Census collect statistics on software trade, as demonstrated by the previous two figures, the data are available only for the aggregate categories shown. In comparison, for semiconductors, over 230 individual semiconductor goods are identified by Census as they cross international borders. In addition, most countries in the world utilize the same goods classification system, known as the Harmonized System, to record trade in goods. However, efforts to create and utilize detailed and compatible classification systems across countries for services such as software are still relatively new. Part of the challenge in collecting detailed statistics on services industries, such as software, derives from the "intangible" nature of many services—they are not necessarily physical products—and the fact that they do not cross customs borders like goods. Rather, services data are collected by surveying companies for information on their payments or receipts for services. In addition, services can be delivered to the customer through many different channels, including licensing agreements, embedding in goods such as computers, or a commercial presence such as a foreign subsidiary. The United States maintains substantial advantages as a large, technologically sophisticated economy. The U.S. high-technology industries, such as semiconductors and software, have benefited from a U.S. economic environment that supports innovation—world-class universities and research centers, a talented labor pool, and high levels of spending on R&D. The industries also benefited from a competitive U.S. business environment, an efficient legal system for contracts and intellectual property protection, and a large domestic market. Although a wide range of causes and circumstances leads to new innovations, certain enabling factors create an environment that fosters new ideas and their development. These include such factors as the higher education system and related research centers, the pools of talent available, and investments in research and development. The world-class U.S. higher education system and research institutes create communities for researchers and educators and are widely considered a key competitive advantage. The higher education system in the United States includes many universities that are ranked among the best in the world in terms of research, education, and entrepreneurship. Also, a large number of top applicants from around the world apply for undergraduate, graduate, and postdoctoral study. More specifically, U.S. computer science and engineering programs—of particular importance to high-technology industries such as semiconductors and software—are leaders in their fields. The higher education system has provided both a strong research environment and a pool of talented labor—both native-born workers and foreign students who remained after completing their studies. A second factor that fosters innovation is the quality and number of available researchers and other skilled labor. Countries with larger and more talented labor pools are more likely to foster and sustain innovation. The United States has a world-class talent pool that includes both technical and managerial talent.
The United States has the largest number of researchers worldwide, with about 1.3 million, followed closely by the European Union (EU-25), according to data from the Organization for Economic Cooperation and Development. China, ranked third, has rapidly increased the number of its researchers to surpass Japan. Although the quality of these researchers is not captured by the indicator, it does show the growing size of the Chinese research community. A third factor that fosters innovation is a country's investment in research and development. This investment may come from several sources, including the government, academia, and business. U.S. expenditures on R&D are the largest in the world and have continued to grow over time (see fig. 12). Currently, the United States spends about 2.7 percent of its gross domestic product on R&D, compared with about 3.2 percent for Japan and 1.4 percent for China. For certain industries such as semiconductors, early investments by the federal government—the military, in particular—were key to the initial development of the industry. However, this role may change over time. For the United States, the increase in R&D expenditures over the past decade has been driven by the business community, while the total amount of federal R&D has grown much more slowly in comparison. While the United States has generally maintained a strong advantage in areas that foster innovation, several studies have recently raised questions about the continued dominance of the United States in cutting-edge innovation. They cite a range of factors that indicate the rise of other competitors in traditionally U.S.-dominated areas. For instance, changes in U.S. visa and immigration requirements have been cited as limiting the number of foreign students, researchers, and high-tech workers who are attracted to the United States and allowed to reside here. At the same time, other countries' university systems are increasingly competing with the United States to attract the most qualified students and researchers. According to these studies, these changes have led to a decline in the number of university applications from foreign students. Similarly, other countries have liberalized their economies and provided greater opportunities for higher skilled workers. Therefore, more students and researchers, including those from India and China, who may have once stayed in the United States have an incentive to return to their native countries. In addition to an environment for fostering innovation, countries need to be able to commercialize these innovations to affect the wider economy. Several factors contribute to a U.S. competitive environment that encourages innovation to be commercialized. First, the business environment includes relatively competitive product markets that encourage businesses to take new products to market in order to gain advantage over rivals, while also allowing new entrants to challenge existing companies. The United States also has a relatively efficient financial system, including venture capital markets that fund new innovations and start-ups in high-technology industries. The U.S. legal and regulatory environment, including its intellectual property protections (such as patents), allows individuals and companies to be rewarded for their investment in innovation. Finally, the large U.S. domestic market provides an avenue for companies to sell new products to a wide range of sophisticated customers. The U.S.
economy is by far the largest in the world, and per capita income is also one of the highest in the world. This creates an environment for U.S. companies to develop and sell new products profitably. In addition, companies that are close to their customers are able to spot new trends and preferences in demand and cater to them. This is particularly true in high-technology industries in which the product life cycle is relatively short and profit margin for older products declines quickly. The past decade’s revolution in telecommunications and related advances in supply chain management capabilities have deeply affected the business models for both the semiconductor manufacturing and software services industries. These industries’ overall business model is now a global one, in which U.S. firms regularly consider a wide range of locations for their operations and source different parts of their operations wherever the advantages are most compelling. For the semiconductor industry, firms initially offshored labor-intensive assembly activities to cut labor costs, but more recently firms have offshored other activities for various reasons, including proximity to other industry suppliers, closer relations with foreign customers, benefits offered by foreign governments, and the availability of both skilled and unskilled human capital. In the software industry, the offshoring trend is more recent, but the motivations are similar. For software services, however, an important difference may be the possible speed and scale of employment shifts. Software services offshoring, compared with semiconductor manufacturing offshoring, does not need the same physical infrastructure, such as ports, roads, and factories, and thus can be set up more quickly. It is more labor intensive than capital intensive, and thus may be more sensitive to wage differentials. In addition, service occupations related to software programming are large in comparison to manufacturing jobs in the semiconductor industry. In semiconductor manufacturing, there was relatively slow movement up the value chain as firms invested in the overseas workforce and factory facilities. India’s software industry development has advanced more quickly, with rapid technological changes bringing large numbers of highly educated, but underused, English-speaking workers to the doorstep of firms willing to operate from India. The data available to monitor the scale of services offshoring, unfortunately, are much more limited than those available for following trade in manufactured products. Semiconductor products, for example, can be identified and inspected at U.S. borders, whereas software imports and exports can be transmitted almost instantaneously over the Internet. Government policies also played important, but different, roles in Taiwan, China, and India; however, all three governments have placed high importance on education. In recent years, China has been transforming large parts of its coastal cities through massive infrastructure investments and has provided more targeted inducements for firms, such as support for science and technology parks and various types of financial assistance. India liberalized parts of its central government apparatus in the early 1990s, but its investment in physical infrastructure such as roads and ports has been much more limited, although India has also supported its science parks and put in place advanced telecommunications infrastructure improvements. 
These incentives for software exporters appear to have been well targeted. The comparison of these two offshoring experiences offers some insights for U.S. policies. Clearly, a large and well-educated population appears to be a central element of success in both semiconductor manufacturing and software services activities. Also, technological changes have impacts that are not always predictable and, in a now closely connected global business world, such changes can have continuing dynamic effects on U.S. industries. India may not have fully predicted or planned its current strengths in software services, nor foreseen how its pool of native English speakers could be such an asset, but it now realizes the importance of enhancing its strengths in these areas. In addition, ambitious national goals—whether China's semiconductor development road maps or Indian businesses' long-term strategies—are additional elements in the mix of factors that will shape these countries' futures and will pose competitive challenges to U.S. firms. As numerous recent studies have reported, the ability of the United States to continue to compete at the most advanced levels in high-technology industries depends on a range of reinforcing factors: high-level R&D investment by companies and government, innovative academic environments attracting and training the highest-skilled researchers, a competitive business environment that fosters development and commercial application of new technologies, and a flexible and skilled workforce. These factors are being nourished in China, Taiwan, and India, as these countries seek to move further up the value chain and to "leapfrog" advanced country capabilities where possible. Indeed, these countries have modeled their industry development strategies on various aspects of the United States' successful model. The United States is an integral part of this dynamic world economy—in which it will be important for U.S. businesses and policymakers to keep alert to technological changes, to anticipate competitor countries' strategies, and to preserve and enhance the elements of the innovation environment that helped make the United States a model. We provided a draft of this report to the Departments of State and Commerce for their review and comment. The Department of State did not provide comments. We received written comments from the Department of Commerce, which agreed with our findings. (See app. IV.) The Department of Commerce also provided technical comments, which we incorporated into the report, as appropriate. We are sending copies of this report to interested congressional committees and the Departments of State and Commerce. We also will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or yagerl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. This report discusses (1) the development of offshoring in semiconductor manufacturing and software services over time, (2) the factors enabling the expansion of offshoring in these industries, and (3) the development of these industries in the United States as they have become more global.
To obtain information about the key developments in the offshoring of semiconductor manufacturing and software services, we reviewed available literature; attended conferences on the subject; and interviewed government officials, representatives of private firms, industry associations, and research organizations in China, India, Taiwan, and the United States. We performed a literature search and obtained information from several research organizations, universities, and industry associations that have published industrywide studies on offshoring and the key developments in both the semiconductor manufacturing and software services industries, including the Association for Computing Machinery; Brookings Institution; Gartner, Inc.; McKinsey and Company; the University of California, Berkeley; Stanford University; Carnegie Mellon University; the Semiconductor Industry Association; and the Information Technology Association of America. We attended conferences on developments in the semiconductor and software services industries and the general offshoring phenomenon. We interviewed researchers at private research organizations, industry experts at the U.S. Department of Commerce and the U.S. International Trade Commission, and government officials from India and Taiwan. In addition, we met representatives of private sector firms in the semiconductor and software services industries in China, India, Taiwan, and the United States. We also interviewed representatives and obtained data from organizations representing semiconductor and software services firms and workers, including the Semiconductor Industry Association, the National Association of Software and Service Companies, and the Information Technology Association of America. We discussed with these sources the historical changes that have occurred broadly in the computer hardware industry, particularly with respect to China and Taiwan, and the software services industry, particularly with regard to India. To determine the factors that have contributed to offshoring in semiconductor manufacturing and software services, we conducted a review of available literature and interviewed representatives of private sector firms, semiconductor and software services industry associations, business associations, and research organizations (see above). In addition, we interviewed industry experts within the U.S. government and the governments of India and Taiwan. We met with and reviewed relevant literature from researchers who have published on the offshoring phenomenon and the factors contributing to global developments in semiconductor manufacturing and software services; including experts from the Brookings Institution; the Institute for International Economics; the Milken Institute; and the University of California, Berkeley. We interviewed representatives of private sector firms in China, India, Taiwan, and the United States that have globally sourced semiconductor manufacturing and software services; trade and industry experts in the U.S. Department of Commerce; and the governments of India and Taiwan. In addition, we interviewed representatives of business and industry associations, such as the Federation of Indian Chambers of Commerce and Industry, the U.S.-Taiwan Business Council, and the Semiconductor Industry Association. 
To determine developments in the semiconductor and software services industries in the United States as they have become more global, we examined available government data, information from experts in both the semiconductor and software services industries, and other private sector research. We obtained U.S. international trade data from the Bureau of Economic Analysis (BEA) and the U.S. Census Bureau. We also obtained foreign countries' international trade data through the United Nations and a private company, Global Trade Information Services. We obtained foreign direct investment data from BEA and domestic production data from Census. To assess the limitations and the reliability of various data series, we reviewed technical notes and related documentation and met with officials from BEA and Census, as well as individuals in the private sector familiar with these data. In addition, we reviewed relevant research studies and obtained data from several private sector entities. Although we do not report these data directly, we used them to corroborate information from other sources. To determine employment trends in the semiconductor and software services industries, we analyzed available U.S. government employment data from the Bureau of Labor Statistics (BLS). We cross-checked various employment data and reviewed technical notes in BLS publications to assess the limitations and reliability of these data. We also discussed the limitations and reliability of BLS data with BLS officials. We determined that the data we used in this report to show the development and trends in the semiconductor and software industries were sufficiently reliable for these purposes. We conducted our review from October 2005 through August 2006 in accordance with generally accepted government auditing standards. U.S. multinational companies' worldwide investments and operations (including production, employment, and research and development (R&D)) have played an important role in the globalization of the semiconductor and software industries. U.S. statistics show that overall multinational corporation (MNC) investments have still tended to be in developed economies, rather than in developing economies such as India and China. However, certain manufacturing sectors such as the computer and electronic products industry (including semiconductors) have a relatively higher share of investment, production, and employment in developing countries. In particular, U.S. companies' investments and production in this industry are relatively higher in the Asia-Pacific region (particularly Singapore) than those of other industries. Employment is even more concentrated abroad—likely due to the movement of more labor-intensive production operations overseas in order to reduce costs. Conversely, research and development expenditures are much more concentrated in the United States than they are in foreign affiliates. U.S. direct investment abroad statistics show that overall U.S. investment (across all industries) in developing country markets is still a relatively small share of total U.S. direct investment abroad (less than 1 percent of the total each for India, China, and other developing countries, except Mexico and Brazil), according to statistics from the Bureau of Economic Analysis (BEA). However, within the computer and electronic products industry (which includes semiconductors), Singapore was the most significant Asia-Pacific country, accounting for 15 percent of U.S. global investment in that industry as of 2004.
Malaysia and Japan were next with about 5 percent, followed by Korea (4 percent); Taiwan (3 percent); and China, Hong Kong, and the Philippines (2 percent each). Figure 13 shows the value of U.S. foreign direct investment (FDI) from 1999 to 2004 in this industry for selected Asian countries. As figure 13 shows, Singapore accounted for $8.8 billion in U.S. FDI in 2004 (down from $13.5 billion in 2001), or about 15 percent of the global total in this industry. Interestingly, the value of U.S. FDI in China in this sector has fallen since 2001—more significantly than for other countries, except Singapore. These data represent the accumulated investments (stock) made by U.S. companies in the computer and electronic products industry. As discussed in this report, U.S. companies moved labor-intensive assembly and testing operations overseas over the past several decades. Also, U.S. exports of semiconductor wafers were largest to Malaysia, Korea, Taiwan, the Philippines, and China. This reflects the production process in which fabricated wafers are sent overseas for final assembly and test by U.S. companies' affiliates (as well as unaffiliated contractors). Within the semiconductor industry, the majority of U.S. companies' global production (as measured by value-added) remained in the United States, although the share declined during the recent recession. As figure 14 shows, semiconductor value-added by U.S. parents (U.S. operations) took a steep decline in 2001, remained flat in 2002, and rebounded somewhat in 2003. Value-added by U.S. companies' affiliates abroad accounted for about 28 percent of U.S. MNCs' global production, while the Asia-Pacific region (excluding Japan and Australia), in particular, accounted for about 9 percent of global production. U.S. MNCs that operate affiliates offshore have split their employment roughly evenly between their U.S. operations and their foreign affiliates. According to data from BEA, about 53 percent of MNCs' global semiconductor employment was located in offshore affiliates in 2003, up from 49 percent in 1999. As previously discussed, this reflects the trend, begun in the 1960s, of U.S. companies offshoring much of their labor-intensive assembly and testing operations to lower-wage countries, particularly in Asia. BEA statistics also show that a relatively higher share of U.S. employment in semiconductor manufacturing is concentrated in Asia compared with other industries. Similarly, U.S. MNCs in computer and electronic product manufacturing industries (of which semiconductors is a part), in general, had relatively higher shares of their global employment located abroad (about 38 percent) than other information and communications technology industries such as computer system design and related services (35 percent), as well as across all industries (28 percent) in 2003. Employment statistics from the Semiconductor Industry Association (SIA) show a similar pattern for U.S.-based companies. According to SIA, about 54 percent of U.S. companies' semiconductor employment was located in North America (mainly the United States) in 2004. This is down from a peak of about 60 percent in 1998 but still higher than in the 1980s and 1990s, when the share was between 45 and 50 percent. In addition, about 28 percent of U.S. companies' North American workforce was engaged in R&D in 2004. According to industry experts, a much higher share of U.S. companies' R&D employment is based in the United States, rather than offshore. As discussed above, U.S.
direct investment abroad statistics show that overall investment (across all industries) in developing country markets is still a relatively small share of total U.S. direct investment abroad. This is also generally true in services industries that include software services. For example, U.S. direct investment in India accounts for less than 1 percent of global U.S. investment in both the information sector and the professional, scientific, and technical services sector. However, investment in Ireland in the information sector accounted for 30 percent of global U.S. direct investment abroad in that sector in 2004. Over time, Ireland has attracted investment by a large number of U.S. companies to produce software for the European Union market. Similarly, U.S. multinational companies' operations abroad (including employment) in software services are relatively small compared with the semiconductor industry and the broader electronics hardware industry. For example, table 3 shows that, for semiconductors, over half of U.S. MNCs' employment was located in their foreign affiliates (rather than in their domestically based parent companies). In contrast, services industries such as publishing (which includes packaged software) and computer systems design and related services (which includes custom software) had between one-fifth and one-third of their employment located in their foreign affiliates. Compared with production or employment, U.S. MNC R&D expenditures are more concentrated in the United States. As shown in table 4, in 2003 U.S. majority-owned foreign affiliates (MOFAs) accounted for about 14 percent of total U.S. MNC R&D expenditures (U.S. parents plus MOFAs). The share was similar for the computer and electronic products industry (about 13 percent) and publishing industries (about 10 percent) but lower for semiconductors (8 percent), computer systems design and related services (about 5 percent), and information services and data processing services (1 percent). In comparison, MOFAs accounted for about 26 percent of value-added for all industries, 24 percent for computer and electronic products, and 28 percent for semiconductors. Likewise, MOFAs accounted for 28 percent of employment across all industries, 38 percent for computer and electronic products, and 53 percent for semiconductors. Across industries, MNCs spent about 22 percent of MOFA R&D expenditures in the computer and electronic products industry (5 percent in semiconductors alone), making it the third-largest industry overall in 2003. Other information and communications technology (ICT) sectors represented very small shares (see table 5). Across major industries, transportation equipment manufacturing accounted for 29 percent of total MOFA R&D expenditures (26 percent of that was autos). The next largest was chemicals with 25 percent of R&D expenditures (of which 21 percent was pharmaceuticals). Asia-Pacific economies account for a relatively small share of U.S. MNCs' R&D expenditures. Except for Japan (7 percent overall and 15 percent in information), Singapore (10 percent in computer and electronic products), and Malaysia (5 percent in computer and electronic products), these countries each accounted for 3 percent or less of MOFA expenditures in ICT-related industries (see table 6). China accounts for about 3 percent of manufacturing, but details are not available for computers and electronic products.
India accounts for less than 1 percent of R&D expenditures across most industries (note that in the computers and electronic products and professional, technical, and scientific industries, where amounts were suppressed in 2003 for India, prior years also showed less than 1 percent). Since 1989, Commerce's Bureau of the Census (Census) has identified products that use leading-edge technologies or innovations. Commerce classifies these goods as Advanced Technology Products (ATP). Currently, Census identifies about 500 of some 22,000 10-digit commodity U.S. merchandise trade classification codes as ATP codes because they meet the following criteria: (1) the code contains products from 1 of 10 recognized high-technology fields such as electronics (which includes semiconductors) and information and communications (which includes notebook computers and cell phones), (2) these products represent leading-edge technology in that field, and (3) these products constitute a significant part of all items in the selected classification code. Partly as a consequence of the growing movement of electronics assembly to Asia, and to China in particular, in 2005 the U.S. trade deficit with China in the ATP information and communications group, $51.5 billion, was slightly larger than the overall ATP deficit with China, $48.4 billion, and equal to about 25 percent of the overall goods deficit, $203.8 billion; all of these deficits have grown dramatically in recent years. Finished products, such as notebook computers and cell phones, were the largest U.S. information and communications ATP imports from China in 2005. Computer parts and accessories were the leading U.S. exports to China in this group. U.S. exports, imports, and the trade balance with China in this group are depicted in figure 15. This figure shows both the rapid growth in imports of these products from China, as well as the rising trade deficit. In contrast, in the ATP electronics group, beginning in 2001, the United States has had a trade surplus with China, largely due to the substantial exports of semiconductor wafers and integrated circuits to China. (See fig. 16.) However, this surplus, which was about $1 billion in 2003, has declined somewhat in recent years. This trade surplus is partly a result of slower-growing U.S. demand for finished integrated circuits by downstream manufacturers of consumer electronics, as discussed previously. The overall ATP trade deficit with China (as well as with Asia overall) is largely due to information and communications imports. However, trade statistics rarely separate out the value of imported components embodied in finished products. Therefore, some part of the value of U.S. imports of information and communications products from China is attributable to U.S. exports of chips and wafers (and other ATP components) directly to China or indirectly through other Asian countries. However, for a product to be classified as leading edge, Census must judge that the product itself uses leading-edge technology, not simply some of its components. For example, although autos have many leading-edge components such as semiconductors and integrated circuits, autos are not classified as leading-edge products. In addition to the individual named above, Virginia Hughes, Assistant Director; Bradley Hunt; Ernie Jackson; Sona Kalapura; Judith Knepper, Analyst-in-Charge; Lynn Cothern; Yesook Merrill; Berel Spivack; and Tim Wedding made major contributions to this report.
Offshoring in Six Human Services Programs: Offshoring Occurs in Most States, Primarily in Customer Service and Software Development. GAO-06-342. Washington, D.C.: Mar. 28, 2006. Offshoring of Services: An Overview of the Issues. GAO-06-5. Washington, D.C.: Nov. 28, 2005. International Trade: U.S. and India Data on Offshoring Show Significant Differences. GAO-06-116. Washington, D.C.: Oct. 27, 2005. International Trade: Current Government Data Provide Limited Insight into Offshoring of Services. GAO-04-932. Washington, D.C.: Sept. 22, 2004. Highlights of a GAO Forum: Workforce Challenges and Opportunities for the 21st Century: Changing Labor Force Dynamics and the Role of Government Policies. GAO-04-845SP. Washington, D.C.: June 1, 2004. China Trade: U.S. Exports, Investment, Affiliate Sales Rising, but Export Share Falling. GAO-06-162. Washington, D.C.: Dec. 9, 2005. U.S.-China Trade: Opportunities to Improve U.S. Government Efforts to Ensure Open and Fair Markets. GAO-05-554T. Washington, D.C.: Apr. 14, 2005. U.S.-China Trade: Observations on Ensuring China's Compliance with World Trade Organization Commitments. GAO-05-295T. Washington, D.C.: Feb. 4, 2005. U.S.-China Trade: Opportunities to Improve U.S. Government Efforts to Ensure China's Compliance with World Trade Organization Commitments. GAO-05-53. Washington, D.C.: Oct. 6, 2004. World Trade Organization: U.S. Companies' Views on China's Implementation of Its Commitments. GAO-04-508. Washington, D.C.: Mar. 24, 2004. Export Controls: Rapid Advances in China's Semiconductor Industry Underscore Need for Fundamental U.S. Policy Review. GAO-02-620. Washington, D.C.: Apr. 19, 2002. Export Controls: System for Controlling Exports of High Performance Computing Is Ineffective. GAO-01-10. Washington, D.C.: Dec. 18, 2000. Federal Research: SEMATECH's Technological Progress and Proposed R&D Program. RCED-92-223BR. Washington, D.C.: July 16, 1992. Federal Research: SEMATECH's Efforts to Strengthen the U.S. Semiconductor Industry. RCED-90-236. Washington, D.C.: Sept. 13, 1990.
Much attention has focused on offshoring of information technology (IT) services overseas. "Offshoring" of services generally refers to an organization's purchase from other countries of services such as software programming that it previously produced or purchased domestically. IT manufacturing, notably semiconductor manufacturing, has a longer history of offshoring of manufacturing operations. Under the Comptroller General's authority to conduct evaluations on his own initiative, GAO addressed the following questions: (1) How has offshoring in semiconductor manufacturing and software services developed over time? (2) What factors enabled the expansion of offshoring in these industries? (3) As these industries have become more global, what have been the trends in their U.S.-based activities? The U.S. semiconductor industry began offshoring labor-intensive manufacturing operations in the 1960s, followed in the 1970s and 1980s by increasingly complex operations, including wafer fabrication and some research and development (R&D) and design work. Semiconductor assembly and testing were the first operations to move to Asia, followed by fabrication and, more recently, by some design operations. Software services offshoring began in the 1990s after Internet communications made it possible to trade services such as software programming and software design. The year 2000 date changeover hastened this offshoring trend in software services because programmers knowledgeable in the appropriate programming languages were available, primarily in India. In the 2000s, firms further expanded their offshoring operations, encouraged by the low cost and high quality of the offshored services work undertaken in the late 1990s. Although lower labor cost was initially a key factor that attracted firms to offshore locations, other factors, such as technological advances, available skilled workers, and foreign government policy, also played roles. Technological advances helped firms in the semiconductor industry improve their management of global supply chains and logistics. Regarding software services, technological advances opened the way to trade in programming and other software services. Foreign government policies in Taiwan and China created favorable investment conditions for U.S. semiconductor firms. India changed its emphasis from state-owned enterprises in the 1970s to an environment more amenable to private enterprise by the mid-1980s. Although its restrictions on foreign investment constrained the software services industry's overall development, India established software technology parks in 1990 to give domestic firms preferential access to the infrastructure essential for offshored operations. Although offshoring continues to grow in both the semiconductor manufacturing and software services industries, the United States remains one of the largest and most advanced producers of semiconductors and software services. U.S. production data show that both industries have largely rebounded from the 2001 recession. Employment data show a mixed picture, with semiconductor employment remaining flat and software employment mostly recovering. The United States has global trade surpluses in the semiconductor and software services sectors, although production is increasingly shifting to Asia. Both U.S. industries have become global, sourcing components from many locations overseas. U.S. firms have offshored increasingly complex products, essentially moving up the value chain.
The ability of the United States to compete depends on research and development investment, innovative academic environments attracting top-quality students, and a competitive business environment. It will be important for U.S. businesses and policymakers to keep alert to technological changes and competitor countries' strategies while enhancing the elements of the innovation environment in the United States.
According to the Nutrition Business Journal, the dietary supplement industry is growing, and total sales were about $23.7 billion in 2007, as shown in figure 1. Top selling supplements in 2007 included multivitamins, sports nutrition powders and formulas, and calcium, according to the Nutrition Business Journal. In addition, one of the areas of greatest growth in supplements within the United States in 2007 was among weight loss products. Projections through 2011 show that growth in the industry is expected to continue, in large part because of the aging population and an increasing interest in personal health and wellness. Over time, several key events have shaped the regulation of dietary supplements, as shown in table 1. Significantly, Congress passed DSHEA, which amended the Federal Food, Drug, and Cosmetic Act and created a new regulatory category, safety standard, and other requirements for supplements. Under DSHEA, dietary supplements are generally presumed safe. With the exception of the banned dietary ingredient, ephedra, companies may sell otherwise lawful products containing any dietary ingredient that was marketed in the United States prior to October 15, 1994—referred to as “grandfathered ingredients”—without notifying FDA. Ingredients that were not marketed before this date are considered new dietary ingredients. A dietary supplement containing a new dietary ingredient must meet one of the two following requirements: (1) it contains only dietary ingredients that have been “present in the food supply as an article used for food in a form in which the food has not been chemically altered” or (2) there is evidence that the dietary ingredient is reasonably expected to be safe under the conditions of use recommended or suggested in the product’s labeling. In addition, companies planning to market a dietary supplement with a new dietary ingredient that only meets the second requirement must notify FDA of the evidence that is the basis of the determination at least 75 days before marketing the supplement. As of December 22, 2007, dietary supplement companies are required to submit any report received about a serious adverse event to FDA, as mandated by the Dietary Supplement and Nonprescription Drug Consumer Protection Act. In addition, companies can voluntarily submit reports about moderate and mild adverse events. Others, such as consumers and health care practitioners, can submit reports of serious, moderate, and mild adverse events on a voluntary basis to FDA. Prior to implementing the mandatory reporting requirements, FDA’s Center for Food Safety and Applied Nutrition—which, in part, is responsible for promoting and protecting the public’s health by ensuring that the nation’s food supply is safe, sanitary, wholesome, and honestly labeled—had a system in place to receive voluntary reports of adverse events involving dietary supplements from all parties. As stated in the Federal Food, Drug, and Cosmetic Act, FDA is also responsible for protecting the public health by ensuring that the labels of dietary supplements are not false or misleading. As noted in table 1, the Nutrition Labeling and Education Act of 1990 amended the Federal Food, Drug, and Cosmetic Act to require that most foods, including dietary supplements, bear nutrition labeling. In addition, DSHEA amended the Federal Food, Drug, and Cosmetic Act to add specific labeling requirements for dietary supplements and provided for optional labeling statements. 
Federal regulations require the following information on the labels of dietary supplements: (1) product identity (name of the dietary supplement), (2) net quantity of contents statement (amount of the dietary supplement in the package), (3) nutrition labeling, (4) ingredient list (when appropriate), and (5) name and place of business of the manufacturer, packer, or distributor. In addition, DSHEA specifies that supplements with labeling that makes disease or health-related claims must contain a disclaimer that FDA has not evaluated the claim and the product is not intended to diagnose, treat, cure, or prevent any disease. Similar to dietary supplements, the market for foods with added dietary ingredients has been growing, and this trend is expected to continue. Foods with added dietary ingredients vary greatly, including such products as orange juice with added calcium, pasta with Omega 3, and sunflower seeds with guarana. Terms such as “functional foods” and “nutraceuticals” are sometimes used to describe foods with added dietary ingredients. However, there are no regulatory definitions for these terms, and some of these terms include foods with naturally beneficial properties beyond nutrition, such as pomegranate juice. FDA has made several changes in response to the new serious adverse event reporting requirements established by law in 2006 and has subsequently received an increased number of reports. FDA has modified its existing data system and internal procedures for compiling, tracking, and reviewing adverse event reports to incorporate mandatory reporting by industry. Additionally, FDA has issued draft guidance and conducted outreach to industry regarding the new requirements. Since mandatory reporting went into effect on December 22, 2007, FDA has seen a threefold increase in the number of all adverse event reports received by the agency compared with the previous year. Although FDA received more reports overall since the reporting requirements went into effect, underreporting of adverse events remains a concern, and the agency has further actions planned to facilitate adverse event reporting by consumers, health care practitioners, and industry. In 2007, FDA took several actions in response to the new serious adverse event reporting requirements for dietary supplements. Specifically, FDA modified its existing database for compiling, tracking, and reviewing adverse event reports—the CFSAN Adverse Event Reporting System (CAERS)—to include data fields and instructions specifically for compiling and tracking mandatory reports. In addition, CFSAN established procedures for reviewing mandatory serious adverse event reports to determine if they meet the minimum data requirements for mandatory reports outlined in guidance to the industry. FDA has also issued draft guidance and conducted outreach to industry regarding the new reporting requirements. In October 2007, FDA provided companies with a form and instructions for submitting mandatory serious adverse event reports and issued draft guidance describing statutory requirements and agency recommendations for reporting, recordkeeping, and records access. Additionally, in December 2007, FDA issued draft guidance on labeling requirements. 
Statutory requirements outlined in draft guidance include the following: The manufacturer, packer, or distributor whose name appears on the dietary supplement label (responsible party) must report all serious adverse events to FDA, as well as follow up medical information received within 1 year after the initial report, within 15 business days of receipt. Mandatory serious adverse event reports must be submitted to FDA using the MedWatch 3500A form and should contain the following minimum data elements: an identifiable injured person, name of the person who first notified the responsible party, identity and contact information for the responsible party, a suspect dietary supplement, and a serious adverse event or fatal outcome. The responsible party must include a copy of the dietary supplement label related to the serious adverse event. The responsible party must maintain records of all adverse events reported for 6 years and must provide FDA officials with access to the records upon request during an inspection. Labels for dietary supplements marketed in the United States must provide a complete domestic mailing address or phone number where the responsible party may receive adverse event reports. In addition to these requirements, FDA recommended that firms include an introductory statement on dietary supplement labels to inform consumers that the contact information provided may be used to report a serious adverse event. According to comments submitted to FDA by the three major dietary supplement industry associations, although the industry broadly supports the new mandatory reporting requirements, it disagrees with the recommended labeling changes. These industry associations cite the following three key reasons for their opposition to FDA’s recommendation: (1) in their view, the changes are unnecessary and beyond Congress’ intent; (2) the introductory statement may draw undue attention to the possibility of an adverse event and confuse consumers; and (3) redesigning and replacing product labels is a substantial added expense for dietary supplement companies and should have been proposed through a formal rulemaking process rather than guidance. According to an FDA official, the draft guidance regarding reporting, recordkeeping, and records access requirements is close to being finalized. In December 2008, FDA issued a revision of the draft guidance regarding labeling changes. According to FDA, before this guidance is finalized, it will need to be reviewed by the Office of Management and Budget because of its potential economic impact on industry. FDA has also worked with industry associations to increase awareness of the new reporting requirements. For instance, FDA officials have spoken at industry-sponsored conferences and seminars to increase awareness and answer questions about the new reporting requirements. Representatives from two of the leading industry associations we spoke with stated that they were generally satisfied with FDA’s outreach efforts regarding mandatory reporting. Since mandatory reporting requirements went into effect, the agency has seen a threefold increase in the number of all adverse events reported compared with the previous year. For example, from January through October 2008, FDA received 948 adverse event reports, compared with 298 received over the same time period in 2007. 
Of the 948 adverse event reports, 596 were mandatory reports of serious adverse events submitted by industry; the remaining 352 were voluntary reports, which include all moderate and mild adverse events reported and any serious adverse events reported by health care practitioners and consumers directly to FDA. As shown in figure 2, FDA received more serious adverse event reports between January 1, 2008, and October 31, 2008, than previous years, including 2003 and 2004, when FDA was receiving adverse event reports related to ephedra. Adverse event reports from January 1, 2008, through October 31, 2008, include 596 serious adverse event reports submitted by industry, 163 serious adverse events reported by others on a voluntary basis, and 189 moderate and mild adverse event reports. Since mandatory reporting went into effect, FDA had received 596 mandatory reports of adverse events, such as serious cardiac, respiratory, and gastrointestinal disorders, as of October 31, 2008. Among other results, these events involved 9 deaths, 64 life-threatening illnesses, and 234 patient hospitalizations. As shown in table 2, 66 percent of serious adverse event reports were associated with dietary supplements that either contained a combination of types of products, such as a product containing both vitamins and herbals, or could not be categorized under one of FDA’s other product classifications, and 40 percent were associated with vitamins. However, according to FDA, because of variability in the quality and detail of information in reports and the lack of a control group, the agency cannot necessarily determine a causal relationship between an adverse event and the dietary supplement associated with the event. Appendix II provides further detail on adverse event reports related to dietary supplements received by FDA from January 1, 2003 through August 6, 2008. Although FDA has received a greater number of reports since mandatory reporting requirements went into effect, FDA recently estimated that the actual number of total adverse events—including serious, moderate, and mild—related to dietary supplements per year is over 50,000. This estimate suggests that underreporting of adverse events limits the amount of information that FDA receives regarding safety concerns related to dietary supplements or their ingredients and, according to FDA, this can negatively impact the agency’s ability to identify safety concerns. Experts have cited several possible reasons for underreporting related to dietary supplements, including reduced attribution of adverse effects to supplements due to the assumption that all dietary supplements are safe, the reluctance of consumers to report dietary supplement use to physicians, the failure to recognize chronic or cumulative toxic effects from their use, and a cumbersome reporting process. To facilitate adverse event reporting for any FDA-regulated products, FDA is currently developing MedWatchPlus, an interactive Web-based portal intended to simplify the reporting process and reduce the time and cost associated with reviewing paper reports. For example, according to FDA planning documents, MedWatchPlus would simplify the reporting process by providing a single Internet portal for consumers, health care providers, and industry to report an adverse event. 
Furthermore, the proposed interactive format will prompt reporters to provide relevant information based on the type of products involved in the adverse event—thereby facilitating reporting and improving the quality of information FDA receives. Once an event is reported, the information would be automatically routed to the relevant FDA centers based on the type of product involved. Testing and release of the interactive questionnaire phase of the project is currently expected in 2009. FDA has taken some steps—such as analyzing adverse event reports and detaining certain potentially unsafe imported products—to identify and act upon safety concerns related to dietary supplements. However, several factors limit the agency’s ability to detect concerns and efficiently and effectively remove products from the market. For example, FDA has limited information on the number and location of dietary supplement firms, the identity and ingredients of products currently available in the marketplace, and mild and moderate adverse events reported to industry. Additionally, FDA dedicates relatively few resources to dietary supplement oversight activities compared with other FDA-regulated products. Moreover, once the agency has identified a safety concern, the agency’s ability to efficiently and effectively remove a product from the market is hindered by a lack of mandatory recall authority and the difficulty of establishing adulteration for dietary supplement products under the significant or unreasonable risk standard. Although FDA has taken some steps, such as drafting guidance for industry on reporting serious adverse events and establishing its Current Good Manufacturing Practice regulations, to improve the oversight of dietary supplements over the past several years, consumers remain vulnerable to risks posed by potentially unsafe products. FDA uses a variety of approaches to identify potential safety concerns related to dietary supplements. For example, FDA may identify concerns through surveillance actions such as monitoring adverse event reports and consumer complaints, screening imports, and conducting inspections. For instance, during almost half of the 909 inspections conducted at dietary supplement firms from fiscal year 2002 through May 6, 2008, FDA and its partners at the state level identified potential problems, such as a lack of quality control and unsanitary conditions. Table 3 provides examples of FDA surveillance related to dietary supplements. For more detailed information on FDA’s actions to identify potential safety concerns, see appendix II. In addition, FDA monitors the Internet to identify products that purport to be dietary supplements but may be fraudulently promoted for treating diseases. According to FDA, such products pose a threat to public health because the disease prevention and treatment claims often persuade consumers to delay or forgo medical diagnosis and treatment. FDA officials also told us they identify safety concerns by obtaining information from other agencies at the state, federal, and international level; reviewing scientific literature; sponsoring safety-related research; and targeting safety-related investigations on particular classes of products. 
For instance, according to FDA, the agency used adverse event information from the Florida Department of Health to issue a consumer warning about the product “Total Body Formula.” FDA officials also described current safety-related investigations the agency has initiated targeting specific classes of products, such as ephedra substitutes; male potency enhancers that contain undeclared active pharmaceutical ingredients; and products making misleading health claims to prevent or cure serious illnesses such as diabetes, severe acute respiratory syndrome, and influenza. For example, officials said they are currently contracting with the University of California at Los Angeles to monitor adverse event reporting related to ephedra substitutes. In addition, FDA is conducting animal testing through the National Center for Toxicological Research to examine interactions among weight loss supplement ingredients, according to agency officials. Once FDA has identified a potential safety concern, the agency has several options available for taking action. According to FDA officials, products or ingredients of greatest concern for public health generally will be subject to either administrative or judicial enforcement actions, whereas FDA will take advisory actions against products of lower public health risk. FDA officials also noted that, if a firm does not correct violations in response to an advisory action, FDA may pursue an enforcement action against the firm. Table 4 provides examples of FDA administrative and enforcement actions related to dietary supplements. For more detailed information on FDA’s actions in response to identified safety concerns, see appendix II. In addition to taking enforcement action on its own, FDA may pursue enforcement action in conjunction with another federal agency, such as the Federal Trade Commission, which has enforcement responsibility with regard to dietary supplement advertising. For example, as part of FDA’s Consumer Health Information for Better Nutrition initiative launched in 2002, FDA and the Federal Trade Commission took joint enforcement actions against several marketers of dietary supplement products making unsubstantiated treatment claims for diseases such as emphysema, diabetes, Alzheimer’s disease, cancer, and multiple sclerosis. Industry has also initiated some measures to address unsubstantiated claims. For example, based on monitoring efforts and company referrals, the National Advertising Division of the Council of Better Business Bureaus reviews advertising claims for accuracy and then recommends changes to companies as necessary. This program is currently funded through a series of multiyear grants from the Council for Responsible Nutrition. FDA officials also noted that the agency plans to expand dietary supplement oversight in the near term. In particular, FDA will add dietary supplement inspections as an option for its formal state contract agreements in 2009, which should increase the number of dietary supplement inspections performed by state officials on FDA’s behalf, according to FDA officials. To further increase the number of inspections, FDA is also exploring third-party certification as part of its Food Protection Plan: An Integrated Strategy for Protecting the Nation’s Food Supply. 
To improve the agency’s Internet surveillance, FDA has plans to implement a sophisticated computer program that will search the Web for unauthorized disease treatment claims, potentially searching hundreds of thousands of Web sites per minute compared with a manual search by FDA staff, according to an FDA official. Moreover, although agency officials stated it was too early to determine the effectiveness of the newly established Current Good Manufacturing Practice regulations and serious adverse event reporting requirements, these new tools could improve the agency’s ability to oversee the dietary supplement industry. While several stakeholders generally agreed that the new regulations could improve FDA’s ability to oversee the dietary supplement industry, some stakeholders raised concerns about FDA’s ability to enforce the new requirements given its limited resources. Although FDA has taken some steps to identify and act on safety concerns, limited information hinders FDA’s oversight of the dietary supplement industry. In addition, FDA dedicates relatively few resources to dietary supplement oversight. Furthermore, FDA is limited by a lack of authority to efficiently and effectively remove products from the market. FDA’s ability to identify safety concerns is hindered by a lack of information in three key areas: the identity and location of dietary supplement firms; the types and contents of products on the market; and product safety information, such as adverse event data. First, FDA lacks complete information on the names and location of dietary supplement firms within the agency’s jurisdiction. Although all dietary supplement firms must register with FDA as food facilities to provide information on their name and location, firms specializing in certain product categories, such as herbal products, are not required to self-identify as dietary supplement firms under current law. For example, a firm manufacturing products containing only herbs, such as echinacea and ginseng, would not be required to identify itself as a dietary supplement firm during the registration process. Consequently, FDA may not be aware of all dietary supplement firms that are currently operating. In addition, there is little assurance that FDA’s existing inventory of dietary supplement firms is accurate because this information is not updated in a systematic fashion. As one FDA official explained, a thorough review of FDA’s firm inventory would probably require dedicating 10 to 15 staff within each field office to the task for a year—which is unlikely given FDA’s current workload. However, FDA officials did indicate that modifying the existing registration categories to better reflect FDA’s inspection responsibilities could provide the agency with more complete information on the number and location of dietary supplement firms within its regulatory jurisdiction, provided industry complies with the new requirements. In FDA’s Food Protection Plan, the agency requested statutory changes to allow modifying existing registration categories and require biannual renewals for food facilities, stating that such changes would ensure FDA has accurate, up-to-date information and would help the agency assess and respond to potential threats to the food supply. Second, FDA does not have comprehensive information on the types and contents of dietary supplement products that are on the market or their ingredients. 
In addition, FDA officials noted that, if a dietary supplement firm reformulates a product to include different ingredients or changes the amounts of the ingredients without renaming the product, FDA may not be aware of the changes. Although drug manufacturers are required by law, with some exceptions, to register the identity and active ingredients of their products with FDA, the agency lacks the authority to require similar product information from dietary supplement manufacturers. Detailed product information could help the agency more efficiently and effectively analyze the adverse event reports it receives. For example, according to FDA, voluntary reports often contain inaccurate or incomplete information on product ingredients. Complete information on product ingredients could help the agency establish links between mandatory and voluntary reports on products containing the same ingredient. Furthermore, a database of marketed products and their ingredients could help the agency respond more quickly to safety concerns. For instance, if FDA identified a particular ingredient of concern, officials could quickly determine which products on the market contained the ingredient and tailor the agency’s response accordingly. Third, FDA’s ability to identify safety concerns is undermined by a lack of information on product safety, such as data on the frequency and characteristics of adverse events related to dietary supplements. As we noted earlier in this report, although dietary supplement firms are required to report to FDA all serious adverse events reported to them, they are not required to report mild or moderate adverse events. Additional information on adverse events could be particularly beneficial because there is a limited amount of scientific data available on the safety of dietary supplements compared with other regulated products such as drugs, which require premarket approval. For instance, FDA officials noted that mandatory reporting of mild and moderate events could assist the agency by increasing the amount of data available for signal detection, as well as provide additional support for safety-related conclusions regarding particular products or ingredients. Although some stakeholders have pointed out that mandatory manufacturer reporting of mild and moderate events will not fully address the issue of underreporting—particularly by consumers and health care providers—most medical researchers we interviewed agreed that mandatory reporting of all adverse events would be beneficial to the agency. FDA dedicates relatively few resources to dietary supplement oversight activities, including conducting inspections and developing guidance for industry on key safety-related aspects of DSHEA. Our analysis of FDA expenditure data found that FDA dedicated approximately 4 percent of CFSAN resources and 1 percent of its field resources—which fall under FDA’s Office of Regulatory Affairs—to dietary supplement programs in fiscal years 2006 and 2007. FDA uses its field resources to, for example, monitor industry compliance by conducting surveillance actions such as inspections and import screenings. As FDA officials explained, limited inspection resources are prioritized according to public health risk, and dietary supplements are generally considered to be a lesser risk than, for example, foods that could be contaminated with foodborne pathogens. 
Consequently, although FDA conducted 973 inspections of foreign food firms from fiscal year 2002 through fiscal year 2008, FDA conducted no foreign inspections of dietary supplement firms during this time period. Similarly, although FDA increased the number of domestic inspections of dietary supplement firms in fiscal years 2004 and 2005, overall, these inspections represented less than 1 percent of total food establishment inspections conducted by FDA and its state partners from fiscal year 2002 through fiscal year 2008. With few resources dedicated to dietary supplement inspections, FDA’s ability to identify potential safety concerns through this key surveillance activity is limited. Furthermore, despite identifying the need to provide industry with guidance on key aspects of DSHEA, FDA has not done so in a timely manner. For example, DSHEA authorized FDA to establish Current Good Manufacturing Practices specific to dietary supplements in 1994; however, the agency did not publish a proposed rule until 2003 and did not finalize the rule until 2007. FDA officials noted that the agency first issued an advance notice of proposed rulemaking in 1997 and went through a number of steps, such as conducting public meetings, to develop an overall strategy for regulating dietary supplements and then submitted its rule to the Office of Management and Budget for clearance before finalizing the rule. Because these Current Good Manufacturing Practices are phased in over time, they will not fully be in effect until 2010—16 years after FDA was authorized to establish them. In addition, although FDA recognized the need to develop guidance on the new dietary ingredient provisions of DSHEA, FDA has yet to issue this guidance—an omission previously highlighted in our 2000 report. As an FDA official explained, new dietary ingredient guidance is critical for dietary supplement safety because, without formal guidance, firms may not notify FDA before marketing products whose safety profiles differ drastically from an ingredient’s historical use. For example, this official was concerned that a firm could use bitter orange’s historical use as a flavoring in marmalade as justification for not submitting a new dietary ingredient notification to FDA when it uses bitter orange to create a product that is 95 percent synephrine—a powerful stimulant. Similarly, a firm might choose to market dietary supplement products that contain nano-sized particles of grandfathered ingredients without notifying FDA in advance. According to the FDA official, this raises concerns because potential health risks associated with nano-sized particles are unknown. According to this official, FDA has started to develop draft guidance for new dietary ingredients that would clarify what factors FDA will use when determining if a substance is a new dietary ingredient. More specifically, the guidance would clarify what changes to grandfathered ingredients would require a new dietary ingredient notification to FDA and what information should be included in the notification, among other items. However, this draft guidance has been under legal review for over a year, and FDA did not provide us with a time frame for its issuance. Once FDA has identified a safety concern, the agency’s ability to efficiently and effectively remove a product from the market is hindered by a lack of mandatory recall authority. 
For instance, FDA officials commented that FDA’s ability to protect consumers through voluntary recalls is limited because it relies on industry exercising its responsibility rather than on enforceable requirements. As FDA noted in its Food Protection Plan, mandatory recall authority would allow the agency to ensure the prompt and complete removal of unsafe products from distribution channels in cases where a firm was unwilling to cooperate voluntarily. Additionally, agency officials and other stakeholders have acknowledged the difficulty of banning a dietary supplement because FDA must establish adulteration under the significant or unreasonable risk standard. For example, it took FDA almost 10 years after issuing its first advisory about ephedra to gather sufficient data to meet the statutory burden of proof for banning ephedra from the market. The difficulty of establishing significant or unreasonable risk is compounded by limited scientific research on the safety of dietary supplements, which are generally presumed safe under the law, and by the fact that firms are not required to provide FDA with evidence of product safety for ingredients marketed prior to October 15, 1994, such as ephedra. Underreporting of adverse events also limits FDA’s ability to meet its burden of proof. In the case of ephedra, one firm withheld information from FDA on thousands of serious adverse event reports related to its product, which hindered FDA’s investigation and prompted support for establishing mandatory reporting requirements. As previously mentioned in this report, although mandatory serious adverse event reporting requirements for industry are now in effect, underreporting of all adverse events by consumers, health care providers, and industry remains a concern. According to an agency official, given these data limitations and the agency’s difficult and costly experience with ephedra, banning an ingredient is not a very viable option. However, according to some experts, the difficult process of establishing significant or unreasonable risk for dietary supplement ingredients with known safety concerns has raised doubts about FDA’s ability to adequately protect the public. For example, table 5 summarizes FDA actions for certain dietary supplement ingredients that have been banned in other countries. Although FDA has taken some actions, such as issuing warnings, when foods contain unsafe dietary ingredients, certain factors may allow unsafe products to reach consumers. FDA may not know when a company has made an unsupported or incorrect GRAS determination about an added dietary ingredient in a product until after the product becomes available to consumers because companies are not required to notify FDA of their self-determinations. In addition, the boundary between dietary supplements and foods containing added dietary ingredients is not always clear, and some food products could be marketed as dietary supplements to circumvent the safety standard required for food additives. Finally, according to FDA officials, the agency conducts a limited amount of monitoring for safety concerns associated with foods containing added dietary ingredients. The Federal Food, Drug, and Cosmetic Act allows companies to market a conventional food product with added dietary ingredients if the company determines that the added dietary ingredient meets the GRAS standard. These companies do not have to notify FDA before selling the product to consumers, although some may do so voluntarily. 
If a company makes an unsupported or incorrect GRAS determination about an added dietary ingredient in a product, FDA may not know about the product until after it becomes available to consumers. This was the case, for example, for several food products containing such herbs as kava, ginkgo, and echinacea. Specific examples are as follows: In 2004, during a food inspection of a juice company, FDA found that the company was marketing a product that contained kava. According to FDA, it is not aware of a basis for concluding that kava is GRAS, and it has not approved kava as a food additive. In addition, kava was the subject of a public health advisory issued by FDA in March 2002, which warned consumers of the potential risk of severe liver injury associated with the use of kava. In 2001, FDA identified a company marketing cereals with ginkgo biloba and Siberian ginseng. According to FDA, it is not aware of a basis for concluding that these ingredients are GRAS, and it has not approved them as food additives. FDA sent a warning letter to the company, and the product was subsequently removed from the market. Also in 2001, FDA identified a company marketing juices with added echinacea. FDA sent a warning to the company noting that it has not approved echinacea as a food additive and is not aware of a basis for concluding that echinacea is GRAS. FDA learned of these products after they were available to consumers. If FDA wanted to remove these products from the market, and the companies did not do so voluntarily, FDA would have to initiate enforcement actions. The boundary between dietary supplements and foods containing added dietary ingredients is not always clear. FDA officials have noted, for example, that a tea with an identical mix of herbal ingredients could be considered either a dietary supplement or a food product. FDA determines how to classify the tea based on the product labeling. More specifically, according to FDA, if the tea is labeled as a dietary supplement and is not represented as a conventional food, FDA would consider the tea to be a dietary supplement and regulate it as such. On the other hand, if the tea is labeled as a food or is represented as a conventional food with terms such as “drink” or “beverage,” FDA officials noted that they would consider the tea to be a food. The way FDA classifies a product is important because the safety standard that applies to the product varies based on that classification. If the product is classified as a conventional food, the added dietary ingredient must meet the GRAS standard or be approved by FDA as a food additive, except in certain circumstances as authorized in law. If the product is classified as a dietary supplement, the added dietary ingredient is presumed safe if it was marketed in the United States before October 15, 1994; otherwise, it is considered a new dietary ingredient, and the manufacturer or distributor may be required to notify FDA 75 days before the product with the added dietary ingredient enters the market and provide some basis for concluding that the ingredient is reasonably expected to be safe. According to FDA and industry officials, this is a less stringent standard than that for food additives. However, FDA does not have the authority to require that the safety of dietary supplements be approved before entering the market. 
These differences in how products are regulated may lead to circumstances when an ingredient would not be allowed to be added to a product if it was labeled as a conventional food but would be allowed in the identical product if it was labeled as a dietary supplement. This was the case, for example, in August 2007, when FDA identified a company marketing an iced tea mix containing stevia—an herb that had not been approved as a food additive because of potential safety concerns, including reproductive and cardiovascular effects. FDA issued a warning to the company; however, rather than discontinue using stevia in its product, the company changed the label to classify the product as a dietary supplement rather than a food, and the product remains on the market. We identified other products that also fall within the gray area between dietary supplements and foods with added dietary ingredients that are being marketed as dietary supplements. For example, we identified several nutrition bars, teas, and energy drinks, some produced by large companies with national distribution, which contain herbs such as kava, St. John’s wort, and echinacea. If these ingredients are added to conventional foods and are not GRAS and have not been approved as food additives, then they would violate the Federal Food, Drug, and Cosmetic Act. An FDA official told us that FDA is unaware of a basis for concluding that these ingredients are GRAS, and they have not been approved as food additives. However, these products may remain on the market because they are labeled as dietary supplements. Such a process might allow companies to circumvent the safety standard required for food additives. In FDA’s 10-year plan to implement DSHEA, issued in January 2000, the agency identified the need to clarify the boundary between conventional foods and dietary supplements but did not indicate when or how the agency planned to address this issue. Moreover, we highlighted this particular issue in our July 2000 report and recommended FDA take action to clarify the boundary between conventional foods and dietary supplements. As of November 2008, the agency had not issued regulations or guidance to clarify this boundary. According to FDA officials, the agency conducts limited monitoring for safety concerns associated with food products that contain added dietary ingredients. These officials explained that FDA does not track these products separately from foods, and the agency generally relies on trade complaints and adverse event reports to identify concerns about these types of products. FDA officials told us that the current regulatory framework is sufficient to identify and act on safety concerns regarding foods with added dietary ingredients. FDA held a public meeting in 2006 regarding these products and, according to FDA officials, the agency is currently evaluating the comments made during that meeting. Some stakeholders told us that safety risks associated with foods containing added dietary ingredients that meet the GRAS standard or have been approved as food additives are generally low. For example, stakeholders were generally not concerned about vitamin-fortified products, such as cereal, unless individuals consume these products in high doses. However, some stakeholders we spoke with raised concerns about certain products—such as energy drinks that contain stimulants and have the potential to cause adverse cardiac effects. 
In addition, some stakeholders expressed concern about adding botanicals to foods due, in part, to the potential for an adverse physiological response. In contrast, an industry official noted that companies sometimes add dietary ingredients to foods for labeling or marketing purposes—not to elicit a physiological effect—and, therefore, the amounts included are low. While FDA has conducted some consumer outreach, these initiatives have reached a relatively small proportion of consumers using dietary supplements. Additionally, surveys and experts indicate that consumers are not well-informed about the safety and efficacy of dietary supplements and have difficulty interpreting labels on these products. Without a clear understanding of the safety, efficacy, and labeling of dietary supplements, consumers may be exposed to greater health risks associated with the uninformed use of these products. FDA has taken some steps to educate consumers about the safety, efficacy, and labeling of dietary supplements. According to FDA officials, the agency primarily educates the public about dietary supplements through publications such as brochures and articles, as well as the agency’s Web site. For example, agency officials highlighted the following efforts: FDA and the NIH’s Office of Dietary Supplements jointly published a brochure in 2004 to educate consumers about the importance of disclosing their dietary supplement usage to doctors. In March 2006, FDA developed a document entitled “Food Facts: Dietary Supplements—What You Need to Know,” which provides general information about dietary supplements. In August 2008, FDA distributed an article entitled “FDA 101: Dietary Supplements” via e-mail and its Web site that contained information on the regulation of dietary supplements, as well as information on the safety, efficacy, and labeling of these products. FDA’s Web site provides warnings about certain ingredients and products, how to report an adverse event, and general consumer information about dietary supplements, including descriptions of the types of label claims permitted on dietary supplement products. FDA’s Web site also links consumers to the NIH, Federal Trade Commission, United States Department of Agriculture, and National Academies’ Institute of Medicine Web pages that contain information about the safety and efficacy of certain dietary supplement ingredients and how to interpret dietary supplement labels. FDA has worked jointly with industry, consumer groups, and other federal agencies to provide consumers with information about label claims. However, these outreach efforts can only be as effective as the number of dietary supplement users they reach. While data from the 2007 National Health Interview Survey show that over half of the U.S. adult population—or at least 114 million individuals—consume dietary supplements, we found that FDA’s outreach efforts have limited potential to reach the majority of U.S. adults using dietary supplements. For example, according to FDA and NIH officials, since 2004, the brochure regarding disclosure of supplement use to doctors had a distribution of 40,000 paper copies and received about 171,000 total visits on the FDA and NIH Web sites—figures that together represent less than 1 percent of estimated dietary supplement users. Other FDA publications on dietary supplements have also reached a relatively small proportion of dietary supplement consumers. 
For example, according to FDA officials, the agency distributed about 61,000 English copies and approximately 35,000 Spanish copies of its document entitled “Food Facts: Dietary Supplements—What You Need to Know.” In addition, according to FDA officials, its consumer article on dietary supplements, “FDA 101: Dietary Supplements,” was sent via e-mail to almost 32,500 subscribers to FDA’s “Consumer Health Information” and, as of October 21, 2008, FDA’s Web site had logged about 3,800 page views of the HTML version and approximately 2,100 page views of the printer-friendly PDF of the article. While agency officials stated that FDA does not evaluate the effectiveness of its outreach efforts, officials also noted that the agency must continually market its desired messages to effectively educate consumers. Additionally, consumer education was highlighted as an important part of the agency’s 10-year plan for dietary supplements, published in 2000. In the November 2004 update to this plan, FDA identified the need to provide consumers with access to reliable scientific information about the safety of ingredients and supplements so that consumers may make more informed choices. Currently, according to FDA, CFSAN has no ongoing consumer education initiatives for dietary supplements and is not planning any new ones. FDA recently announced a partnership with WebMD to expand consumer access to timely and reliable health information; however, it is not clear to what extent FDA will use this partnership to increase consumer understanding about dietary supplements. When asked about plans for consumer education initiatives, FDA officials explained that the agency has been directing its limited resources toward activities that can have the greatest public health impact, such as responding to foodborne illness outbreaks. Several studies indicate that consumers are not well-informed about the safety, efficacy, and labeling of dietary supplements. For example, a 2002 Harris Poll indicated that a majority of adults are misinformed about the extent to which government regulates the safety of vitamins, minerals, and food supplements. According to the poll, over half of respondents believed that dietary supplements are approved by a government agency. A 2002 FDA-sponsored health and diet survey also estimated that a majority of respondents who used vitamin or mineral supplements believed that the government approves dietary supplement products before they are marketed to consumers. However, FDA does not have the authority to require that supplements be approved for safety and effectiveness prior to marketing, and, unless a product contains a new dietary ingredient, a manufacturer need not notify FDA prior to marketing a dietary supplement. Additionally, the 2002 Harris Poll estimated that about two-thirds of respondents believe that the government requires dietary supplement labels to contain warnings about potential side effects or dangers, as it does for drugs. However, unlike drug manufacturers, who are required to include warnings related to adverse effects and contraindications on their product labels, dietary supplement manufacturers are required to include few such warnings on their product labels. Consequently, dietary supplement manufacturers may not necessarily include warnings about potential adverse effects on the labels of their products. 
For example, in the course of our review, we identified several dietary supplements containing ingredients with known or suspected adverse effects, such as kava and black cohosh, whose labels did not include warnings. In addition, in 2003, an analysis of 100 dietary supplement labels by the Department of Health and Human Services’ Office of Inspector General found that the labels were limited in their ability to guide the informed and appropriate use of dietary supplements among consumers and often did not present information in a manner that facilitates consumer understanding. Furthermore, during the course of our review, most experts we spoke with noted that, generally, consumers are not well-informed about the safety and efficacy of dietary supplements. These experts explained that many consumers believe various myths about dietary supplements. For example, consumers may believe that if a product is natural, it must be safe; if a little is good, then more must be better; and if a product does not have a warning label, it must be safe. Without a clear understanding of the safety, efficacy, and labeling of dietary supplements, consumers are exposed to potential health risks associated with the uninformed use of these products. For example, several experts stated that misconceptions about dietary supplements could cause consumers to incorrectly assess the risks and benefits of these products and, in some cases, substitute supplements for prescribed medicine. In addition, several experts noted that consumers may not be aware that taking combinations of some supplements or using certain products in conjunction with prescription drugs could lead to harmful and potentially life-threatening results. In particular, some supplements—such as garlic, ginkgo biloba, ginseng, and vitamin E—may cause blood thinning and lead to life-threatening complications during surgical procedures. Therefore, consumer education is critical to mitigate the potential risks associated with the uninformed use of dietary supplements. Americans are widely interested in maintaining health and wellness, and with an aging population, we expect consumers’ interest in dietary supplements to continue to grow. These consumers confront an extensive variety of dietary supplements available in the marketplace, but little is known about the safety and efficacy of these products. Yet most dietary supplements are presumed safe under current law, and companies do not need premarket approval for any dietary supplement. If FDA has concerns about a particular dietary supplement product or ingredient, the agency bears the burden of proof to require removal of the product from the market. In the case of ephedra—which was implicated in thousands of adverse events and a number of deaths—FDA faced a long and arduous process before finally banning the product in 2004. At the same time, while more and more products are entering the market each year, FDA is dedicating a small percentage of its resources to regulating the dietary supplement industry and educating consumers about dietary supplements. FDA does not have comprehensive knowledge of dietary supplement manufacturers or the products on the market and has little information about potential side effects of various products. In addition, consumers are not well-informed about dietary supplements, may not be aware of potential side effects of supplements, and might not consider a dietary supplement as a possible cause when experiencing an adverse reaction. 
Weaknesses in the regulatory system may increase the likelihood of unsafe products reaching the market, and a lack of consumer knowledge increases the potential health risks associated with uninformed consumption. Overall, we are making four recommendations to enhance FDA’s oversight of dietary supplements and foods with added dietary ingredients. 1. To improve the information available to FDA for identifying safety concerns and better enable FDA to meet its responsibility to protect the public health, we recommend that the Secretary of the Department of Health and Human Services direct the Commissioner of FDA to request authority to require dietary supplement companies to (1) identify themselves as dietary supplement companies as part of the existing registration requirements and update this information annually, (2) provide a list of all dietary supplement products they sell and a copy of the labels and update this information annually, and (3) report all adverse events related to dietary supplements. 2. To better enable FDA to meet its responsibility to regulate dietary supplements that contain new dietary ingredients, we recommend that the Secretary of the Department of Health and Human Services direct the Commissioner of FDA to issue guidance to clarify when an ingredient is considered a new dietary ingredient, the evidence needed to document the safety of new dietary ingredients, and appropriate methods for establishing ingredient identity. 3. To help ensure that companies follow the appropriate laws and regulations and to renew a recommendation we made in July 2000, we recommend that the Secretary of the Department of Health and Human Services direct the Commissioner of FDA to provide guidance to industry to clarify when products should be marketed as either dietary supplements or conventional foods formulated with added dietary ingredients. 4. To improve consumer understanding about dietary supplements and better leverage existing resources, we recommend that the Secretary of the Department of Health and Human Services direct the Commissioner of FDA to coordinate with stakeholder groups involved in consumer outreach to (1) identify additional mechanisms—such as the recent WebMD partnership—for educating consumers about the safety, efficacy, and labeling of dietary supplements; (2) implement these mechanisms; and (3) assess their effectiveness. We provided a draft copy of this report to the Department of Health and Human Services for review and comment. We received a written response from the Acting Assistant Secretary for Legislation that included comments from FDA. FDA generally agreed with each of the report’s recommendations and welcomed the report as a means of calling attention to the challenges FDA faces with respect to regulating dietary supplements and conventional foods formulated with added dietary ingredients. FDA noted that although receiving information on all adverse events related to dietary supplements could enhance its ability to detect signals of potential toxicity over time, the agency raised concerns about its ability to efficiently and effectively analyze the information to identify unsafe dietary supplements. However, FDA stated that it is working on methodologies to mitigate this concern and improve data mining for safety-related signals if FDA were to receive all adverse event reports. 
In addition, FDA recognized the need for guidance to industry clarifying when products should be marketed as conventional foods or dietary supplements and stated that the agency will consider this recommendation and its implementation in light of FDA’s limited resources and competing priorities. Furthermore, FDA noted that the agency’s resources for consumer education are extremely limited and that it may not be able to effectively conduct consumer education on its own. FDA commented that collaborating with NIH’s Office of Dietary Supplements may be an efficient and cost-effective way to expand FDA’s current outreach activities. FDA also stated that the agency is identifying appropriate content for the recently announced FDA/WebMD partnership referenced in the report and anticipates that information on dietary supplements will be included. FDA’s comments are presented in appendix IV of this report. FDA also provided technical comments on the draft report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretary of the Department of Health and Human Services; the Commissioner of FDA; the Director of the Office of Management and Budget; and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or shamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. We were asked to examine the Food and Drug Administration’s (FDA) oversight of dietary supplements and foods that contain added dietary ingredients. Specifically, we were asked to examine FDA’s (1) actions to respond to the new serious adverse event reporting requirements; (2) ability to identify and act on concerns about the safety of dietary supplements; (3) ability to identify and act on concerns about the safety of foods with added dietary ingredients; and (4) actions to educate consumers about the safety, efficacy, and labeling of dietary supplements. Our work included dietary supplements for human use only. We did not assess FDA’s regulation of dietary supplements for animal use. To identify FDA’s actions to respond to the new serious adverse event reporting requirements, we reviewed FDA’s guidance on reporting requirements for industry and internal procedures for compiling and tracking adverse event reports. In addition, we obtained and analyzed data on the number and type of reports received before and after the requirements went into effect. We verified our methodology for analyzing these data with FDA officials, and FDA verified our results. We also reviewed FDA’s plans for improving adverse event reporting. To examine FDA’s ability to identify and act on safety concerns associated with dietary supplements, we reviewed relevant laws and regulations, as well as FDA planning documents and guidance, such as the Dietary Supplement Health and Education Act of 1994, the Current Good Manufacturing Practice regulations, and guidance on reporting adverse events. 
In addition, we obtained and analyzed data on FDA’s internal procedures and activities to identify safety concerns, such as conducting inspections and import screenings and receiving consumer complaints. We also obtained and analyzed FDA’s internal procedures and data on the agency’s actions once a safety concern is identified, including issuing warning letters, seizing products, and banning ingredients. We analyzed these data to determine the range and extent of actions FDA has taken to identify and act on safety concerns associated with dietary supplements. We verified our methodology for analyzing these data with FDA officials, and FDA verified our results. Furthermore, we reviewed data on FDA resources dedicated to dietary supplements. To examine FDA’s ability to identify and act on concerns about the safety of foods with added dietary ingredients, we reviewed laws and regulations regarding food additives. In addition, we reviewed FDA’s procedures for identifying and acting on safety concerns of foods with added dietary ingredients. We also identified and analyzed instances of actions taken by FDA to act on safety concerns associated with the addition of dietary ingredients to foods. To determine FDA’s actions to educate consumers about the safety, efficacy, and labeling of dietary supplements, we reviewed FDA’s consumer outreach initiatives. We also obtained and analyzed data on the extent to which these outreach initiatives were distributed. In addition, we analyzed data from FDA and others on consumer understanding of dietary supplements. To compare FDA’s regulation of dietary supplements with select other countries’ regulation of these products, we spoke with representatives from the governments of Canada, Japan, and the United Kingdom. In addition, we reviewed documents about the regulation of dietary supplements in these countries. We did not independently verify descriptions of foreign laws. We selected these countries because they had been identified in prior GAO work as having comparable food safety systems and covered a relatively diverse geographic area (Europe, North America, and Asia.) To assess the reliability of the data from FDA’s databases used in this report, we reviewed related documentation, examined the data to identify obvious errors or inconsistencies, and worked with agency officials to identify any data problems. We determined the data to be sufficiently reliable for the purposes of this report. To obtain insights on all four objectives, we met with a wide range of experts, including officials from federal and state agencies, industry and trade organizations, consumer advocacy groups, academia, and poison control centers. Through these efforts, we obtained documents and information related to all four objectives. At the federal level, we met with officials from FDA, including headquarters and regional and district level officials, to discuss the agency’s regulatory authorities, actions taken to implement the mandatory adverse event reporting system, steps taken to regulate the safety of dietary supplements and foods with added dietary ingredients, and consumer education responsibilities and actions. In addition, we met with officials from the National Institutes of Health, Federal Trade Commission, and Department of Agriculture. At the state level, we met with officials from the California Department of Public Health’s Food and Drug Branch and Environmental Protection Agency and the New York State Task Force on Life and the Law. 
To obtain insights from the dietary supplement and food industries, we met with officials from the American Beverage Association, American Herbal Products Association, Consumer Healthcare Products Association, Council for Responsible Nutrition, Grocery Manufacturers Association, National Advertising Division of the Council of Better Business Bureaus, and Natural Products Association. In addition, we met with officials from a large dietary supplement manufacturer in Maryland, a multinational food and consumer products firm, and two small, herbal products manufacturers in California. To obtain insights from consumer advocacy groups, we met with officials from the Center for Science in the Public Interest, Consumers Union, and Public Citizen. To obtain insights from public health organizations, the health care community, and academia, we met with officials from the American Association of Poison Control Centers; American Medical Association; California Poison Control System; New York City Poison Control Center; U.S. Pharmacopoeia; Baylor College of Medicine; Critical Path Institute; Center for Advanced Food Technology, Rutgers University; Stony Brook University; Center for Consumer Self Care, Department of Clinical Pharmacy, and Osher Center for Integrative Medicine, University of California, San Francisco; and the University of California, Berkeley. We conducted this performance audit from December 2007 through January 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides additional detail on FDA’s actions to identify and respond to safety concerns related to dietary supplements. FDA actions to identify safety concerns related to dietary supplements include receiving and analyzing adverse event reports and consumer complaints and conducting inspections. Table 6 compares the number of adverse event reports received and entered into FDA’s databases for review related to dietary supplements and drugs and biologics from January 1, 2003, through December 31, 2007. Table 7 compares the number of dietary supplement-related adverse event cases characterized as serious from January 1, 2003, through October 31, 2008, and the total number of dietary supplement-related adverse event cases. Table 8 shows the number and types of outcomes for all dietary supplement-related adverse event cases received by FDA from January 1, 2003, through October 31, 2008. Table 9 shows the number of dietary supplement-related adverse event cases by product type from January 1, 2003, through October 31, 2008. Table 10 shows the number of dietary supplement-related consumer complaints by adverse event result for fiscal year 2001 through July 3, 2008. Table 11 shows the number of dietary supplement-related consumer complaints with adverse symptoms present by adverse event result and FDA product class from fiscal year 2001 through July 3, 2008. Table 12 shows the number of foreign and domestic inspections of dietary supplement facilities compared with food inspections and total inspections conducted by FDA and states from fiscal years 2000 through 2008. 
Table 13 shows the percentage of dietary supplement inspections where investigators identified problems, from fiscal year 2002 through May 6, 2008. FDA actions to respond to safety concerns related to dietary supplements include issuing warning letters to dietary supplement firms, requesting recalls, and detaining and refusing imports. Table 14 shows the number of Federal Food, Drug, and Cosmetic Act violations cited in 293 dietary supplement-related warning letters issued from fiscal years 2002 through 2007. Table 15 shows the number of dietary supplement-related warning letters compared with total warning letters issued by FDA from fiscal years 2002 through 2007. Table 16 provides information on examples of Class I recalls related to dietary supplement products from fiscal years 2003 through 2008. According to FDA, Class I recalls are related to products that are dangerous and defective and pose a serious health concern. A firm may initiate a recall independently of FDA, or FDA may request a firm recall a product upon identifying a problem with a product. Table 17 shows the number of detentions without physical examination of imported dietary supplement products by general violation categories from fiscal year 2002 through March 24, 2008. Table 18 shows the number of detentions without physical examination of imported dietary supplement products by product classification from fiscal year 2002 through March 24, 2008. Table 19 shows the number of Federal Food, Drug, and Cosmetic Act violations cited in 3,605 dietary supplement-related import refusals from fiscal year 2002 through March 24, 2008. Table 20 shows the number of refused imports of dietary supplement products by product classification from fiscal year 2002 through March 24, 2008. FDA’s enforcement actions related to dietary supplements include seizures, injunctions, and criminal investigations. Table 21 shows information about dietary supplement-related seizure and injunction actions taken from fiscal year 2002 through July 18, 2008. Table 22 summarizes dietary supplement-related criminal investigations resulting in at least one conviction or with charges filed from 2002 through July 31, 2008. In comparison with the United States, Canada and Japan have more regulatory requirements in place for dietary supplements and related products. On the other hand, the United States has developed specific good manufacturing practices for dietary supplements while the United Kingdom has not. Table 23 compares the regulatory framework for dietary supplement products in these foreign countries with the U.S. regulatory system. In Canada, companies are required to obtain a product license to market natural health products, which include a range of products, such as vitamin and mineral supplements, herbal remedies, and other products, based upon their medicinal ingredients and intended uses. The product licensing application must include detailed information about the product, ingredients, potency, intended use, and evidence supporting the product’s safety and efficacy. Approved products are assigned a license number that is displayed on the product label. Manufacturers, packagers, labelers, and importers of natural health products must obtain a site license to perform these activities. To obtain a site license, a firm must provide evidence of quality control procedures that meet government standards for good manufacturing practices. 
Firms are required to report any serious adverse reactions associated with their products within 15 days and must provide information summarizing all adverse reactions, including mild or moderate events, on an annual basis. In the United Kingdom (U.K.), dietary supplements are legally termed “food supplements” and are regulated under food law—most of which is European Community (E.C.) legislation implemented at the national level, according to a U.K. official. Food supplements are generally not subject to premarket approval. For example, any supplement that either meets the guidelines established under E.U. law for specific vitamins and minerals, or does not include a new or genetically modified ingredient, does not require approval prior to marketing. According to a U.K. official, most direct oversight of the dietary supplement industry in the United Kingdom occurs at the local level of government. For example, all investigations, enforcement actions, and monitoring activities such as inspections are undertaken at the local level. Food supplement firms are required to register with local authorities and should detail the specific activities undertaken at each establishment as part of this process. However, centralized information on registered firms is not collected or maintained at a national level. Additionally, there is no centralized registry of food supplement products in the U.K. Although government standards for food good manufacturing practices apply to food supplement manufacturing, there are no good manufacturing practice guidelines specific for food supplements. Under E.C. law, firms are required to report any problems with food products to the local and national authorities and, if the product is injurious to health, the firm must remove it from the market. In Japan, products are regulated based on their product claims. There are two types of claims: Food with Nutrient Function Claims (FNFC), which are standardized, preapproved claim statements for certain vitamins and minerals with established benefits, and Food for Specified Health Uses (FOSHU) claims, which require government approval for safety and efficacy prior to marketing a product advertised as having a physiological effect on the body. Since FNFC claims are standardized and preapproved, firms do not need to notify the government prior to marketing a product using an approved FNFC claim, provided the product meets established ingredient content specifications. To use a FOSHU claim on a product, a firm is required to provide the government with evidence supporting the product’s physiological effect and safety prior to marketing. Additionally, a firm must provide information on the firm and its product to the government, as well as evidence of quality control processes. In addition to the individual named above, José Alfredo Gómez, Assistant Director; Kevin Bray; Nancy Crothers; Michele Fejfar; Alison Gerry Grantham; Barbara Patterson; Minette Richardson; Lisa Van Arsdale; and Chloe Wardropper made key contributions to this report.
Dietary supplements and foods with added dietary ingredients, such as vitamins and herbs, constitute multibillion dollar industries. Past reports on the Food and Drug Administration's (FDA) regulation of these products raised concerns about product safety and the availability of reliable information. Since then, FDA published draft guidance on requirements for reporting adverse events--which are harmful effects or illnesses--and Current Good Manufacturing Practice regulations for dietary supplements. GAO was asked to examine FDA's (1) actions to respond to the new serious adverse event reporting requirements, (2) ability to identify and act on concerns about the safety of dietary supplements, (3) ability to identify and act on concerns about the safety of foods with added dietary ingredients, and (4) actions to ensure that consumers have useful information about the safety and efficacy of supplements. FDA has made several changes in response to the new serious adverse event reporting requirements and has subsequently received an increased number of reports. For example, FDA has modified its data system, issued draft guidance, and conducted outreach to industry. Since mandatory reporting went into effect on December 22, 2007, FDA has seen a threefold increase in the number of all adverse event reports received by the agency compared with the previous year. For example, from January through October 2008, FDA received 948 adverse event reports--596 of which were mandatory reports submitted by industry--compared with 298 received over the same time period in 2007. Although FDA has received a greater number of reports since the requirements went into effect, underreporting remains a concern, and the agency has further actions planned to facilitate adverse event reporting. FDA has taken some steps to identify and act upon safety concerns related to dietary supplements; however, several factors limit the agency's ability to detect concerns and remove products from the market. For example, FDA has limited information on the number and location of dietary supplement firms, the types of products currently available in the marketplace, and information about moderate and mild adverse events reported to industry. Additionally, FDA dedicates relatively few resources to oversight activities, such as providing guidance to industry regarding notification requirements for products containing new dietary ingredients. Also, once FDA has identified a safety concern, the agency's ability to remove a product from the market is hindered by a lack of mandatory recall authority and the difficult process of demonstrating significant or unreasonable risk for specific ingredients. Although FDA has taken some actions when foods contain unsafe dietary ingredients, certain factors may allow potentially unsafe products to reach consumers. FDA may not know when a company has made an unsupported or incorrect determination about whether an added dietary ingredient in a product is generally recognized as safe until after the product becomes available to consumers because companies are not required to notify FDA of their self-determinations. In addition, the boundary between dietary supplements and conventional foods containing dietary ingredients is not always clear, and some food products could be marketed as dietary supplements to circumvent the safety standard required for food additives. 
FDA has taken limited steps to educate consumers about dietary supplements, and studies and experts indicate that consumer understanding is lacking. While FDA has conducted some outreach, these initiatives have reached a relatively small proportion of dietary supplement consumers. Additionally, surveys and experts indicate that consumers are not well-informed about the safety and efficacy of dietary supplements and have difficulty interpreting labels on these products. Without a clear understanding of the safety, efficacy, and labeling of dietary supplements, consumers may be exposed to greater health risks associated with the uninformed use of these products.
BIA’s irrigation program was initiated in the late 1800s, as part of the federal government’s Indian assimilation policy, and it was originally designed to provide economic development opportunities for Indians through agriculture. The Act of July 4, 1884, provided the Secretary of the Interior $50,000 for the general development of irrigation on Indian lands. Over the years, the Congress continued to pass additional legislation authorizing and funding irrigation facilities on Indian lands. BIA’s irrigation program includes over 100 “irrigation systems” and “irrigation projects” that irrigate approximately 1 million acres primarily across the West. BIA’s irrigation systems are non revenue-generating facilities that are primarily used for subsistence gardening and they are operated and maintained through a collaborative effort which generally involves other BIA programs, tribes, and water users. In contrast, BIA’s 16 irrigation projects charge their water users an annual operations and maintenance fee to fund the cost of operating and maintaining the project. Most of BIA’s irrigation projects are considered self-supporting through these operations and maintenance fees. The 16 irrigation projects are located on Indian reservations across the agency’s Rocky Mountain, Northwest, Southwest, and Western regions (see fig. 1). BIA’s management of the 16 irrigation projects is decentralized, with regional and local BIA offices responsible for day-to-day operations and maintenance. Table 1 provides the tribe or tribes served by each of the 16 irrigation projects along with the year each project was originally authorized. The irrigation facilities constructed by BIA included a range of structures for storing and delivering water for agricultural purposes. Figure 2 highlights an example of the key structural features found on BIA’s irrigation projects. The beneficiaries of BIA’s projects have evolved over time and at present are quite diverse. Over the years, non-Indians have bought or leased a significant portion of the land served by BIA’s irrigation program. As a result, current water users on BIA’s projects include the tribes, individual Indian landowners, non-Indian landowners, and non-Indian lessees of Indian lands. The extent of non-Indian landownership and leasing ranges significantly across BIA’s irrigation projects (see table 2). For example, 100 percent of the land served by the Colorado River Irrigation Project is Indian owned, while only about 10 percent of the land served by the Flathead Irrigation Project is Indian owned. Federal regulations and internal BIA guidance require that BIA collaborate with water users, both Indian and non-Indian, in managing the irrigation projects. For example, federal regulations state that close cooperation between BIA and water users is necessary and that the BIA official in charge of each project is responsible for consulting with all water users in setting program priorities. 
In addition, BIA's manual requires that BIA “provide opportunities for water user participation in matters relating to irrigation project operations” and that BIA's officer-in-charge “meet regularly with water users to discuss proposed [operation and maintenance] assessment rates … general operations and maintenance.” Although BIA guidance does not define “regularly,” BIA's Irrigation Handbook explicitly recommends that project staff meet at least twice annually to discuss work performed over the course of the year and allow for water user feedback and suggestions for the coming year. Furthermore, BIA's Irrigation Handbook states that, at a minimum, BIA should discuss annual project budgets and work plans with water users. Since their inception, BIA's 16 irrigation projects have been plagued by maintenance concerns. Construction of the projects was never fully completed, resulting in structural deficiencies that have continually hindered project operations and efficiency. In addition, water users and BIA have reported that operations and maintenance fees provide insufficient funding for project operations. Due to insufficient funding, project maintenance has been consistently postponed, resulting in an extensive and costly list of deferred maintenance items. Such deferred maintenance ranges from repairing or replacing dilapidated irrigation structures to clearing weeds from irrigation ditches. In addition, concerns regarding BIA's management of the projects have been raised for years, particularly in regard to its financial management practices. For example, problems concerning BIA's billing practices for its operations and maintenance fees have been raised by many, prompting independent review on more than one occasion. We and the Department of the Interior's Inspector General have both identified serious problems with the land use records BIA has used to develop its annual operations and maintenance bills. In response, BIA instituted a new financial management system called the National Irrigation Information Management System, which has begun to address some of the billing errors. However, concerns still exist regarding the accuracy of the data in the billing system. The accuracy of some of the information in the irrigation billing system is dependent on the irrigation program receiving accurate and timely information from other BIA programs, such as land ownership and leasing information from BIA's Real Estate Services program. In 2001, the Yakama tribe and individual tribal members filed appeals challenging the Wapato Irrigation Project's operation and maintenance fees for the pre-2000 and year 2000 bills. Furthermore, since 2001 the Wapato Irrigation Project has agreed not to send any bills to the tribe or its members. Although a settlement is under discussion, in the interim the Wapato Irrigation Project has not been able to collect about $2 million of its expected revenue annually. According to BIA's latest estimate, it will cost about $850 million to complete the deferred maintenance on all 16 of its irrigation projects, but this estimate is still being refined. BIA initially estimated its deferred maintenance costs at over $1 billion in fiscal year 2004, but acknowledged that this estimate was preliminary and would need to be revised largely because it incorrectly included new construction items and was developed by non-engineers. BIA revised this estimate downward in fiscal year 2005 based on the implementation of a new facilities management system.
However, BIA plans to further refine this estimate since some projects continued to incorrectly count new construction items as deferred maintenance. As part of its ongoing effort to identify the needs and costs of deferred maintenance on its 16 irrigation projects, BIA estimated in fiscal year 2004 that it would cost approximately $1.2 billion to complete all deferred maintenance. This initial estimate was based, in part, on preliminary condition assessments of irrigation structures and equipment for each of BIA’s 16 irrigation projects. These preliminary condition assessments generally consisted of visual inspections to classify each project’s structure and equipment using a scale of good, fair, poor, critical and abandoned based on the apparent level of disrepair. BIA staff then estimated how much it would cost to repair each item based on its condition classification. BIA generally defines deferred maintenance as upkeep that is postponed until some future time. Deferred maintenance varies from project to project and ranges from cleaning weeds and trees which divert water from irrigation ditches, to repairing leaky or crumbling check gates designed to regulate water flow, to resloping eroded canal banks to optimize water flow. Figure 3 shows examples of deferred maintenance on some of the irrigation projects we visited (clockwise from the upper left, figure 3 shows (1) a defunct check gate and overgrown irrigation ditch at the Fort Belknap Irrigation Project, (2) a cattle-crossing eroding a canal bank and impairing water flow at the Wind River Irrigation Project, (3) a crumbling irrigation structure at the Crow Irrigation Project, and (4) a check gate leaking water at the Colorado River Irrigation Project). For detailed information on key maintenance issues for each of the nine projects we visited, see appendix II. BIA officials acknowledged that their fiscal year 2004 deferred maintenance estimate was only a starting point and that it needed to be revised for three key reasons: (1) the individuals who conducted the assessments were not knowledgeable about irrigation projects or infrastructure; (2) not all projects used the same methodology to develop their deferred maintenance cost estimates; and (3) some projects incorrectly counted new construction items as deferred maintenance. BIA’s preliminary condition assessments were conducted by computer specialists, rather than by people with the expertise in irrigation or engineering needed to accurately assess project infrastructure. BIA contracted with geographic information system experts primarily to catalogue the structures on each project. These geographic information system experts also observed the condition of the structures they catalogued and classified the condition of each structure, based on the level of apparent disrepair, as part of the overall effort to inventory and map key structures on each project. Consequently, some items identified as being in “poor” condition may in fact be structurally sound but simply appear cosmetically dilapidated, whereas other structures classified as being in “good” condition may in fact be structurally dilapidated but appear cosmetically sound. For example, according to BIA staff at the Colorado River Irrigation Project, the recent repainting of certain check gates disguised severe rust and structural deterioration of key metal parts. BIA staff used inconsistent methodologies to develop the cost estimates for deferred maintenance. 
According to BIA staff, the deferred maintenance cost estimates were developed by different people, sometimes using different or unknown methodologies for assigning cost values to deferred maintenance items. For example, some projects developed their own cost estimates and sent them to BIA’s central office for inclusion in its overall figures, while BIA regional staff developed cost estimates for other projects based, in part, on information from BIA’s preliminary condition assessments. Some projects incorrectly included new construction items as deferred maintenance. According to BIA, work that would expand a project or its facilities should not be categorized as deferred maintenance. Therefore, expanding an existing water delivery system or constructing a new building is not deferred maintenance. However, some projects incorrectly counted new construction items as deferred maintenance. For example, the Fort Hall Irrigation Project included increasing the capacity of its main canal for about $15.3 million, the Duck Valley Irrigation Project included building new canals for about $1.3 million, and the Flathead Irrigation Project included building a new warehouse for about $147,000. To improve the accuracy of its deferred maintenance estimate in 2005 and to help staff develop, track, and continuously update deferred maintenance lists and cost estimates, BIA implemented MAXIMO—a facilities management system linked to the geographic information system mapping inventory developed from its preliminary condition assessments. Using data from MAXIMO, BIA revised its total deferred maintenance estimate for the irrigation projects downward to about $850 million for fiscal year 2005. Figure 4 shows the current deferred maintenance cost estimate for each of the 16 projects. In the summer of 2005, BIA technical experts from the central irrigation office conducted training for BIA irrigation projects on how to use MAXIMO to enter information on maintenance needs, and how to correctly define deferred maintenance. Projects used this system to revise their list of deferred maintenance items and associated cost estimates in fiscal year 2005. While MAXIMO is still being tailored to the needs of the irrigation program, its implementation generally standardized the process for identifying and calculating deferred maintenance among projects. Despite the implementation of MAXIMO, BIA’s fiscal year 2005 estimate of deferred maintenance is still inaccurate for the following reasons: Some projects continued to incorrectly count certain items as deferred maintenance. Despite training, some projects continued to incorrectly count certain items, such as new construction items and vehicles, as deferred maintenance. For example, the Fort Hall Irrigation Project included the installation of permanent diversion structures for about $2.1 million, the Wapato Irrigation Project included constructing reservoirs for about $640,000, and the San Carlos Indian Works Irrigation Project included building a new office for about $286,000. In addition, some projects included the cost of repairing vehicles or buying new ones in their deferred maintenance estimates, despite BIA’s new guidance that such items are not deferred maintenance. According to BIA officials, while projects can consider the weed clearing postponed due to broken vehicles as deferred maintenance, the delayed repair of the vehicle itself is not deferred maintenance. 
For example, the Wind River Irrigation Project included an excavator vehicle for about $500,000, and the Crow Irrigation Project included dump trucks for about $430,000. Some projects provided BIA with incomplete information. According to BIA officials, some projects did not do thorough assessments of their deferred maintenance needs, and some may not be including legitimate deferred maintenance items, such as re-sloping canal banks that have been eroded by crossing cattle or overgrown vegetation. Moreover, both the Walker River and the Uintah Irrigation Projects failed to provide information detailing their deferred maintenance costs, and several projects lumped items together as "other" with little or no explanatory information other than "miscellaneous"—accounting for almost one-third of BIA's total deferred maintenance cost estimate for its irrigation projects (see fig. 5). BIA made errors when compiling the total deferred maintenance cost estimates. For example, BIA inadvertently double-counted the estimate provided by the Colorado River Irrigation Project when compiling the overall cost estimate, according to BIA officials. Additionally, BIA officials erroneously estimated costs for all structures, such as flumes and check gates, based on the full replacement values even when items were in good or fair condition and needed only repairs. These structures account for over one-third of BIA's total deferred maintenance estimate (see fig. 5). While the inclusion of incorrect items and calculation errors likely overestimate BIA's total deferred maintenance costs, the incomplete information provided by some projects may underestimate total costs. To further refine its cost estimate and to develop more comprehensive deferred maintenance lists, BIA plans to hire experts in engineering and irrigation to periodically conduct thorough condition assessments of all 16 irrigation projects to identify deferred maintenance needs and costs. According to BIA officials, these thorough condition assessments are expected to more accurately reflect each project's actual deferred maintenance, in part because experts in engineering and irrigation who can differentiate between structural and cosmetic problems will conduct them. These assessments will also help BIA prioritize the allocation of potential funds to complete deferred maintenance items because they will assign a prioritization rating to each deferred maintenance item based on the estimated repair or replacement cost as well as the overall importance to the project. The first such assessment was completed for the Flathead Irrigation Project in July 2005, and BIA plans to reassess the condition of each project at least once every 5 years, with the first round of such condition assessments to be completed by the end of 2010. BIA's management of some of its irrigation projects has serious shortcomings that undermine effective decisionmaking about project operations and maintenance. Under BIA's organizational structure, in many cases, officials with the authority to oversee project managers' decisionmaking lack the technical expertise needed to do so effectively, while the staff who do have the expertise lack the necessary authority. In addition, despite federal regulations that require BIA to consult with project stakeholders in setting project priorities, BIA has not consistently provided the information or opportunities necessary for stakeholders—both Indian and non-Indian water users—to participate in decisionmaking about project operations and maintenance.
(See appendix II for detailed information on key management concerns at each of the nine projects we visited.) Under BIA's organizational structure, in many cases, officials with the authority to oversee project managers' decisionmaking lack the expertise needed to do so effectively, while the staff who do have the expertise lack the necessary authority to provide such oversight. BIA regional directors, agency superintendents, and agency deputy superintendents who oversee the projects do not generally have engineering or irrigation expertise, and they rely heavily on the project managers to run the projects. (See fig. 6 for an organizational chart showing the lines of authority for providing oversight of a typical BIA irrigation project.) Of the nine projects we visited, only two had managers at the regional or agency levels who are experts in irrigation or engineering. At the same time, BIA staff with the irrigation and engineering expertise—regional irrigation engineers and central irrigation office staff—have no authority over the 16 projects under BIA's current organizational structure. Consequently, key technical decisions about project operations and maintenance, such as when or how to repair critical water delivery infrastructure, do not necessarily get the technical oversight or scrutiny needed. This organizational structure and reliance on the project managers break down when the person managing the project lacks the expertise required for the position—that is, in cases in which BIA has had difficulty filling project manager vacancies and has, as a result, hired less qualified people or has the agency deputy superintendent temporarily serving in the project manager position. Of the nine projects we visited, four lacked project managers for all or part of the 2005 irrigation season, and only five had project managers who were experts in engineering or irrigation. The GAO Internal Control Management and Evaluation Tool recommends that federal agencies analyze the knowledge and skills needed to perform jobs appropriately and provides guidance on organizational structure and identification of potential risks to the agency in that structure. Specifically, it recommends that adequate mechanisms exist to address risks—such as the risks associated with staff vacancies or hiring less qualified staff. When the project manager is under-qualified and unchecked by managers who heavily rely on his or her decisionmaking, the potential for adverse impacts on the operations and maintenance of an irrigation project increases. For example, at the Crow Irrigation Project in 2002, a project manager with insufficient expertise decided to repair a minor leak in a key water delivery structure by dismantling it and replacing it with a different type of structure. The new structure was subsequently deemed inadequate by BIA's irrigation experts, and the required reconstruction delayed water delivery by about a month. In addition, at the Blackfeet Irrigation Project in 2000, the accidental flooding and subsequent erosion of a farmer's land were inadequately addressed by project and agency management, who decided to use a short-term solution over the objections of the regional irrigation engineer; the engineer lacked the authority to override the technical decision of the project manager and agency superintendent, despite their lack of expertise. At the time of this report, the regional irrigation engineer continues to negotiate the implementation of a long-term and technically sound solution.
Furthermore, BIA lacks protocols to ensure that project managers consult with, or get input from, BIA's technical experts before implementing technically complex decisions about project operations and maintenance, further exacerbating problems and undermining management accountability. For example, in the 2002 incident at the Crow Irrigation Project discussed above, the project manager was not required to consult with, notify, or get approval from either the regional irrigation engineer or central irrigation office staff, despite his lack of expertise and the complexity of the flume replacement project he undertook. According to BIA officials, if the project manager had consulted an engineer, his plan to replace the flume with two small culverts would have been rejected before work began because it was technically insufficient and would not have been completed before the start of the approaching irrigation season. A second serious management shortcoming is the limited extent to which some projects involve water users in decisionmaking. Federal regulations, as well as BIA guidance, call for involving project stakeholders—that is, tribal representatives as well as both Indian and non-Indian water users—in the operations and maintenance of each project. Specifically, federal regulations state that BIA is responsible for consulting with all water users in setting program priorities; BIA's manual requires that BIA provide regular opportunities for project water users to participate in project operations; and BIA's Irrigation Handbook recommends that BIA meet at least twice a year with project water users to discuss project budgets and desired work. Despite such requirements and recommendations, BIA has not consistently provided the opportunities or information necessary for water users to participate in such decisionmaking about project operations and maintenance. The frequency of meetings between BIA and its project water users varied considerably on the nine projects we visited, from rarely (generally zero meetings per year) to periodically (generally more than one meeting per year) to regularly (generally more than three meetings per year), as shown in figure 7. For example, both the Blackfeet and Colorado River Irrigation Projects hold regular meetings with both tribal and individual water users, with meetings held quarterly at the Blackfeet Irrigation Project and monthly at the Colorado River Irrigation Project. In contrast, BIA officials on the Pine River Irrigation Project do not meet with any non-tribal water users, and BIA officials at the Fort Belknap Irrigation Project have held few water user meetings in recent years. There was no meeting with water users at the Fort Belknap Irrigation Project to kick off the 2005 irrigation season because the project manager position was vacant, worsening an already adversarial relationship between water users and BIA, according to water users and a local government official. Also, BIA officials on the Crow Irrigation Project have no regularly scheduled meetings with either the tribe or individual water users and, in fact, failed to send a single representative to the meeting called in 2005 for water users to voice their concerns about project management and operations. In addition to a lack of regular meetings with all project water users, BIA has not consistently shared the type of information about project operations and finances that water users need to meaningfully participate in project decisionmaking.
Although BIA officials at the Colorado River Irrigation Project share information on their budgets with water users and work collaboratively with water users to develop annual work priorities in accordance with BIA's Irrigation Handbook, not all projects we visited provide or solicit this type of information. For example, BIA staff at the Wapato Irrigation Project does not solicit water users' input on project priorities or share information on the project's budget, according to water users we spoke with, and BIA officials at the Crow Irrigation Project do not share this type of critical information. However, some of the projects we visited have recently begun to share information on project spending and involve project water users in developing project priorities, despite not doing so historically. For example, the project management at the Blackfeet Irrigation Project began sharing budget information with its water users during the 2005 season, and the new project management at the Fort Belknap Irrigation Project stated that it plans to involve project water users in setting project priorities in the 2006 season. Moreover, although water users on some projects we visited said that some project managers and their staff are approachable and responsive on an individual basis, others stated that project management on some of BIA's irrigation projects was generally inaccessible and nonresponsive. For example, BIA officials acknowledged that a former project manager at the Blackfeet Irrigation Project told water users to sue BIA to get information on project decisionmaking. In addition, some expressed concerns that BIA is less responsive to non-Indians because BIA's mission does not specifically include non-Indians. Consequently, some non-Indian water users have opted to go directly to their congressional representatives to raise their concerns. For example, non-Indian water users at the Wapato Irrigation Project have sought congressional intervention on several occasions to help compel BIA staff to disclose information about project finances, such as information related to proposed operations and maintenance fee debts and data on project land not being billed for operations and maintenance. In addition, Senator Conrad Burns and Congressman Dennis Rehberg of Montana co-sponsored a town hall meeting in 2003 to provide local water users an opportunity to voice project concerns to BIA officials. Requests by non-Indian water users for project management and regional staff to address the lack of water delivery at the Crow Irrigation Project during the month of August 2005 went largely unanswered by BIA, resulting in congressional intervention. Such lack of access and communication about project operations limits the ability of water users to have an impact on project decisions as well as the ability of BIA to benefit from this input.
Finally, it might be more appropriate for other entities, including other federal agencies, tribes, and water users, to manage some or all of the projects. BIA does not know the extent to which Indian irrigation projects are capable of sustaining themselves. Reclamation law and associated policy require the Department of the Interior's Bureau of Reclamation to test the financial feasibility of proposed projects by comparing estimated reimbursable project costs with anticipated revenues. The Bureau of Reclamation then uses these reimbursable cost estimates to negotiate repayment contracts with water users, where appropriate. In contrast, Indian irrigation projects were authorized to support Indian populations residing on reservations without regard to whether the projects could be financially self-sustaining. As a result, neither the Congress nor project stakeholders have any assurance that these projects can sustain themselves. For example, a comprehensive 1930 study of BIA's irrigation program concluded that the Blackfeet and Fort Peck Irrigation Projects should be abandoned. Specifically, the report noted, “[A]fter a very careful study of all the available data relating to these projects, including a field examination, we are firmly convinced that any further attempts to rehabilitate and to operate and maintain these projects … can result only in increasing the loss that must be accepted and sustained by the Government. Adequate preliminary investigations and studies to which every proposed project should be subjected, in our opinion, would have condemned … these … projects as unfeasible.” Despite this lack of information on the overall financial situation for each of the projects, in the early 1960s BIA classified more than half of its 16 projects as fully self-supporting, on the basis of annual operations and maintenance fees they collected from water users. These self-supporting projects do not receive any ongoing appropriated funds. These projects are subject to full cost recovery despite the absence of financial information to demonstrate that the water users could sustain this financial burden. The Blackfeet and Fort Peck Irrigation Projects were two of the projects classified as fully self-supporting. While the specific financial situations for the Blackfeet and Fort Peck Irrigation Projects have likely changed since the 1920s, BIA does not know if these projects, or any of the other Indian irrigation projects, are financially self-supporting. The heavy reliance on water users to sustain these projects has created ongoing tension between the water users and BIA. Some water users have complained to BIA that they cannot afford the operations and maintenance fees, and they pressure BIA to keep the fees as low as possible. The Bureau of Reclamation recently conducted a study of the Pine River Irrigation Project and concluded that some of the water users could not conduct a profitable farming operation with the 2005 operations and maintenance fee of $8.50 per acre. BIA has not responded to the Bureau of Reclamation study, and in October 2005 BIA proposed doubling the rate to $17.00 per acre for the 2006 irrigation season even though water users claim that they cannot afford to pay a higher fee. The operations and maintenance fee has been set at $8.50 at the Pine River Irrigation Project since 1992 and, according to BIA officials, the collections do not provide adequate funds to properly operate and maintain the project.
As a result, BIA estimates that the deferred maintenance at the project has grown to over $20 million. Without definitive information on the financial situation of each project, BIA cannot determine what portion of project operations and maintenance costs can be reasonably borne by the water users and to what extent alternative sources of financing, such as congressional appropriations, should be pursued. Despite the estimated $850 million in deferred maintenance and the degree to which it impedes ongoing operations and maintenance at BIA’s irrigation projects, BIA currently has no plan for funding the list of deferred maintenance items. Funding deferred maintenance costs in the hundreds of millions of dollars will be a significant challenge in times of tight budgets and competing priorities. Nonetheless, officials stated that the agency has made little effort to identify options for funding the deferred maintenance. BIA acknowledges that income from ongoing operations and maintenance fees would likely be inadequate to cover the deferred maintenance, yet the agency has done little to identify alternative means of funding. According to officials, BIA has not asked the Congress for supplemental funding to cover the deferred maintenance. For example, water users report that the $7.5 million appropriated for BIA’s irrigation projects for fiscal year 2006 resulted from lobbying by concerned water users, not from BIA’s efforts. To date, BIA has primarily focused on developing and refining an accurate estimate of the cost to fix the deferred maintenance items. While developing an estimate of the projected cost is important, BIA officials believe that the agency also needs to develop a plan for ultimately funding the deferred maintenance. Developing a plan for funding the deferred maintenance is complicated by competing priorities and a crisis-oriented management style that complicates preventative maintenance, according to BIA officials. The current state of disrepair of most of the irrigation projects results in frequent emergency situations concerning project operations and maintenance. As a result, BIA irrigation staff spends a significant amount of its time addressing emergency maintenance situations, to the detriment of other maintenance needs that are essential to sustaining the projects over the long term. As a result of this “crisis-style” management, BIA has limited time to devote to non-emergency issues such as the list of deferred maintenance items. Furthermore, this “crisis-style” management prevents BIA from devoting adequate time to preventative maintenance. For example, irrigation staff at Wind River Irrigation Project stated that making “band-aid” emergency repairs on a regular basis prevents them from addressing long-standing deferred maintenance needs, as well as from conducting strategic improvements that would help sustain the project over the long term. It may be beneficial to consider whether other groups for whom irrigation is a priority or an area of expertise could better manage some of the irrigation projects, including other federal agencies, Indian tribes, and water users. BIA must balance its irrigation management responsibilities with its many other missions in support of Indian communities. As the federal agency charged with supporting Indian communities in the United States, BIA’s responsibility is to administer and manage land and natural resources held in trust for Indians by the U.S. government. 
Administration and management of these trust lands and resources involves a wide variety of responsibilities, including law enforcement, social services, economic development, education, and natural resource management. Given the multitude of responsibilities that BIA must balance, there are inherent limits on the resources and knowledge that BIA is able to devote to any one program. As a result of these limitations and competing demands, officials report that irrigation management is not a priority for BIA. The fact that many water users on the irrigation projects are now non-Indian may further encourage BIA to prioritize and devote more resources to other programs before irrigation management. Successful management of the irrigation projects by other groups would depend on the unique characteristics of each project and its water users. Potential groups that may be able to assume management for some irrigation projects or portions of some irrigation projects include the following:
- The Bureau of Reclamation. As the federal agency charged with managing water in the western United States, the Bureau of Reclamation has extensive technical experience in managing irrigation projects and has served in a technical or advisory capacity to BIA's irrigation staff. Furthermore, efforts have been made in the past to turn over some BIA irrigation projects to the Bureau of Reclamation, and the Fort Yuma Irrigation Project is currently operated by the Bureau of Reclamation. In addition, the Bureau of Reclamation uses management practices for its irrigation projects that maximize information sharing and collaboration with water users. For example, in contrast to BIA, the Bureau of Reclamation delegates responsibility for much of the day-to-day operations and maintenance on its irrigation projects to irrigation districts, which are organized groups of water users.
- Indian Tribes. Officials report that some of the tribes have staff with extensive knowledge of irrigation and water management, as well as technical training. Some tribes stated that they have a vested interest in seeing their respective projects succeed, and they would like to assume direct responsibility for their reservation's irrigation project, assuming the deferred maintenance items are fixed before the turnover occurs. Turning over some of the BIA projects to Indian tribes would be an option where tribes have the management and technical capability to assume responsibility for an irrigation project.
- Water Users. Water users have extensive familiarity with the day-to-day management of the projects and in some cases already handle many day-to-day operations and maintenance activities. For example, the Crowheart Water Users Association, a group of water users at the Wind River Irrigation Project, has successfully assumed responsibility for most of the maintenance needs on its portion of the project. In exchange for these efforts, BIA refunds to the Crowheart Water Users Association 50 percent of its annual operation and maintenance fees. Through this arrangement, the Crowheart Water Users Association believes it has been able to more effectively address maintenance needs and increase project efficiency. Turning over some of the BIA projects to water users would be an option where water users share similar interests and have positive working relationships, as well as the desire to organize an irrigation district or association.
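The per-acre assessments and the Crowheart refund arrangement just described involve simple arithmetic: annual collections scale with assessed acreage and the per-acre fee, and a refund arrangement returns a fixed share of those collections. The sketch below is a minimal illustration only, not a BIA formula; the acreage figure is hypothetical, while the $8.50 and $17.00 per-acre rates come from the Pine River discussion above and the 50 percent refund share from the Crowheart arrangement.

```python
# Illustrative arithmetic only; the acreage figure is hypothetical, not BIA data.

def annual_collections(assessed_acres: float, fee_per_acre: float) -> float:
    """Expected annual operations and maintenance collections at a given per-acre fee."""
    return assessed_acres * fee_per_acre

def refund_to_association(collections: float, refund_share: float = 0.50) -> float:
    """Share of collections refunded to a water users association that performs its own maintenance."""
    return collections * refund_share

acres = 10_000  # hypothetical assessed acreage
for rate in (8.50, 17.00):  # current and proposed Pine River per-acre fees cited above
    print(f"${rate:.2f}/acre -> annual collections of ${annual_collections(acres, rate):,.0f}")

# Under a Crowheart-style arrangement, half of what those acres pay would flow back
# to the association in exchange for handling routine maintenance.
base = annual_collections(acres, 8.50)
print(f"50 percent refund on ${base:,.0f}: ${refund_to_association(base):,.0f}")
```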
Any successful alternative management option would have to consider the sometimes disparate interests and priorities among water users. In some cases, a combination of the various alternative management options may be beneficial and feasible. This type of arrangement is currently being considered for the Flathead Irrigation Project, where BIA is in the process of turning over the operation and management of the project to a collaborative management group that may include the tribe, individual Indian water users, and non-Indian water users. However, regardless of the alternative management option, water users and tribal officials repeatedly stated that they would not be willing or able to take over project operations and maintenance unless the deferred maintenance had already been addressed or adequate funding was available to address the deferred maintenance needs. Since BIA historically has not had adequate funds to operate and maintain the projects, the projects are in a serious state of disrepair. BIA is in the process of implementing its plan to develop an accurate list and estimate of the deferred maintenance needs for each project. However, some of the projects also have day-to-day management shortcomings regarding technical support and stakeholder involvement that need to be addressed. BIA's decentralized organizational structure, combined with the difficulty of attracting and retaining highly qualified project managers at remote Indian reservations, has led to some poor decisionmaking at some of the projects. It is critically important that project managers, especially those with less than desirable qualifications, have the necessary level of technical support to prevent poor decisions from being made in the future. A lack of adequate stakeholder involvement at some projects has also seriously undermined project accountability. Unlike most other BIA programs, the operations and maintenance of the irrigation projects are funded almost entirely by the project beneficiaries—the water users, many of whom are non-Indian. Consequently, BIA is accountable to these water users, who expect to have an active voice in project operations and maintenance. Some projects have not fulfilled their obligations to regularly meet with project stakeholders, creating an adversarial environment in which BIA and project water users do not trust each other. This failure to involve stakeholders in the management of their own projects means that BIA does not benefit from water user expertise and has resulted in widespread feelings that BIA is non-responsive and evasive, alienating many water users who feel disenfranchised. Moreover, this failure has limited the ability of stakeholders to hold BIA accountable for its decisions and actions. In addition to some shortcomings with BIA's ongoing day-to-day management of some of the projects, we also found that information on the financial sustainability of the projects is needed to help address the long-term direction of BIA's irrigation program. BIA's 16 irrigation projects were generally built in the late 1800s and early 1900s to further the federal government's Indian policy of assimilation. The government made the decision to build these projects to support and encourage Indians to become farmers. This decision was generally not based on a thorough analysis designed to ensure that only cost-effective projects were built.
As a result, the financial sustainability of some of the projects has always been questionable, ultimately creating tension between BIA and its water users. BIA is under constant pressure to raise annual operations and maintenance fees to collect adequate funds to maintain the projects, while many water users contend that they do not have the ability to pay higher fees. Without a clear understanding of the financial sustainability of the projects, BIA does not know whether it is practical to raise operation and maintenance fees, or whether alternative sources of financing should be pursued. Information on financial sustainability and accurate deferred maintenance information are both critical to a debate on the long-term direction of BIA's irrigation program. Once this information is available, the Congress and interested parties will be able to address how the deferred maintenance will be funded and whether entities other than BIA could more appropriately manage some or all of the projects. We recommend that the Secretary of the Interior take the following three actions. To improve the ongoing management of the projects in the short term, we recommend that the Secretary direct the Assistant Secretary for Indian Affairs to
- provide the necessary level of technical support for project managers who have less than the desired level of engineering qualifications by putting these projects under the direct supervision of regional or central irrigation office staff or by implementing more stringent protocols for engineer review and approval of actions taken at the projects; and
- require, at a minimum, that irrigation project management meet twice annually with all project stakeholders—once at the end of a season and once before the next season—to provide information on project operations, including budget plans and actual annual expenditures, and to obtain feedback and input.
To obtain information on the long-term financial sustainability of each of the projects, we recommend that the Secretary direct the Assistant Secretary for Indian Affairs to conduct studies to determine both how much it would cost to financially sustain each project and the extent to which water users on each project have the ability to pay these costs. This information will be useful to congressional decisionmakers and other interested parties in debating the long-term direction of BIA's irrigation program. We provided the Department of the Interior with a draft of this report for review and comment. However, no comments were provided in time to be included as part of this report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior and the Assistant Secretary for Indian Affairs, as well as to appropriate congressional committees and other interested Members of Congress. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Key contributors to this report are listed in appendix III.
We were asked to address several issues concerning the Department of the Interior's Bureau of Indian Affairs' (BIA) management of its 16 irrigation projects. Specifically, we were asked to examine (1) BIA's estimated deferred maintenance cost for its 16 irrigation projects; (2) what shortcomings, if any, exist in BIA's current management of its irrigation projects; and (3) any issues that need to be addressed to determine the long-term direction of BIA's irrigation program. For all three objectives, we collected documentation on BIA's 16 irrigation projects from officials in each of BIA's central Irrigation, Power, and Safety of Dams offices (central irrigation offices) located in Washington, D.C., and other locations in the western United States. We also visited and collected information from each of BIA's four regional offices that oversee the 16 irrigation projects, including the Rocky Mountain, Northwest, Western, and Southwest regions. In addition, we visited 9 of the 16 projects located across all 4 regions. Specifically, we visited (1) the Blackfeet Irrigation Project, (2) the Colorado River Irrigation Project, (3) the Crow Irrigation Project, (4) the Fort Belknap Irrigation Project, (5) the Pine River Irrigation Project, (6) the San Carlos Indian Works Irrigation Project, (7) the San Carlos Joint Works Irrigation Project, (8) the Wapato Irrigation Project, and (9) the Wind River Irrigation Project. We selected these projects based on a combination of factors aimed at maximizing our total coverage (over 50 percent of the projects), visiting at least one project in each of the regions where irrigation projects are located, visiting the project with the highest deferred maintenance cost estimate in each region using BIA's fiscal year 2004 data, and visiting what BIA considered to be the three best projects and the five worst projects. During the site visits, we collected project-specific information from BIA officials and project stakeholders, including tribes and water users. We also met with and collected documentation from the Department of the Interior's Bureau of Reclamation, the federal agency charged with managing water in the western United States, for comparative purposes. To examine BIA's estimated deferred maintenance cost for its 16 irrigation projects, we toured each of the 9 projects we visited to see examples of deferred maintenance and its impact, and we reviewed BIA's lists of deferred maintenance items and associated cost estimates for both fiscal years 2004 and 2005. We also reviewed the methodology BIA used to develop these lists and estimates and interviewed BIA staff involved in developing these lists and estimates to identify major deficiencies. Although we analyzed the cost estimates provided by BIA, we did not develop our own estimate of deferred maintenance. To assess the reliability of data we received from BIA on deferred maintenance, we interviewed officials most knowledgeable about the collection and management of these data. We reviewed the relevant controls and found them adequate. We also conducted tests of the reliability of the computerized data. On the basis of these interviews, tests, and reviews, we concluded that BIA's estimates of deferred maintenance were sufficiently reliable for the purposes of this report. 
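The report does not spell out the electronic tests GAO applied to BIA's computerized deferred maintenance data, but the minimal sketch below illustrates the kind of basic reliability checks such testing typically involves, such as scanning for missing fields, duplicate items, and implausible cost values. The file name and field names (project, item_id, item_description, estimated_cost) are hypothetical assumptions for illustration only, not BIA's actual data layout.

    import csv

    def check_records(path):
        """Flag rows with missing key fields, duplicate item keys, or invalid cost estimates."""
        missing, duplicates, bad_costs = [], [], []
        seen = set()
        with open(path, newline="") as f:
            for line_no, row in enumerate(csv.DictReader(f), start=2):  # header is line 1
                key = (row.get("project"), row.get("item_id"))
                if not row.get("project") or not row.get("item_description"):
                    missing.append(line_no)
                if key in seen:
                    duplicates.append(line_no)
                seen.add(key)
                try:
                    if float(row.get("estimated_cost", "")) <= 0:
                        bad_costs.append(line_no)
                except ValueError:
                    bad_costs.append(line_no)
        return {"missing_fields": missing, "duplicate_items": duplicates, "invalid_costs": bad_costs}

    # Hypothetical usage:
    # print(check_records("deferred_maintenance_fy2005.csv"))

Checks of this sort only flag records for follow-up; they do not correct the data, which is consistent with pairing electronic testing with interviews of knowledgeable agency officials.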
To examine what shortcomings, if any, exist in BIA's current management of its irrigation projects, we reviewed relevant federal regulations and agency guidance, and analyzed BIA-wide and project-specific management protocols and systems for the nine projects we visited. We also reviewed general guidance on internal control standards, including risk assessment, monitoring, and information and communication. We interviewed BIA officials from the central irrigation offices in Washington, D.C., Colorado, Oregon, Arizona, and Montana. We also interviewed BIA regional officials as well as agency and project officials associated with each of the 9 projects we visited for information on key shortcomings in BIA's management of its irrigation projects. In addition, we interviewed a variety of project stakeholders—including tribal representatives, individual Indian water users, and non-Indian water users—at each of the 9 projects we visited for information on key shortcomings in BIA's management. Finally, to examine any issues that need to be addressed to determine the long-term direction of BIA's irrigation program, we reviewed previous studies highlighting key issues affecting the future of BIA's irrigation program. These included studies conducted by GAO, the Department of the Interior's Office of Inspector General, and the Bureau of Reclamation, as well as other studies conducted at the request of the Congress. We also reviewed relevant federal regulations and agency guidance, as well as historical information relevant to BIA's management of the irrigation program, including budget information and agency memos. We also interviewed BIA officials from the central irrigation office, regional offices, and the 9 projects we visited for information on the key challenges affecting the long-term direction of the program. In addition, we interviewed project stakeholders—including tribal representatives and water users—at the 9 projects we visited for information on the key issues affecting the future direction of BIA's irrigation program. We performed our work between March 2005 and February 2006 in accordance with generally accepted government auditing standards. This appendix contains brief profiles of the nine irrigation projects we visited. Each project profile begins with a short overview of basic facts about the project, followed by a set of bullet points describing the key operations and maintenance concerns and the key management concerns expressed to us by BIA officials, tribal officials, or water users during our site visits. The Blackfeet Irrigation Project was authorized for construction in 1907, but construction was never completed. It consists of 38,300 acres being assessed operations and maintenance fees (and 113,100 acres authorized for irrigation). The project is located in Browning, Montana, on the Blackfeet Indian Reservation of Montana, home of the Blackfeet Tribe. About 60 percent of the project's land is owned by either the tribe or individual tribal members, and about 40 percent is owned by non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $29,130,222. See figure 8 below for pictures of the Blackfeet Irrigation Project. Fees are insufficient to cover the costs of project operations and maintenance. Weeds and overgrown vegetation are problematic and impair water flow. Deferring maintenance has led to bigger and more costly maintenance problems. Deferring maintenance decreases water efficiency and access to water. 
The project as built cannot meet the increased demand for water. Communication between BIA and the water users could be improved, such as by enhancing transparency, increasing involvement, and meeting separately with the tribe. Lack of training and expertise undermines BIA's management of the project. Inadequate oversight within BIA exacerbates problems associated with the lack of training and expertise. Project staff should report to managers with expertise in irrigation and/or engineering. BIA protocols are too vague, for example, about when project staff should consult with regional or central irrigation office engineers. BIA needs to be able to measure water in order to better manage water deliveries and identify critical problems. Irrigation is a low priority for BIA. The Colorado River Irrigation Project was the first BIA irrigation project built, authorized for construction in 1867, but construction was never completed. It is now considered the best of BIA's 16 revenue-generating irrigation projects due, in part, to its innovative leadership and customer service attitude. The project has adopted a user fee system that assesses water users based on their actual, measured usage and charges additional fees for using more water than their individual allotment. The project is located in Parker, Arizona, on the Colorado River Indian Reservation, home of the Colorado River Indian Tribes. The project, which has a 10-month-long irrigation season, consists of 79,350 assessed acres (and 107,588 acres authorized for irrigation), and is composed entirely of Indian land—land owned by the tribe or its members. BIA currently estimates the project's total deferred maintenance costs to be $134,758,664. See figure 9 for pictures of the Colorado River Irrigation Project. Development leases may no longer be allowed, potentially resulting in irrigable land going unirrigated and costing the tribe and project potential revenues. Replacement of deteriorating irrigation structures is needed. The canal needs new lining due to years of deterioration and, in some cases, poor construction. Clearing moss and pondweed is needed to keep the flow of water from being impaired. New irrigation structures are needed to regulate water flow where ditches converge. Understaffing and high turnover of project system operators adversely impact water deliveries in that there are too few system operators to deliver water in a timely manner. BIA procurement and contracting are time-consuming and costly. The annual project budget may understate actual funding because it does not include possible additional fees. Operations and maintenance fees can only be used to address operations and maintenance on the existing project, rather than to expand the project. The Crow Irrigation Project was authorized for construction in 1890, but construction was never completed. It is one of the oldest of BIA's 16 revenue-generating irrigation projects, with 38,900 acres being assessed operations and maintenance fees (and 46,460 acres authorized for irrigation). The project is located in Crow Agency, Montana, on the Crow Reservation, home of the Crow Tribe of Montana. About 56 percent of the project land is owned by either the tribe or individual tribal members, and about 44 percent is owned by individual non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $54,550,496. See figure 10 for pictures of the Crow Irrigation Project. Fees are insufficient to cover the project's operations as well as maintenance costs. 
Weeds, overgrown vegetation, tree roots, and garbage impair water flow in the canals and ditches. Crumbling or dilapidated irrigation structures impair water delivery. The repair of the Rotten Grass Flume needs further work. Canal erosion causes sinkholes and impairs water flow. Deferred maintenance of certain structures leads to safety concerns, such as when BIA staff must go into the canal to raise or lower broken check gates. The project's recently reassigned project manager was under-qualified, resulting in some decisions that hurt the project and undermined water delivery, such as the Rotten Grass Flume incident. BIA has inadequate oversight of the project manager and his decisions. BIA relies on "crisis-style" management rather than a long-term plan to manage the project. There are allegations that a former project manager inappropriately used fees and was not accountable for financial decisions. Communication between BIA and its water users has broken down. The project may be better managed if BIA turned over the project's management to the water users or the tribe. Irrigation is a low priority for BIA. The Fort Belknap Irrigation Project was authorized for construction in 1895, but construction was never completed. It is one of the smallest of BIA's 16 revenue-generating irrigation projects, with 9,900 acres being assessed operations and maintenance fees (and 13,320 acres authorized for irrigation). The project is located in Harlem, Montana, on the Fort Belknap Reservation, home of the Fort Belknap Indian Community of the Fort Belknap Reservation of Montana. About 92 percent of the land is owned by either the tribe or individual tribal members, and about 8 percent is owned by individual non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $17,535,494. See figure 11 for pictures of the Fort Belknap Irrigation Project. Fees and appropriations are insufficient to cover the project's maintenance needs. Weeds and overgrowth of vegetation impair water flow. Canal erosion caused by cattle crossings impairs water flow. Deteriorated and leaking irrigation structures impair water delivery. Additional equipment is needed to conduct maintenance on the project. Deferred maintenance exacerbates problems of poor farming land and low crop values. Communication between BIA and water users is poor, and relations are tense. Staff turnover and difficulty finding qualified staff are problematic. Some project staff lack adequate expertise and training to manage the project. A lack of transparency and of a water management plan limits BIA's accountability. Some water users want BIA to begin water delivery earlier in the season. The Pine River Irrigation Project is the only one of BIA's 16 revenue-generating irrigation projects located in the Southwest region, with 11,855 acres being assessed operations and maintenance fees. Construction on the project was never completed. The project is located in Ignacio, Colorado, on the Southern Ute Reservation, home to the Southern Ute Indian Tribe of the Southern Ute Reservation, Colorado. About 85 percent of the land is owned by either the tribe or individual tribal members, and about 15 percent is owned by individual non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $20,133,950. See figure 12 for pictures of the Pine River Irrigation Project. Collections from operations and maintenance fees do not provide adequate funds to properly operate and maintain the project. The project's operations and maintenance fees have not been raised since 1992. 
BIA has proposed doubling the fees from $8.50 per acre to $17.00 per acre for the 2006 irrigation season. The project's cash reserves were depleted in 2004. The project has a number of old water delivery contracts, referred to as "carriage contracts," from the 1930s that are at low fixed rates. Under some of the contracts, the water users pay only $1.00 per acre to the project. The practice of subsidizing the project through other BIA programs, such as Natural Resources, Roads Construction, Roads Maintenance, and Realty, was scheduled to end at the end of fiscal year 2005. Alternative sources of funds must be found for the project manager and clerk positions. The project relies on "crisis-style" management only, with no preventive maintenance. Project staff does not formally meet with or provide information to individual water users. A Bureau of Reclamation study in 1999 found that some of the water users could not afford to pay fees of $8.50 per acre to the project and still operate a profitable farming operation. BIA has not responded to the study. The former project manager stated that the BIA irrigation projects should be turned over to the Bureau of Reclamation. The San Carlos Indian Works Irrigation Project was authorized for construction in 1924, but construction was never completed. It is one of the newest of BIA's 16 revenue-generating irrigation projects, with 50,000 acres being assessed operations and maintenance fees (and 50,546 acres authorized for irrigation). The project, also referred to as Pima, is located in Sacaton, Arizona, on the Gila River Indian Reservation, home of the Gila River Indian Community. It is served both by its own infrastructure and by that of the San Carlos Joint Works Irrigation Project. About 99 percent of the project land is owned by either the tribe or individual tribal members, and about 1 percent is owned by individual non-Indians. BIA currently estimates Pima's total deferred maintenance costs to be $62,865,503. See figure 13 for pictures of the San Carlos Indian Works Irrigation Project. Inefficiency in water delivery results in fewer water users being able to receive water, leading to idle acreage in some cases. Clearing tumbleweeds and other vegetation that can clog culverts is a recurring problem and represents a large part of the project's spending on operations and maintenance. Erosion is a continuing problem, in part, because the canal is used for both water deliveries and drainage. BIA staff has a "wish list" of items that would bring the project into top condition, extending beyond the basic deferred maintenance. Project infrastructure may not have the capacity to deliver water to all potential water users. The 2007 turnover to water users is still under way. Insufficient reserve funds mean that project staff may not have enough money to conduct needed maintenance towards the end of the year. Vacancies are a constant problem at the project, leaving too few staff to conduct project maintenance. BIA is too slow to respond to water users' requests for repairs. The San Carlos Joint Works Irrigation Project was authorized for construction in 1924, but construction was never completed. It provides water to non-Indian irrigators as well as to the San Carlos Indian Works Irrigation Project. 
It consists of 100,000 acres being assessed operations and maintenance fees (and 100,546 acres authorized for irrigation), with 50 percent of the land owned by non-Indian irrigators and 50 percent owned by Indian irrigators (in the form of the San Carlos Indian Works Irrigation Project). The project is located in Coolidge, Arizona. BIA currently estimates Coolidge's total deferred maintenance costs to be $5,775,427. See figure 14 for pictures of the San Carlos Joint Works Irrigation Project. Lack of certainty in BIA's ability to deliver requested water to all water users has led some to purchase additional water from outside of the project. Silt removal from irrigation canals and ditches is a recurring problem, leading BIA to purposefully over-excavate the main canal each year in an attempt to catch excess silt that could otherwise clog culverts and impair water delivery. Repair of the China Wash Flume is an expensive undertaking, but the flume's failure could jeopardize water deliveries for much of the project. Removal of weeds to prevent clogged culverts is a recurring problem for the project. The 2007 turnover to water users is under way but not finalized. A lawsuit against BIA's increase in operations and maintenance fees has resulted in some water delivery delays while the lawsuit is pending. Contracting delays within BIA have resulted in postponed project maintenance. Turnover of BIA staff and lack of water user inclusion in project decisionmaking impede effective communication. BIA lacks accountability to water users in terms of how it spends operations and maintenance fees. The Wapato Irrigation Project is one of the oldest and largest of BIA's 16 revenue-generating irrigation projects, with 96,443 acres being assessed operations and maintenance fees (and 145,000 acres authorized for irrigation). It was authorized for construction in 1904, but construction was never completed. The project is located in Yakima, Washington, on the Yakama Reservation, home of the Confederated Tribes and Bands of the Yakama Nation. About 60 percent of the project land is owned by either the tribe or individual tribal members, and about 40 percent is owned by individual non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $183,128,886. See figure 15 for pictures of the Wapato Irrigation Project. Deterioration of the project prevents some water users from receiving water. Lack of regular project maintenance has led many water users to make repairs on their own in order to irrigate crops. Water users claim that project staff performs inadequate or faulty repairs, resulting in wasted operations and maintenance payments or the need for water users to fix the sloppy repairs. Fees are insufficient because (a) rates have been set too low, and (b) the tribe's appeal of BIA's operations and maintenance bills since 2001 has decreased income by at least $2 million annually because the agency will not collect on these bills or issue subsequent bills until the matters raised in the appeal are resolved. Fees are insufficient to cover both maintenance and administrative costs, such as salaries and benefits, leading to suggestions that BIA cover such costs. Understaffing due to inadequate funds and difficulty in finding qualified staff has resulted in too few staff to operate and maintain the project. BIA relies on "crisis-style" management to manage the project, resulting in a lack of planning and preventive maintenance. 
Water users lack a voice in project decisionmaking, resulting in concerns about the limited accountability of project staff to water users. Alleged errors with operations and maintenance billing—such as BIA billing dead landowners and BIA overbilling living landowners—led the tribe and its members to appeal BIA's billing of operations and maintenance fees. Resolution of these appeals is still pending within the agency. BIA will not collect on these bills or issue subsequent bills until the matters raised in the appeal are resolved. The Wind River Irrigation Project was authorized for construction in 1905, but construction was never completed. It is one of BIA's 16 revenue-generating irrigation projects, with 38,300 acres being assessed operations and maintenance fees (and 51,000 acres authorized for irrigation). The project is located in Fort Washakie, Wyoming, on the Wind River Reservation, home of the Arapaho Tribe of the Wind River Reservation and the Shoshone Tribe of the Wind River Reservation. About 67 percent of the project land is owned by either the tribe or individual tribal members, and about 33 percent is owned by individual non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $84,956,546. See figure 16 for pictures of the Wind River Irrigation Project. Weeds and tree roots impair water flow and lead to seepage. Cattle crossings erode canal banks and impair water flow. Deteriorating irrigation infrastructure impairs water delivery. Additional water storage and improved efficiency are needed to meet the demand for water. Deferring maintenance undermines the long-term sustainability of the project. BIA's financial management may limit the ability of project staff to conduct needed maintenance in the short maintenance season. BIA relies on "crisis-style" management and "band-aid" solutions rather than a long-term plan to manage the project. Communication between BIA and water users is poor. Water users are not involved enough in project decisionmaking. Supervision of project staff is insufficient, and BIA is not accountable to water users. Turnover of BIA staff is problematic. Some water users want to manage all or part of the project. In addition to those individuals named above, Jeffery D. Malcolm, Assistant Director; Tama R. Weinberg; Rebecca A. Sandulli; and David A. Noguera made key contributions to this report. Also contributing to the report were Richard P. Johnson, Nancy L. Crothers, Stanley J. Kostyla, Kim M. Raheb, and Jena Y. Sinkfield.
The Department of the Interior's Bureau of Indian Affairs (BIA) manages 16 irrigation projects on Indian reservations in the western United States. These projects, which were generally constructed in the late 1800s and early 1900s, include water storage facilities and delivery structures for agricultural purposes. Serious concerns have arisen about their maintenance and management. GAO was asked to examine (1) BIA's estimated deferred maintenance cost for its 16 irrigation projects, (2) what shortcomings, if any, exist in BIA's current management of its irrigation projects, and (3) any issues that need to be addressed to determine the long-term direction of BIA's irrigation program. BIA estimated the cost of deferred maintenance at its 16 irrigation projects at about $850 million for 2005, although the agency is in the midst of refining this estimate. BIA acknowledges that this estimate is a work in progress, in part, because some projects incorrectly counted new construction items as deferred maintenance. To further refine its estimate, BIA plans to hire engineering and irrigation experts to conduct thorough condition assessments of all 16 irrigation projects to correctly identify deferred maintenance needs and costs. BIA's management of some of its irrigation projects has serious shortcomings that undermine effective decisionmaking about project operations and maintenance. First, under BIA's organizational structure, officials with the authority to oversee irrigation project managers generally lack the technical expertise needed to do so effectively, while the staff who have the expertise lack the necessary authority. Second, despite federal regulations that require BIA to consult with project stakeholders in setting project priorities, BIA has not consistently provided project stakeholders with the necessary information or opportunities to participate in project decisionmaking. The long-term direction of BIA's irrigation program depends on the resolution of several larger issues. Of most importance, BIA does not know to what extent its irrigation projects are capable of financially sustaining themselves, which hinders its ability to address long-standing concerns regarding inadequate funding. Information on financial sustainability and accurate deferred maintenance estimates are two critical pieces of information needed for a debate on the long-term direction of BIA's irrigation program. Once this information is available, the Congress and interested parties will be able to address how the deferred maintenance will be funded and whether entities other than BIA could more appropriately manage some or all of the projects.
DEA establishes quotas for the maximum amount of each basic class of schedule I and II controlled substances—such as amphetamine or morphine—that can be produced each year in the United States. DEA also establishes quotas for individual manufacturers, who must apply to DEA to obtain quotas for specific classes of controlled substances. The CSA and DEA's implementing regulations specify dates by which DEA must propose and establish its quotas. The quotas that DEA establishes each year are required to provide for the estimated medical, research, and industrial needs of the United States. In setting quotas, DEA considers information from many sources, including manufacturers' production histories and anticipated needs from manufacturers' quota applications, as well as past histories of quota granted for each substance from YERS/QMS, DEA's system for tracking and recording quota applications and decisions. Both DEA and FDA have important responsibilities in preventing and responding to shortages of drugs containing controlled substances subject to quotas. In addition to preventing diversion, DEA works to ensure that an adequate and uninterrupted supply of controlled substances is available for legitimate medical and other needs. As part of its mission, FDA works to prevent, alleviate, and resolve drug shortages. The Food and Drug Administration Safety and Innovation Act (FDASIA), enacted in 2012, contains provisions that require DEA and FDA to coordinate their respective efforts during shortages of drugs containing controlled substances subject to quotas. When FDA is notified of a supply disruption of certain drugs that contain controlled substances subject to quotas, FDASIA requires that FDA request that DEA increase quotas applicable to that controlled substance if FDA determines that it is necessary. Similarly, when FDA has determined that a drug subject to quotas is in shortage in the United States, manufacturers may submit quota applications requesting that DEA authorize additional quota for that substance. FDASIA requires that DEA respond to these requests from manufacturers within 30 days. The CSA requires businesses, entities, or individuals that import, export, manufacture, distribute, dispense, conduct research with respect to, or administer controlled substances to register with DEA. As of December 2014, there were over 1.5 million registered distributors, pharmacies, and practitioners; more than 1.4 million of these registrants were practitioners. DEA registrants must comply with a variety of requirements imposed by the CSA and its implementing regulations. For example, a registrant must keep accurate records and maintain inventories of controlled substances, among other requirements, in compliance with applicable federal and state laws. Additionally, all registrants must provide effective controls and procedures to guard against theft and diversion of controlled substances. Examples of some of the specific regulatory requirements for distributors, pharmacists, and practitioners include the following: Distributors: Registrants must design and operate a system to disclose suspicious orders of controlled substances and must inform the DEA field division office in their area when they discover such orders. 
Pharmacists: While the responsibility for proper prescribing and dispensing of controlled substances rests with the prescribing practitioner, the pharmacist who fills the prescription holds a corresponding responsibility for ensuring that the prescription was issued in the usual course of professional treatment for a legitimate purpose. Practitioners: Practitioners are responsible for the proper prescribing and dispensing of controlled substances for legitimate medical uses. A prescription for a controlled substance must be issued for a legitimate medical purpose by an individual practitioner acting in the usual course of that person's professional practice. It is important for registrants to adhere to their responsibilities under the CSA because they play a critical role in the prescription drug supply chain, which is the means through which prescription drugs are ultimately delivered to patients with legitimate medical needs. Although prescription drugs are intended for legitimate medical uses, as shown in figure 1, the prescription drug supply chain may present opportunities for the drugs to be abused and diverted. For example, an individual may visit multiple practitioners posing as a legitimate patient, referred to as a doctor shopper, to obtain prescriptions for drugs for themselves or others. In an example of diversion, criminal enterprises may rob distributors and pharmacies of prescription drugs to sell to others for a profit. Confidential informants provide information and take action at the direction of law enforcement agencies to further investigations. Agencies may rely on confidential informants in situations in which it could be difficult to use an undercover officer. To help ensure appropriate oversight of informants, The Attorney General's Guidelines Regarding the Use of Confidential Informants (the Guidelines) set forth detailed procedures and review mechanisms to ensure that law enforcement agencies exercise their authorities appropriately and with adequate oversight. Adherence to the Guidelines is mandatory for DOJ law enforcement agencies, including DEA. The Guidelines require each DOJ law enforcement agency to develop agency-specific policies regarding the use of informants, and the DOJ Criminal Division is tasked with reviewing these agency-specific policies to ensure that the policies comply with the Guidelines. The Guidelines require that, prior to using a person as an informant, agencies vet informants to assess their suitability for the work and that agents conduct a continuing suitability review for the informant at least annually thereafter. Additionally, the Guidelines permit agencies to authorize informants to engage in activities that would otherwise constitute crimes under federal, state, or local law if someone without such authorization engaged in these same activities. For example, in the appropriate circumstance, an agency could authorize an informant to purchase illegal drugs from someone who is the target of a drug-trafficking investigation. Such conduct is termed "otherwise illegal activity." The Guidelines include certain requirements for authorizing otherwise illegal activity and restrictions on the types of activities an agency can authorize. In our February 2015 report, we found that DEA had not effectively administered the quota process, nor had DEA and FDA established a sufficiently collaborative relationship to address shortages of drugs containing controlled substances subject to quotas. 
Since then, DEA has taken some actions to address the seven recommendations we made in our February 2015 report with respect to the agency's administration of the quota process and efforts to address drug shortages, but DEA has fully implemented only two of the seven recommendations. As we reported in February 2015, DEA had not proposed or established quotas within the time frames required by its regulations for any year from 2001 through 2014. DEA officials attributed this lack of compliance to inadequate staffing and noted that the agency's workload with respect to quotas had increased substantially. Manufacturers who reported quota-related shortages cited late quota decisions as causing or exacerbating shortages of their drugs. We could not confirm whether DEA's lack of timeliness in establishing quotas had caused or exacerbated shortages because of concerns about the reliability of DEA's data, among other things. However, we concluded that by not promptly responding to manufacturers' quota applications, DEA may have hindered manufacturers' ability to manufacture drugs that contain schedule II controlled substances that may help prevent or resolve a shortage. Additionally, our February 2015 report found that DEA had weak internal controls, which jeopardized the agency's ability to effectively manage the quota process. Specifically: DEA did not have adequate controls to ensure the reliability of YERS/QMS, which it used to track manufacturers' quota applications and record its quota decisions. DEA officials described some data checks of YERS/QMS, such as managers verifying that information entered into the system was accurate. However, the agency did not have systematic quality checks to ensure that the data were accurate or that the checks it had in place were sufficient. This lack of systematic data checks was also concerning because we estimated that 44 percent of YERS/QMS records in 2011 and 10 percent in 2012 had errors. DEA officials said that 2011 was the first year manufacturers applied for quotas electronically and that they expected data from 2012 and beyond to be more accurate. DEA lacked critical management information because it did not have performance measures related to setting quotas. In the absence of such performance measures, we concluded that DEA was missing important information for program managers to use when making decisions about program resources, and the agency could not effectively demonstrate program results. DEA did not monitor or analyze YERS/QMS data to assess the performance of the quota process. Absent such analysis, DEA was unable to evaluate its responses to manufacturers' quota applications or to understand the nature of its workload. DEA did not have reasonable assurance that the quotas it set were in accordance with its requirements and could not ensure continuity of its operations, as it did not have protocols, policies, training materials, or other documentation to manage the quota process. Instead, the agency said it relied on its regulations and the CSA to serve as guidance on how to conduct these activities. However, the need for detailed policies, procedures, and practices is particularly important because the process of setting quotas is very complex, requiring staff to weigh data from at least five different sources that may have contradictory information. To address these deficiencies, our February 2015 report recommended that DEA take four actions to ensure it is best positioned to administer the quota process. 
Specifically, we recommended that DEA (1) strengthen its internal controls of YERS/QMS, (2) establish performance measures related to quotas, (3) monitor and analyze YERS/QMS data, and (4) develop internal policies for processing quota applications and setting quotas. In commenting on our report, DEA did not explicitly agree or disagree with these four recommendations. As of June 2016, DEA has taken some actions to address these recommendations. Specifically, in response to our first recommendation, the agency stated that it implemented a series of system-generated flags in YERS/QMS that verify the information manufacturers enter into their quota applications and identify entries made by DEA staff that warrant further review within the agency. Additionally, in October 2015, DEA said that it would compare a random sample of manufacturers’ applications and DEA’s responses in YERS/QMS on a quarterly basis starting in fiscal year 2016. In June 2016, DEA provided the results of its review of 146 YERS/QMS records from March through May 2016, which identified a nearly nonexistent error rate (.01 percent). Because of these actions, we believe that DEA has implemented this recommendation. In response to our second recommendation, DEA stated in October 2015 that it would develop performance standards that outline time frames for when manufacturers should expect DEA to respond to their quota applications, as well as develop web-based training to help manufacturers improve the quality of the information submitted to the agency. However, in June 2016, DEA stated that developing performance measures specific to the quota process would not be feasible because actions affecting quotas are outside of the agency’s control. Instead, DEA focused on training manufacturers about the quota process to improve the accuracy and quality of their quota applications by holding additional trainings in April 2016 and developing web-based training. The agency plans to finish developing the web-based training in fiscal year 2017. Although training is an important step in improving the information being submitted to DEA, it is also important that DEA establish measures to assess its performance in achieving its mission of ensuring an adequate and uninterrupted supply of controlled substances, as it does for its diversion-related mission. As a result, we do not believe DEA’s actions are fully responsive to our recommendation. In response to our third recommendation, DEA stated that it streamlined its process for reviewing manufacturers’ quota applications, which led to a significant reduction in the agency’s response times. For example, DEA said that it is now responding to manufacturers’ quota applications within four weeks. As of June 2016, the agency plans to continue monitoring and analyzing the quality of the YERS/QMS data and DEA’s timeliness in responding to quota applications. We are currently awaiting documentation about DEA’s analysis of YERS/QMS data in relation to the agency’s timeliness in responding to manufacturers’ quota applications and will update the status of this recommendation as applicable. Lastly, in response to our fourth recommendation, in June 2016, DEA said that it established internal policies for the quota process and is in the process of updating its employee training materials for new staff to help ensure that each staff member has the information needed to issue quotas in accordance with the CSA and DEA’s regulations. 
DEA agreed to provide the materials to us when they are completed, and we will assess the status of this recommendation at that time. Our February 2015 report also identified several barriers that may hinder DEA and FDA from effectively coordinating with each other during shortages of drugs containing controlled substances subject to quotas. For example: We found that DEA and FDA sometimes disagreed about what constitutes a shortage because the two agencies defined drug shortages differently. FDA defined a drug shortage as a period of time when the demand or projected demand for the drug within the United States exceeds the supply of the drug. In contrast, DEA officials told us that there is no shortage, from DEA’s perspective, as long as there is quota available to manufacture a given controlled substance, regardless of which particular manufacturers are producing the product and which strengths or formulations are available. We concluded that by not reaching agreement about what constitutes a drug shortage, it was unclear whether the two agencies would be able to successfully coordinate should a shortage of a drug containing a controlled substance subject to a quota occur. We also found that DEA lacked policies, procedures, or other means to coordinate with FDA about shortages of a controlled substance related to quotas. FDA established such policies and procedures in September 2014, but DEA officials said the agency did not plan to establish formal policies and procedures to coordinate the agency’s response to FDA. While FDASIA directs DEA to respond within 30 days to manufacturers that request additional quota pertaining to a shortage of a schedule II drug, the law does not specify how quickly DEA must respond to a request from FDA. A time frame for DEA to respond would be particularly important given that a request from FDA means it has determined that there is a shortage of a life-sustaining drug that an increase in quota is necessary to address. Further, both agencies told us that they were subject to restrictions on exchanging the proprietary information they receive from drug manufacturers, which may be helpful to prevent or address shortages. At the time our report was issued in February 2015, the agencies had been working for more than 2 years to develop an updated memorandum of understanding (MOU) to share such information. To address these barriers to effective coordination, we made three recommendations. First, we recommended that DEA and FDA promptly update the MOU between the two agencies. Second, we recommended that either in the MOU or a separate agreement, DEA and FDA specifically outline what information they will share and the time frames for sharing such information in response to a potential or existing drug shortage. Third, we recommended that DEA expeditiously establish formal policies and procedures to coordinate with FDA, as directed by FDASIA, with respect to expediting shortage-related quota applications. In commenting on a draft of our report, DEA did not explicitly agree or disagree with these three recommendations. The Department of Health and Human Services agreed with the two recommendations we made to FDA. In March 2015, FDA and DEA updated the MOU to establish procedures regarding the exchange of proprietary and other sensitive information between DEA and FDA, which fully addresses one of our three recommendations. According to DEA, the two agencies have shared information under the auspices of the MOU at least six times in fiscal year 2016. 
Although the MOU established procedures for sharing information, it calls for the development of separate plans to specify precisely what information is to be shared, and who it is to be shared with. In October 2015, DEA said that it had met with FDA to determine the specific procedures by which information regarding drug shortages shall be exchanged, and a draft of such a work plan has been circulated between the two agencies for comment. As of June 2016, DEA expects the work plan to be completed no later than December 2016. DEA also noted that the work plan will contain formal policies and procedures to facilitate coordination with FDA, as directed by FDASIA. As a result, the two related recommendations remain open at this time. In June 2015, we reported that DEA provided information to its registrants regarding their roles and responsibilities for preventing abuse and diversion through conferences, training, and other initiatives. We also found that DEA provided additional resources, such as manuals for specific registrant groups and DEA’s Know Your Customer guidance for distributors. However, based on our generalizable survey of four DEA registrant groups, we reported that many registrants were not aware of these resources or they would like additional guidance, information, or communication from DEA to better understand their roles under the CSA. We recommended that DEA take three actions to address registrants’ concerns. DEA has made some progress, but additional actions are needed to fully address our recommendations. In June 2015, we reported that DEA periodically hosted events such as conferences or meetings for various components of its registrant population during which the agency provided information about registrants’ CSA roles and responsibilities for preventing abuse and diversion. We found that DEA was also often a presenter at various conferences at the national, state, or local level, which registrants could attend. We asked distributors whether representatives of their facility attended DEA’s 2013 Distributor Conference, and asked individual pharmacies and chain pharmacy corporate offices whether they or other representatives of their pharmacy (or pharmacy chain) had attended a Pharmacy Diversion Awareness Conference (PDAC). Based on our surveys, we estimated that 27 percent of distributors and 17 percent of individual pharmacies had participated in the DEA-hosted events, while 63 percent (20 of 32) of chain pharmacy corporate offices we surveyed had participated in a PDAC. Of the large percentages of distributors and pharmacies that did not participate in these conferences, many cited lack of awareness as the reason. For example, an estimated 76 percent of individual pharmacies that had not attended a PDAC and 35 percent of distributors that had not attended the 2013 Distributor Conference cited lack of awareness as a reason for not participating. Our June 2015 report also stated that DEA had created various resources, such as guidance manuals and a registration validation tool, which registrants could use to understand or meet their roles and responsibilities under the CSA. However, based on our surveys, we found that many registrants were not using these resources because they were not aware that they existed. For example, DEA had created guidance manuals for pharmacists and practitioners to help them understand how the CSA and its implementing regulations pertain to these registrants’ professions. These documents were available on DEA’s website. 
In 2011, DEA released guidance for distributors containing suggested questions a distributor should ask customers prior to shipping controlled substances (referred to as the Know Your Customer guidance). Additionally, DEA offered a registration validation tool on its website so that registrants, such as distributors and pharmacies, could determine if a pharmacy or practitioner had a valid, current DEA registration. However, our survey results suggested that many registrants were not using these resources that could help them better understand and meet their CSA roles and responsibilities because they were unfamiliar with them. For example, of particular concern were the estimated 53 percent of individual pharmacies that were not aware of either DEA’s Pharmacist’s Manual or the registration validation tool, and the 70 percent of practitioners that were not aware of DEA’s Practitioner’s Manual, and were therefore not using these resources. The lack of awareness among registrants of DEA resources and conferences suggested that DEA may not have an adequate means of communicating with its registrant populations. Further, with so many registrants unaware of DEA’s conferences and resources, we reported that DEA lacked assurance that registrants had sufficient information to understand and meet their CSA responsibilities. Therefore, we recommended that DEA identify and implement means of cost-effective, regular communication with distributor, pharmacy, and practitioner registrants, such as through listservs or web-based training. DEA agreed that communication from DEA to the registrant population was necessary and vital. As of April 2016, DEA reported that it had taken steps towards addressing this recommendation. In particular, DEA reported that it was in the process of developing web-based training modules for all of its registrant population, and was considering the best way to implement a listserv to disseminate information to its various registrant types. We plan to continue to monitor the agency’s efforts in this area, and this recommendation remains open. As we reported in June 2015, some responses to our registrant survey indicated that additional guidance for distributors regarding suspicious orders monitoring and reporting, as well as more regular communication, would be beneficial. In response to an open-ended question in our survey about how DEA could improve its Know Your Customer document, the guidance document DEA has provided to distributors, half of distributors (28 of 55) that offered comments said that they wanted more guidance from DEA. Additionally, just over one-third of distributors (28 of 77) reported that DEA’s Know Your Customer document was slightly or not at all helpful. Furthermore, in response to an open-ended question about what additional interactions they would find helpful to have with DEA, more than half of the distributors that offered comments (36 of 55) said that they needed more communication or information from, or interactions with, DEA. Some of the specific comments noted that distributors would like more proactive communication from DEA that was collaborative in nature, rather than being solely violation- or enforcement-oriented. Some of the additional communication and interactions proposed by distributors included quarterly meetings with the local field office and more training or conferences related to their regulatory roles and responsibilities. 
Also, while DEA had created guidance manuals for pharmacists and practitioners, the agency had not developed a guidance manual or comparable document for distributors. DEA officials told us that they believed the information in agency regulations was sufficient for distributors to understand their CSA responsibilities for suspicious orders monitoring and reporting. DEA officials also said that they met routinely with distributors, that distributors had fewer requirements compared with other registrant types, and that they did not believe such guidance was necessary. Additionally, DEA officials said that while distributors wanted specific instructions on how to avoid enforcement actions, DEA could not provide such instructions because circumstances that lead to enforcement actions (e.g., individual business practices) vary. However, as we stated in our June 2015 report, a guidance document for distributors similar to the one offered for pharmacies and practitioners could help distributors further understand and meet their roles and responsibilities under the CSA for preventing diversion, though the document may not need to be as detailed. Specifically, we concluded that although DEA may not be able to provide guidance that will definitively answer the question of what constitutes a suspicious order or offer advice about which customers to ship to, DEA could, for example, provide guidance around best practices in developing suspicious orders monitoring systems. DEA could also enhance its proactive communication with distributors—which could be done, for example, via electronic means if additional in-person outreach would be cost-prohibitive. Such steps are key to addressing distributors' concerns, because without sufficient guidance and communication from DEA, distributors may not fully understand or meet their roles and responsibilities under the CSA for preventing diversion. Additionally, in the absence of clear guidance from DEA, our survey data showed that many distributors were setting thresholds on the amount of certain controlled substances that can be ordered by their customers (i.e., pharmacies and practitioners), which could negatively impact pharmacies and ultimately patients' access. For example, we estimated that 62 percent of individual pharmacies did business with distributors that put thresholds on the quantity of controlled substances they could order, and we estimated that 25 percent of individual pharmacies had orders cancelled or suspended by distributors. Responses to our surveys also showed that some pharmacies wanted updated or clearer guidance, as well as more communication and information, from DEA. For example, we found that DEA's Pharmacist's Manual was last updated in 2010, and since that time DEA had levied large civil fines against some pharmacies. Some pharmacy associations reported that these fines had caused confusion in the industry about pharmacists' CSA roles and responsibilities. In their responses to an open-ended question in our survey about DEA's Pharmacist's Manual, some chain pharmacy corporate offices (7 of 18) said that the manual needed updates or more detail, some chain pharmacy corporate offices (5 of 18) reported other concerns with the manual, and some individual pharmacies (13 of 33) said that the manual needed improvement, such as more specifics. 
For example, several chain pharmacy corporate offices commented that the manual needed to be updated to reflect changes in DEA enforcement practices or regulations (e.g., the rescheduling of hydrocodone from a schedule III to a schedule II drug). The need for clearer guidance for pharmacists was also suggested by some chain pharmacy corporate offices' responses to a question about DEA field office consistency. Specifically, when asked how consistent the responses of staff in different field offices had been to their inquiries about pharmacists' roles and responsibilities, nearly half of chain pharmacy corporate offices (8 of 19) that had contact with multiple DEA field offices said that staff responses were slightly or not at all consistent. In an open-ended response to this question, one chain pharmacy corporate office noted that in its interactions with different DEA field offices throughout the country it had received widely varying interpretations of DEA requirements that affected the chain's day-to-day operations, such as requirements for theft/loss reporting of controlled substances and requirements for prescribers to be reported when the prescriber fails to provide a written prescription. These responses from chain pharmacy corporate offices about field office inconsistencies suggested that the existing pharmacy guidance may not be clear even to some DEA field office officials. Additionally, the desire for more or clearer guidance and more communication from DEA was a common theme in the responses offered by both individual pharmacies and chain pharmacy corporate offices to the open-ended questions in our survey related to DEA interactions. For example, in response to an open-ended question about what additional interactions they would find helpful to have with DEA headquarters or field office staff, nearly all of the chain pharmacy corporate offices that offered comments (15 of 18) said that they wanted more guidance or clearer interpretation of the guidance from DEA, more communication with DEA, or a more proactive, collaborative relationship with DEA. In addition, nearly a third of individual pharmacies (18 of 60) that offered open-ended answers to a question about any new guidance, resources, or tools that DEA should provide to help them understand their roles and responsibilities said that they would like more proactive communication from DEA through methods such as a newsletter or e-mail blast. To help address the concerns raised by some distributor and pharmacy registrants, we recommended that DEA solicit input from distributors, or associations representing distributors, and develop additional guidance for distributors regarding their roles and responsibilities for suspicious orders monitoring and reporting. We also recommended that DEA solicit input from pharmacists, or associations representing pharmacists, about updates and additions needed to existing guidance for pharmacists, and revise or issue guidance accordingly. In commenting on our report, DEA raised concerns about the recommendation to solicit input from distributors and stated that short of providing arbitrary thresholds to distributors, it cannot provide more specific suspicious orders guidance because the variables that indicate a suspicious order differ among distributors and their customers. 
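As a purely illustrative aside, the minimal sketch below shows the kind of fixed quantity-threshold check that, according to our survey results, many distributors fall back on in the absence of more specific guidance. The drug names, monthly limits, and record layout are hypothetical assumptions, not DEA or industry figures.

    from collections import defaultdict

    # Hypothetical per-customer monthly limits, in dosage units (illustrative only).
    MONTHLY_LIMITS = {"oxycodone": 20000, "hydrocodone": 30000}

    def flag_orders(orders):
        """orders: iterable of (customer_id, drug, quantity) for one month; returns orders over the limit."""
        running_totals = defaultdict(int)
        flagged = []
        for customer, drug, quantity in orders:
            running_totals[(customer, drug)] += quantity
            limit = MONTHLY_LIMITS.get(drug)
            if limit is not None and running_totals[(customer, drug)] > limit:
                flagged.append((customer, drug, running_totals[(customer, drug)]))
        return flagged

    # Hypothetical usage:
    # flag_orders([("pharmacy-1", "oxycodone", 15000), ("pharmacy-1", "oxycodone", 8000)])

A one-size-fits-all cap of this sort illustrates why such thresholds can cancel or suspend legitimate pharmacy orders; best-practices guidance of the kind we recommended would presumably point distributors toward richer signals, such as order frequency or deviation from a customer's own ordering history, rather than blunt quantity limits alone.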
In April 2016, DEA provided information about ongoing efforts to educate distributors about their roles and responsibilities for monitoring and reporting suspicious orders, such as its Distributors' Conferences, and noted that it plans to host yearly training for distributors. However, DEA did not mention any plans to develop and distribute additional guidance for distributors. We continue to believe that a guidance document similar to the one offered for pharmacies and practitioners could help distributors further understand and meet their roles and responsibilities under the CSA. Specifically, although DEA may not be able to provide guidance that will definitively answer the question of what constitutes a suspicious order or offer advice about which customers to ship to, DEA could, for example, provide guidance on best practices in developing suspicious orders monitoring systems. In the absence of clear guidance from DEA, our survey data show that many distributors are setting thresholds on the amount of certain controlled substances that can be ordered by their customers (i.e., pharmacies and practitioners), which can negatively affect pharmacies and, ultimately, patients' access. We plan to continue to monitor the agency's efforts in this area, and this recommendation remains open. With respect to our recommendation that DEA solicit input from pharmacists, in commenting on our report, DEA described actions it would take to partially address the recommendation, including updating the Pharmacist's Manual to reflect two subject matter area changes related to the rescheduling of hydrocodone and new drug disposal regulations. However, at that time, DEA did not comment about providing any additional guidance to pharmacists related to their roles and responsibilities in preventing abuse and diversion under the CSA. In April 2016, DEA reported that it continues to work with the National Association of Boards of Pharmacy regarding issues raised during stakeholder discussions, which resulted in a March 2015 consensus document published by stakeholders entitled "Stakeholders' Challenges and Red Flag Warning Signs Related to Prescribing and Dispensing Controlled Substances." DEA also described other ways in which the agency works with pharmacists or associations representing pharmacists, such as during regional one-day Pharmacy Diversion Awareness Conferences, and noted that it was still working to update the Pharmacist's Manual regarding changes related to the rescheduling of hydrocodone and new drug disposal regulations. DEA also commented that it would continue to update or issue guidance as warranted, but, again, did not indicate that it had updated, or planned to update, existing guidance to pharmacists related to their roles and responsibilities in preventing abuse and diversion under the CSA. We plan to continue to monitor the agency's efforts in this area, as well, and consequently this recommendation remains open. In September 2015, we reported that DEA's confidential informants policy required agents to consider most of the factors identified in the Attorney General's Guidelines for conducting initial suitability reviews prior to using a person as an informant. Furthermore, in accordance with the Guidelines, DEA's policy required that a continuing suitability review be conducted at least annually.
However, we determined that DEA's policy was either partially consistent with or did not address some provisions in the Guidelines regarding oversight of informants' authorized illegal activities. We recommended that DEA update its policy and corresponding monitoring processes to address these provisions from the Guidelines. As of June 2016, DEA had made progress, but had not fully implemented our recommendation. In September 2015, we reported that DEA's policy was partially consistent with the Guidelines' requirements to provide written instructions to an informant regarding the parameters of the authorized otherwise illegal activity and to have the informant sign an acknowledgment of these instructions. Additionally, regarding the Guidelines' provisions on the suspension or revocation of authorization for an informant to engage in otherwise illegal activity, DEA's policy was consistent with the provision for revoking authorization in cases where DEA has reason to believe that an informant is not in compliance with the authorization. However, DEA's policy did not address circumstances unrelated to the informant's conduct in which DEA may, for legitimate reasons, be unable to comply with precautionary measures necessary for overseeing otherwise illegal activity. At the time of our review, DEA officials told us that they did not authorize informants to participate in otherwise illegal activity without agent supervision, and, therefore, these officials believed this requirement would not be applicable to DEA. However, we found that DEA's policy did not explicitly state that direct supervision by an agent is required for all instances of an informant's participation in otherwise illegal activity. Additionally, regardless of the circumstances for suspending or revoking an authorization for otherwise illegal activity, DEA's policy did not require the informant to sign a written acknowledgment that the authorization had been suspended or revoked. As a result, we recommended that DEA, with assistance and oversight from the DOJ Criminal Division, update its policy and corresponding monitoring procedures to explicitly address the Guidelines' provisions on oversight of informants' illegal activities. DOJ concurred with this recommendation and has coordinated with DEA on updating the agency's policy. According to an April 2016 memo, the Criminal Division has reviewed a revised version of DEA's agents manual, which contains DEA's policies and practices regarding confidential informants, and determined that the revised manual is fully consistent with the Guidelines. Based on follow-up discussions with DOJ, as of June 2016, DEA's Office of the Chief Counsel was preparing the language needed to incorporate the new policy and expected to complete this process in summer 2016. At that time, we plan to review the updated policy to determine whether DEA has fully implemented our recommendation. Chairman Grassley, Ranking Member Leahy, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For questions about this statement, please contact Diana C. Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Individuals who made key contributions to this statement include Kristy Love (Assistant Director), Karen Doran, Alana Finley, Sally Gilley, Rebecca Hendrickson, Lisa Lusk, Geri Redican-Bigott, Christina Ritchie, Kelly Rolfes-Haase, and Sarah Turpin. Key contributors for the previous work on which this testimony is based are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DEA administers and enforces the CSA to help ensure the availability of controlled substances, including certain prescription drugs, for legitimate use while limiting their availability for abuse and diversion. The CSA requires DEA to set quotas that limit the amount of certain substances that are available in the United States. The CSA also requires those handling controlled substances to register with DEA. In addition, DEA works to disrupt and dismantle major drug trafficking organizations and uses confidential informants to help facilitate its investigative efforts. This testimony addresses DEA's efforts to address prior GAO recommendations concerning: (1) administration of the quota process, (2) information provided to registrants on their roles and responsibilities under the CSA, and (3) compliance with guidelines regarding confidential informants. This statement is based on findings from three GAO reports issued during 2015 and selected status updates from DEA through June 2016. In its prior work, GAO analyzed quota data, surveyed DEA registrants, reviewed DEA policy documents, and interviewed DEA officials. For selected updates, GAO reviewed DEA documentation and held discussions with agency officials. In three reports issued during 2015, GAO made eleven recommendations to the Drug Enforcement Administration (DEA) related to administering the quota process for controlled substances, providing information and guidance to registrants, and complying with guidelines for overseeing confidential informants. As of June 2016, DEA had taken some actions to address these recommendations but had fully implemented only two of them. Administering the quota process. In February 2015, GAO found that DEA had not effectively administered the quota process that limits the amount of certain controlled substances available for use in the United States. For example, manufacturers apply to DEA for quotas needed to make drugs annually. GAO found that DEA did not respond to quota applications within the time frames required by its regulations in any year from 2001 through 2014, which, according to some manufacturers, caused or exacerbated shortages of drugs. GAO recommended that DEA take seven actions to improve its management of the quota process and to address drug shortages. In March 2015, DEA implemented one recommendation to finalize an information sharing agreement with the Food and Drug Administration regarding drug shortages. In June 2016, DEA implemented a second recommendation by strengthening internal controls in the quota system. DEA has not fully implemented the other five recommendations. In October 2015, DEA identified steps it planned to take, including developing performance standards for responsiveness to manufacturers, but has not yet completed these actions. Providing information to registrants. In June 2015, based on four nationally representative surveys of DEA registrants, GAO reported that many registrants were not aware of various DEA resources, such as manuals for pharmacists and practitioners. In addition, some distributors, individual pharmacies, and chain pharmacy corporate offices wanted improved guidance from, and additional communication with, DEA about their roles and responsibilities under the Controlled Substances Act (CSA). GAO recommended that DEA take three actions to increase registrants' awareness of DEA resources and to improve the information DEA provides to registrants.
In April 2016, DEA reported that it had taken some steps toward addressing these recommendations, such as developing web-based training and updating the Pharmacist's Manual to reflect new regulations. However, DEA did not mention plans to develop and distribute additional guidance for distributors or pharmacies and therefore has not yet fully implemented GAO's recommendations. Compliance with confidential informant guidelines. In September 2015, GAO reported that DEA's confidential informant policies were not fully consistent with provisions in the Attorney General's Guidelines. For example, DEA's policy did not fully address the requirements to provide the informant with written instructions about authorized illegal activity and to require a signed acknowledgment from the informant. GAO recommended that DEA update its policy and corresponding monitoring processes to explicitly address these particular provisions in the Guidelines. According to an April 2016 memo and subsequent follow-up, DEA has revised its policy accordingly, and the revised policy is undergoing internal processing, which is expected to be completed in summer 2016. Until GAO can review the new policy and verify that it complies with the Guidelines, this recommendation remains open. GAO previously made eleven recommendations to DEA related to the quota process, guidance to registrants, and confidential informants. DEA generally agreed with and has begun taking actions to address the recommendations, and has so far fully implemented two.
About 90 percent of the estimated $49 billion in Recovery Act funding to be provided to states and localities in fiscal year 2009 will flow through health, transportation, and education programs. Within these categories, the three largest programs are increased Medicaid Federal Medical Assistance Percentage (FMAP) grant awards, funds for highway infrastructure investment, and the State Fiscal Stabilization Fund (SFSF). Table 1 shows the breakout of funding available for these three programs in the 16 selected states and the District. The Recovery Act funding for these 17 jurisdictions accounts for a little less than two-thirds of total Recovery Act funding for these three programs. The 16 states and the District have drawn down approximately $7.96 billion in increased FMAP grant awards for the period October 1, 2008, through April 1, 2009. The increased FMAP is for state expenditures for Medicaid services. The receipt of this increased FMAP may reduce the state share of spending for their Medicaid programs. States have reported using funds made available as a result of the increased FMAP for a variety of purposes. For example, states and the District most frequently reported using these funds to maintain their current level of Medicaid eligibility and benefits, to cover their increased Medicaid caseloads (which are primarily populations that are sensitive to economic downturns, including children and families), and to offset their state general fund deficits, thereby avoiding layoffs and other measures detrimental to economic recovery. States are undertaking planning activities to identify projects, obtain approval at the state and federal level, and move them to contracting and implementation. Some state officials told us they were focusing on construction and maintenance projects, such as road and bridge repairs. Before they can expend Recovery Act funds, states must reach agreement with the Department of Transportation on the specific projects; as of April 16, 2009, two of the 16 states had agreements covering more than 50 percent of their apportioned funds, and three states did not have agreement on any projects. While a few, including Mississippi and Iowa, had already executed contracts, most of the 16 states were planning to solicit bids in April or May. Thus, states generally had not yet expended significant amounts of Recovery Act funds. The states and the District must apply to the Department of Education for SFSF funds. Education will award funds once it determines that an application contains key assurances and information on how the state will use the funds. As of April 20, 2009, applications from three states had met that determination: South Dakota and two of the states in our sample, California and Illinois. The applications from other states were being developed and submitted and had not yet been awarded. The states and the District report that SFSF funds will be used to hire and retain teachers, reduce the potential for layoffs, cover budget shortfalls, and restore funding cuts to programs. Planning continues for the use of Recovery Act funds. Figure 1 below shows the projected timing of when funds will be made available to states and localities. State planning activities include appointing Recovery Czars, establishing task forces and other entities, and developing public websites to solicit input and publicize selected projects. In many states, legislative authorization is needed before the state can receive and/or expend funds or make changes to programs or eligibility requirements.
Accountability Approaches

We found that the selected states and the District are taking various approaches to ensure that internal controls are in place to manage risk up-front; they are assessing known risks and developing plans to address those risks. However, officials in most of the states and the District expressed concerns regarding the lack of Recovery Act funding provided for accountability and oversight. Due to fiscal constraints, many states reported significant declines in the number of oversight staff, limiting their ability to ensure proper implementation and management of Recovery Act funds. State auditors are also planning their work, including conducting required single audits and testing compliance with federal requirements. The single audit process is important for effective oversight but could be modified to be a more timely and effective audit and oversight tool for the Recovery Act; OMB is weighing options on how to modify it. Nearly half of the estimated spending programs in the Recovery Act will be administered by non-federal entities. State officials suggested opportunities to improve communication in several areas. For example, they wish to be notified when Recovery Act funds are made available directly to prime recipients within their state that are not state agencies. An important objective of the Recovery Act is to preserve and create jobs and promote economic recovery. Officials in nine of the 16 states and the District expressed concern about determining the number of jobs created and retained under the Recovery Act, as well as the methodologies that can be used to estimate each. OMB has moved quickly to guide implementation of the Recovery Act. As OMB's initiatives move forward, it has opportunities to build upon its efforts to date by addressing several important issues. The Director of OMB should (1) adjust the single audit process to provide for review, during 2009, of the design of internal controls over programs to receive Recovery Act funding, before significant expenditures occur in 2010; (2) continue efforts to identify methodologies that can be used to determine jobs created and retained from projects funded by the Recovery Act; and (3) evaluate current requirements to determine whether sufficient, reliable, and timely information is being collected before adding further data collection requirements. The Director of OMB should also clarify what Recovery Act funds can be used to support state efforts to ensure accountability and oversight. In addition, the Director of OMB should provide timely and efficient notification to (1) prime recipients in states and localities when funds are made available for their use, (2) states, where the state is not the primary recipient of funds but has a statewide interest in this information, and (3) all recipients, on planned releases of federal agency guidance and whether additional guidance or modifications are expected. We provided the Director of the Office of Management and Budget with a draft of this report for comment on April 20, 2009. OMB staff responded the next day, noting that in its initial review, OMB concurred with the overall objectives of our recommendations. OMB staff also provided some clarifying information, adding that OMB would complete a more thorough review in a few days. We have incorporated OMB's clarifying information as appropriate. In addition, OMB said it plans to work with us to define the best path forward on our recommendations and to further the accountability and transparency of the Recovery Act.
The Governors of each of the 16 states and the Mayor of the District were provided drafts for comment on each of their respective appendixes in this report. Those comments are included in the appendixes. Over time, the programmatic focus of Recovery Act spending will change. As shown in figure 2, about two-thirds of Recovery Act funds expected to be spent by states in the current 2009 fiscal year will be health-related spending, primarily temporary increases in Medicaid FMAP funding. Health, education, and transportation is estimated to account for approximately 90 percent of fiscal year 2009 Recovery Act funding for states and localities. However, by fiscal year 2012, transportation will be the largest share of state and local Recovery Act funding. Taken together, transportation spending, along with investments in community development, energy, and environmental areas that are geared more toward creating long-run economic growth opportunities, will represent approximately two-thirds of state and local Recovery Act funding in 2012. Medicaid is a joint federal-state program that finances health care for certain categories of low-income individuals, including children, families, persons with disabilities, and persons who are elderly. The federal government matches state spending for Medicaid services according to a formula based on each state’s per capita income in relation to the national average per capita income. The amount of federal assistance states receive for Medicaid service expenditures is known as the FMAP. Under the Recovery Act, states are eligible for an increased FMAP for expenditures that states make in providing services to their Medicaid populations. The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008 and December 31, 2010. On February 25, 2009, CMS made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. Generally, for fiscal year 2009 through the first quarter of fiscal year 2011, the increased FMAP, which is calculated on a quarterly basis, provides for: (1) the maintenance of states’ prior year FMAPs; (2) a general across-the-board increase of 6.2 percentage points in states’ FMAPs; and (3) a further increase to the FMAPs for those states that have a qualifying increase in unemployment rates. For the first two quarters of 2009, the increases in the FMAP for the 16 states and the District ranged from 7.09 percentage points in Iowa to 11.59 percentage points in California, as shown in table 2. In our sample of 16 states and the District, officials from 15 states and the District indicated that they had drawn down increased FMAP grant awards, totaling $7.96 billion for the period of October 1, 2008 through April 1, 2009—47 percent of their increased FMAP grant awards. In our sample, the extent to which individual states and the District accessed these funds varied widely, ranging from 0 percent in Colorado to about 66 percent in New Jersey. Nationally, the 50 states and several territories combined have drawn down approximately $11 billion as of April 1, 2009, which represents almost 46 percent of the increased FMAP grants awarded for the first three quarters of federal fiscal year 2009 (table 3). In order for states to qualify for the increased FMAP available under the Recovery Act, they must meet certain requirements. 
In particular:

Maintenance of Eligibility: In order to qualify for the increased FMAP, states generally may not apply eligibility standards, methodologies, or procedures that are more restrictive than those in effect under their state Medicaid programs on July 1, 2008. In guidance to states, CMS noted that examples of restrictions of eligibility could include (1) the elimination of any eligibility groups since July 1, 2008, or (2) changes in an eligibility determination or redetermination process that is more stringent than what was in effect on July 1, 2008. States that fail to initially satisfy the maintenance of eligibility requirements have an opportunity to reinstate their eligibility standards, methodologies, and procedures before July 1, 2009, and become retroactively eligible for the increased FMAP.

Compliance with Prompt Payment: Under federal law, states are required to pay claims from health practitioners promptly. Under the Recovery Act, states are prohibited from receiving the increased FMAP for days during any period in which that state has failed to meet this requirement. Although the increased FMAP is not available for any claims received from a practitioner on each day the state is not in compliance with these prompt payment requirements, the state may receive the regular FMAP for practitioner claims received on days of non-compliance. CMS officials told us that states must attest that they are in compliance with the prompt payment requirement, but that enforcement is complicated due to differences across states in the methods used to track this information. CMS officials plan to issue guidance on reporting compliance with the prompt payment requirement and are currently gathering information from states on the methods they use to determine compliance.

Rainy Day Funds: States are not eligible for an increased FMAP if any amounts attributable (either directly or indirectly) to the increased FMAP are deposited or credited into any reserve or rainy day fund of the state.

Percentage Contributions from Political Subdivisions: In some states, political subdivisions, such as cities and counties, may be required to help finance the state's share of Medicaid spending. States that have such financing arrangements are not eligible to receive the increased FMAP if the percentage contributions required to be made by a political subdivision are greater than what was in place on September 30, 2008.

In addition to meeting the above requirements, states that receive the increased FMAP must submit a report to CMS no later than September 30, 2011, that describes how the increased FMAP funds were expended, in a form and manner determined by CMS. In guidance to states, CMS has stated that further guidance will be developed for this reporting requirement. CMS guidance to states also indicates that, for federal reimbursement, increased FMAP funds must be drawn down separately, tracked separately, and reported to CMS separately. Officials from several states told us they require additional guidance from CMS on tracking receipt of increased FMAP funds and on reporting on the use of these funds. The increased FMAP available under the Recovery Act is for state expenditures for Medicaid services. However, the receipt of this increased FMAP may reduce the state share of spending for their Medicaid programs. States have reported using these available funds for a variety of purposes.
In our sample, individual states and the District reported that they would use the funds to maintain their current level of Medicaid eligibility and benefits, to cover their increased Medicaid caseloads (which are primarily populations that are sensitive to economic downturns, including children and families), and to offset their state general fund deficits, thereby avoiding layoffs and other measures detrimental to economic recovery. Ten states and the District reported using these funds to maintain program eligibility. Nine states and the District reported using these funds to maintain benefits. Specifically, Massachusetts reported that during a previous financial downturn, the state limited the number of individuals eligible for some services and reduced certain program benefits that were optional for the state to cover. However, with the funds made available as a result of the increased FMAP, the state did not have to make such reductions. Similarly, New Jersey reported that the state used these funds to eliminate premiums for certain children in its State Children's Health Insurance Program, allowing it to retain coverage for children whose enrollment in the program would otherwise have been terminated for non-payment of premiums. Nine states and the District reported using these funds to cover increases to their Medicaid caseloads, primarily populations that are sensitive to economic downturns, such as children and families. For example, New Jersey indicated that these funds would help the state meet the increased demand for Medicaid services. According to a New Jersey official, due to significant job losses, the state's proposed 2010 budget would not have accommodated all the applicants newly eligible for Medicaid, and the funds available as a result of the increased FMAP have allowed the state to maintain a "safety net" of coverage for uninsured and unemployed people. Six states in our sample also reported that they used funds made available as a result of the increased FMAP to comply with prompt payment requirements. Specifically, Illinois reported that these funds will permit the state to move from a 90-day payment cycle to a 30-day payment cycle for all Medicaid providers. Three states also reported using these funds to restore or to increase provider payment rates. In addition, ten states and the District indicated that the funds made available as a result of the increased FMAP would help offset deficits in their general funds. Pennsylvania reported that because funding for its Medicaid program is derived, in part, from state revenues, program funding levels fluctuate as the economy rises and falls. However, the state was able to use the funds made available to offset the effects of lower state revenues. Arizona officials also reported that the state used funds made available as a result of the increased FMAP to pay down some of its debt and make payroll payments, thus allowing the state to avoid a serious cash flow problem. In our sample, many states and the District indicated that they need additional guidance from CMS regarding eligibility for the increased FMAP funds. Specifically, five states raised concerns about whether certain programmatic changes could jeopardize the state's eligibility for these funds.
For example, Texas officials indicated that guidance from CMS is needed regarding whether certain programmatic changes being considered by Texas, such as a possible extension of the program's eligibility period, would affect the state's eligibility for increased FMAP funds. Similarly, Massachusetts wanted clarification from CMS as to whether certain changes in the timeframe for the state to conduct eligibility redeterminations would be considered a more restrictive standard. Four states also reported that they wanted additional guidance from CMS regarding policies related to the prompt payment requirements or changes to the non-federal share of Medicaid expenditures. For example, California officials noted that the state reduced Medicaid payments for in-home support services, but that counties could voluntarily choose to increase these payments without altering the cost sharing arrangements between the counties and the state. The state wants clarification from CMS on whether such an arrangement would be allowable in light of the Recovery Act requirements regarding the percentage of contributions by political subdivisions within a state toward the non-federal share of expenditures. In response to states' concerns regarding the need for guidance, CMS told us that it is in the process of developing draft guidance on the prompt payment provisions in the Recovery Act. One official noted that this guidance will include defining the term practitioner, describing the types of claims applicable under the provision, and addressing the principles that are integral to determining a state's compliance with prompt payment requirements. Additionally, CMS plans to have a reporting mechanism in place through which states would report compliance under this provision. With regard to the Recovery Act requirements concerning political subdivisions, CMS described its current activities for providing guidance to states. Due to the variability of state operations, funding processes, and political structures, CMS has been working with states on a case-by-case basis to discuss particular issues associated with this provision and to address the particular circumstances for each state. A CMS official told us that if an issue or circumstance had applicability across the states, or if there were broader themes of national significance, CMS would consider issuing guidance. The Recovery Act provides approximately $48 billion to fund grants to states, localities, and regional authorities for transportation projects, of which the largest piece is $27.5 billion for highway and related infrastructure investments. The Recovery Act largely provides for increased transportation funding through existing programs, such as the Federal-Aid Highway Surface Transportation Program, a federally funded, state-administered program. Under this program, funds are apportioned annually to each state department of transportation (or equivalent) to construct and maintain roadways and bridges on the federal-aid highway system. The Federal-Aid Highway Program refers to the separately funded formula grant programs administered by the Federal Highway Administration (FHWA) in the U.S. Department of Transportation. Of the $27.5 billion provided in the Recovery Act for highway and related infrastructure investments, $26.7 billion is provided to the 50 states for restoration, repair, construction, and other activities allowed under the Federal-Aid Highway Surface Transportation Program.
Nearly one-third of these funds are required to be suballocated to metropolitan and other areas. States must follow the requirements for the existing program, and in addition, the Recovery Act requires the Governor to certify that the state will maintain its current level of transportation spending, and the governor or other appropriate chief executive to certify that the state or local government to which funds have been made available has completed all necessary legal reviews and determined that the projects are an appropriate use of taxpayer funds. The certifications must include a statement of the amount of funds the state planned to expend from state sources as of the date of enactment, during the period beginning on the date of enactment through September 30, 2010, for the types of projects that are funded by the appropriation. The U.S. Department of Transportation is reviewing the Governors' certifications regarding maintaining their level of effort for highways. According to the Department, of the 16 states in our review and the District, three states have submitted a certification free of explanatory or conditional language: Arizona, Michigan, and New York. Eight submitted "explanatory" certifications, that is, certifications using language that articulated the assumptions used or stated that the certification was based on the "best information available at the time," but that did not clearly condition the expected maintenance of effort on those assumptions proving true or the information not changing in the future. Six submitted "conditional" certifications, meaning that the certification was subject to conditions or assumptions, future legislative action, future revenues, or other conditions. Recovery Act funding for highway infrastructure investment differs from the usual practice in the Federal-Aid Highway Program in a few important ways. Most significantly, for projects funded under the Recovery Act, the federal share is 100 percent; typically, projects require a state match of 20 percent while the federal share is 80 percent. Under the Recovery Act, priority is also to be given to projects that are projected to be completed within three years. In addition, within 120 days after the apportionment by the Department of Transportation to the states (March 2, 2009), 50 percent of the apportioned funds must be obligated. Any amount of this 50 percent of apportioned funding that is not obligated may be withdrawn by the Secretary of Transportation and redistributed to other states that have obligated their funds in a timely manner. Furthermore, one year after enactment, the Secretary will withdraw any remaining unobligated funds and redistribute them based on states' need and ability to obligate additional funds. These provisions are applicable only to those funds apportioned to the state and not those funds required by the Recovery Act to be suballocated to metropolitan, regional, and local organizations. Finally, states are required to give priority to projects that are located in economically distressed areas as defined by the Public Works and Economic Development Act of 1965, as amended. In March 2009, FHWA directed its field offices to provide oversight and take appropriate action to ensure that states gave adequate consideration to economically distressed areas in selecting projects. Specifically, field offices were directed to discuss this issue with the states and to document their review and oversight of this process.
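To put the difference in federal share described above in concrete dollar terms, consider a purely hypothetical example (the figures are illustrative only and do not refer to any project in our review). Under the usual Federal-Aid Highway Program arrangement, a $10 million project would typically be funded with about $8 million in federal funds (the 80 percent federal share) and a $2 million state match (the 20 percent state share). For the same $10 million project funded under the Recovery Act, the federal share is 100 percent, so no state match is required.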
States are undertaking planning activities to identify projects, obtain approval at the state and federal level, and move projects to contracting and implementation. However, because of the steps necessary before implementation, states generally had not yet expended significant amounts of Recovery Act funds. States are required to reach agreement with DOT on a list of projects. States will then request reimbursement from DOT as they make payments to contractors working on approved projects. As of April 16, 2009, the U.S. Department of Transportation reported that nationally $6.4 billion of the $26.6 billion in Recovery Act highway infrastructure investment funding provided to the states had been obligated, meaning that Transportation and the states had reached agreements on projects worth this amount. As shown in table 4 below, for the locations that we reviewed, the extent to which the Department of Transportation had obligated funds apportioned to the states and the District ranged from 0 to 65 percent. For two of the states, the Department of Transportation had obligated over 50 percent of the states' apportioned funds, for four it had obligated 30 to 50 percent of the states' funds, for nine states it had obligated under 30 percent of funds, and for three it had not obligated any funds. While most states we visited had not yet expended significant funds, some told us they were planning to solicit bids in April or May. Officials also stated that they planned to meet statutory deadlines for obligating the highway funds. A few states had already executed contracts. As of April 1, 2009, the Mississippi Department of Transportation (MDOT), for example, had signed contracts for 10 projects totaling approximately $77 million. These projects include the expansion of State Route 19 in eastern Mississippi into a four-lane highway. This project fulfills part of MDOT's 1987 Four-Lane Highway Program, which seeks to link every Mississippian to a four-lane highway within 30 miles or 30 minutes. Similarly, as of April 15, 2009, the Iowa Department of Transportation had competitively awarded 25 contracts valued at $168 million. Most often, however, we found that highway funds in the states and the District had not yet been spent because highway projects were at earlier stages of planning, approval, and competitive contracting. For example, in Florida, the Department of Transportation (FDOT) plans to use the Recovery Act funds to accelerate road construction programs in its preexisting 5-year plan, which will result in some projects being reprioritized and selected for earlier completion. On April 15, 2009, the Florida Legislative Budget Commission approved the Recovery Act-funded projects that FDOT had submitted. For the most part, states were focusing their selection of Recovery Act-funded highway projects on construction and maintenance, rather than planning and design, because they were seeking projects that would have employment impacts and could be implemented quickly. These included road repairs and resurfacing, bridge repairs and maintenance, safety improvements, and road widening. For example, in Illinois, the Department of Transportation is planning to spend a large share of its estimated $655 million in Recovery Act funds for highway and bridge construction and maintenance projects in economically distressed areas, those that are shovel-ready, and those that can be completed by February 2012.
In Iowa, the contracts awarded have been for projects such as bridge replacements and highway resurfacing: shovel-ready projects that could be initiated and completed quickly. Knowing that the Recovery Act would include opportunities for highway investment, states told us they worked in advance of the legislation to identify appropriate projects. For example, in New York, the state DOT began planning to manage anticipated federal stimulus money in November 2008. A key part of New York's DOT's strategy was to build on existing planning and program systems to distribute and manage the funds. The Recovery Act provided $53.6 billion in appropriations for the State Fiscal Stabilization Fund (SFSF) to be administered by the U.S. Department of Education. The Act requires that the Secretary set aside $5 billion for State Incentive Grants, referred to by the Department as the Race to the Top program, and for the establishment of an Innovation Fund. The Recovery Act specifies that 81.8 percent (about $39.5 billion) is to be distributed to states for support of elementary, secondary, and postsecondary education, and early childhood education programs. The remaining 18.2 percent of SFSF (about $8.8 billion) is available for basic government services but may also be used for educational purposes. These funds are to be distributed to states by formula, with 61 percent of the state award based on the state's relative share of the population aged 5 to 24 and 39 percent based on the state's relative share of the total U.S. population. The Department of Education announced on April 1, 2009, that it will award the SFSF in two phases. The first phase, totaling $32.6 billion, represents about two-thirds of the SFSF. The states and the District must apply to the Department of Education for SFSF funds, and Education must approve those applications. As of April 20, 2009, applications from three states had been approved: South Dakota and two of the states in our sample, California and Illinois. Since applications from other states are now being developed and submitted, they have not yet received their SFSF funds. The applications to Education must contain certain assurances. For example, states must assure that, in each of fiscal years 2009, 2010, and 2011, they will maintain state support at fiscal year 2006 levels for elementary and secondary education and also for public institutions of higher education (IHEs). However, the Secretary of Education may waive maintenance of effort requirements if the state demonstrates that it will commit an equal or greater percentage of state revenues to education than in the previous applicable year. The state application must also contain (1) assurances that the state is committed to advancing education reform by increasing teacher effectiveness, establishing statewide education longitudinal data systems, and improving the quality of state academic standards and assessments; (2) baseline data that demonstrate the state's current status in each of the education reform areas; and (3) a description of how the state intends to use its stabilization allocation. Within two weeks of receipt of an approvable SFSF application, Education will provide the state with 67 percent of its SFSF allocation. Under certain circumstances, Education will provide the state with up to 90 percent of its allocation. In the second phase, Education intends to conduct a full peer review of state applications before awarding the final allocations.
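As a simplified illustration of the SFSF distribution formula described above (the percentages below are hypothetical and are not drawn from our review), a state's share of the funds distributed by this formula can be expressed as: state award = 0.61 x (state's share of the U.S. population aged 5 to 24) + 0.39 x (state's share of the total U.S. population), applied to the total amount being distributed. For example, a state that accounted for 2.0 percent of the nation's population aged 5 to 24 and 1.8 percent of the total U.S. population would receive 0.61 x 2.0 percent plus 0.39 x 1.8 percent, or about 1.9 percent, of the formula-distributed funds.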
After maintaining state support for education at fiscal year 2006 levels, states are required to use the education portion of the SFSF to restore state support to the greater of fiscal year 2008 or 2009 levels for elementary and secondary education, public IHEs, and, if applicable, early childhood education programs. States must distribute these funds to school districts using the primary state education formula but maintain discretion in how funds are allocated to public IHEs. If, after restoring state support for education, additional funds remain, the state must allocate those funds to school districts according to the Title I, Part A funding formula. However, if a state's education stabilization fund allocation is insufficient to restore state support for education, then a state must allocate funds in proportion to the relative shortfall in state support to public schools and IHEs. Education stabilization funds must be allocated to school districts and public IHEs and cannot be retained at the state level. Once stabilization funds are awarded to school districts and public IHEs, they have considerable flexibility over how they use those funds. School districts are allowed to use stabilization funds for any allowable purpose under the Elementary and Secondary Education Act (ESEA, commonly known as the No Child Left Behind Act), the Individuals with Disabilities Education Act (IDEA), the Adult Education and Family Literacy Act, or the Perkins Act, subject to some prohibitions on using funds for, among other things, sports facilities and vehicles. In particular, because allowable uses under the Impact Aid provisions of ESEA are broad, school districts have discretion to use Recovery Act funding for things ranging from salaries of teachers, administrators, and support staff to purchases of textbooks, computers, and other equipment. The Recovery Act allows public IHEs to use SFSF funds in such a way as to mitigate the need to raise tuition and fees, as well as for the modernization, renovation, and repair of facilities, subject to certain limitations. However, the Recovery Act prohibits public IHEs from using stabilization funds for such things as increasing endowments, modernizing, renovating, or repairing sports facilities, or maintaining equipment. According to Education officials, there are no maintenance of effort requirements placed on local school districts. Consequently, as long as local districts use stabilization funds for allowable purposes, they are free to reduce spending on education from local-source funds, such as property tax revenues. States have broad discretion over how the $8.8 billion in SFSF funds designated for basic government services are used. The Recovery Act provides that these funds can be used for public safety and other government services and that these services may include assistance for education, as well as for modernization, renovation, and repairs of public schools or IHEs, subject to certain requirements. Education's guidance provides that the funds can also be used to cover state administrative expenses related to the Recovery Act. However, the Act also places several restrictions on the use of these funds. For example, these funds cannot be used to pay for casinos (a general prohibition that applies to all Recovery Act funds), financial assistance for students to attend private schools, or construction, modernization, renovation, or repair of stadiums or other sports facilities.
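A simplified, hypothetical illustration of the restoration rules described above, under our reading of the proportional-shortfall provision (the dollar amounts are invented for illustration and do not reflect any state in our sample): suppose a state's education stabilization allocation is $500 million, restoring elementary and secondary support to the greater of fiscal year 2008 or 2009 levels would require $400 million, and restoring support for public IHEs would require $200 million. Because the combined $600 million need exceeds the $500 million allocation, the state would allocate the funds in proportion to the relative shortfalls: roughly $333 million to school districts and $167 million to public IHEs. If the allocation had instead been $700 million, the state could restore both amounts in full and would distribute the remaining $100 million to school districts according to the Title I, Part A funding formula.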
States expected that school districts and public IHEs would use SFSF funds to retain current staff and to fund programmatic initiatives, among other uses. Some states' fiscal conditions could affect their ability to meet maintenance-of-effort requirements in order to receive SFSF monies, but they are awaiting final guidance from Education on procedures to obtain relief from these requirements. For example, due to substantial revenue shortages, Florida has cut its state budget in recent years, and the state will not be able to meet the maintenance-of-effort requirement to readily qualify for these funds. The state will apply to Education for a waiver from this requirement; however, it is awaiting final instructions from Education on submission of the waiver. Florida plans to use SFSF funds to reduce the impact of any further cuts that may be needed in the state education budget. In Arizona, state officials expect that SFSF recipients, such as local school districts, will generally use their allocations to improve the tools they use to assess student performance and determine to what extent performance meets federal academic standards, rehire teachers who were let go because of prior budget cuts, retain teachers, and meet the federal requirement that all schools have equal access to highly qualified teachers, among other things. Funds for the state universities will help them maintain services and staff as well as avoid tuition increases. Illinois officials stated that the state plans to use all of the $2 billion in SFSF funds, including the 18.2 percent allowed for government services, for K-12 and higher education activities and hopes to avert layoffs and other cutbacks that many districts and public colleges and universities are facing in their fiscal year 2009 and 2010 budgets. State Board of Education officials also noted that U.S. Department of Education guidance allows school districts to use stabilization funds for education reforms, such as extending school days and school years, where possible. However, officials said that Illinois districts will focus these funds on filling budget gaps rather than implementing projects that will require long-term resource commitments. While planning is underway, most of the selected states reported that they have not yet fully decided how to use the discretionary 18.2 percent of the SFSF. States' and localities' tracking and accounting systems are critical to the proper execution and accurate and timely recording of transactions associated with the Recovery Act. OMB has issued guidance to the states and localities that provides for separate "tagging" of Recovery Act funds so that specific reports can be created and transactions traced. Officials from all 16 of the selected states and the District told us they have established or were establishing methods and processes to separately identify, monitor, track, and report on the use of Recovery Act funds they receive. Officials in some states expressed concern that the use of different accounting software among state agencies may make it difficult to provide consistent and timely reporting. Others reported that their ability to track Recovery Act funds may be affected by state hiring freezes resulting from budget shortfalls. State officials reported a range of concerns regarding the federal requirements to identify and track Recovery Act funds going to sub-recipients, localities, and other non-state entities.
These concerns include their ability to track these funds within existing systems, uncertainty regarding state officials' accountability for the use of funds that do not pass through state government entities, and their desire for additional federal guidance to establish specific expectations on sub-recipient reporting requirements. Officials in many states expressed concern about being held accountable for funds flowing directly from federal agencies to localities or other recipients. Officials in some states said they would like to at least be informed about funds provided to non-state entities, in order to facilitate planning for their use and so they can coordinate Recovery Act activities. All of the 16 selected states and the District reported taking action to plan for and monitor the use of Recovery Act funding. Some states reported that Recovery Act planning activities for funds received by the state are directed primarily by the governor's office. In New York, for example, the governor provides program direction to the state's departments and offices, and he established a Recovery Act Cabinet composed of representatives from all state agencies and many state authorities to coordinate and manage Recovery Act funding throughout the state. In North Carolina, Recovery Act planning efforts are led by the newly created Office of Economic Recovery and Investment, which was established by the governor to oversee the state's economic recovery initiatives. Other states reported that their Recovery Act planning efforts were less centralized. In Mississippi, the governor has little influence over the state Departments of Education and Transportation, as they are led by independent entities. In Texas, oversight of federal Recovery Act funds involves various stakeholders, including the Office of the Governor, the Office of the Comptroller of Public Accounts, and the State Auditor's Office, as well as two entities established within the Texas legislature specifically for this purpose—the House Select Committee on Federal Economic Stabilization Funding and the House Appropriations Subcommittee on Stimulus. Several states reported that they have appointed "Recovery Czars" or identified a similar key official and established special offices, task forces, or other entities to oversee the planning and monitor the use of Recovery Act funds within their states. In Michigan, the governor appointed a Recovery Czar to lead a new Michigan Economic Recovery Office, which is responsible for coordinating Recovery Act programs across all state departments and with external stakeholders such as GAO, the federal OMB, and others. Some states began planning efforts before Congress enacted the Recovery Act. For example, the state of Georgia recognized the importance of accounting for and monitoring Recovery Act funds and directed state agencies to take a number of steps to safeguard Recovery Act funds and mitigate identified risks. Georgia established a small core team in December 2008 to begin planning for the state's implementation of the Recovery Act. Within 1 day of enactment, the governor appointed a Recovery Act Accountability Officer, who formed a Recovery Act implementation team shortly thereafter. The implementation team includes a senior management team, officials from 31 state agencies, an accountability and transparency support group composed of officials from the state's budget, accounting, and procurement offices, and five cross-agency implementation teams.
At one of the first implementation team meetings, the Recovery Act Accountability Officer disseminated to agencies an implementation manual that included multiple types of guidance on how to use and account for Recovery Act funds; new and updated guidance is disseminated at the weekly implementation team meetings. Officials in other states are using existing mechanisms rather than creating new offices or positions to lead Recovery Act efforts. For example, a District official stated that the District would not appoint a Recovery Czar, and instead would use its existing administrative structures to distribute and monitor Recovery Act funds to ensure quick disbursement of funds. In Mississippi, officials from the Governor's office said that the state did not establish a new office to provide statewide oversight of Recovery Act funding, in part because they did not believe that the Recovery Act provided states with funds for administrative expenses, including additional staff. The Governor did designate a member of his staff to act as a Stimulus Coordinator for Recovery Act activities. All 16 states we visited and the District have established Recovery Act websites to provide information on state plans for using Recovery Act funding and uses of funds to date and, in some instances, to allow citizens to submit project proposals. For example, Ohio has created www.recovery.Ohio.gov, which represents the state's efforts to create an open, transparent, and equitable process for allocating Recovery Act funds. The state has encouraged citizens to submit proposals for use of Recovery Act funds, and as of April 8, 2009, individuals and organizations from across Ohio had submitted more than 23,000 proposals. Iowa officials indicated they want to use the state's recovery website (www.recovery.Iowa.gov) to host a "dashboard" function to report updated information on Recovery Act spending that is easily searchable by the public. Similarly, Colorado plans to create a web-based map of projects receiving Recovery Act funds to help inform the public about the results of Recovery Act spending in the state. The selected states and the District are taking various approaches to ensure that internal controls are in place to manage risk up-front, rather than after problems develop and deficiencies are identified, and have different capacities to manage and oversee the use of Recovery Act funds. Many of these differences result from the underlying differences in approaches to governance, organizational structures, and related systems and processes that are unique to each jurisdiction. A robust system of internal control specifically designed to deal with the unique and complex aspects of the Recovery Act funds will be key to helping management of the states and localities achieve the desired results. Effective internal control can be achieved through numerous different approaches, and, in fact, we found significant variation in planned approaches by state. For example, New York's Recovery Act Cabinet plans to establish a working group on internal controls; the Governor's office plans to hire a consultant to review the state's management infrastructure and capabilities to achieve accountability, effective internal controls, compliance, and reliable reporting under the Act; and the state plans to coordinate fraud prevention training sessions.
Michigan’s Recovery Office is developing strategies for effective oversight and tracking of the use of Recovery Act funds to ensure compliance with accountability and transparency requirements. Ohio’s Office of Internal Audit plans to assess the adequacy and effectiveness of the current internal control framework and test whether state agencies adhere to the framework. Florida’s Chief Inspector General established an enterprise-wide working group of agency program Inspectors General who are updating their annual work plans by including the Recovery Act funds in their risk assessments and will leave flexibility in their plans to address issues related to funds. Massachusetts’s Joint Committee on Federal Recovery Act Oversight will hold hearings regarding the oversight of Recovery Act spending. Georgia’s State Auditor plans to provide internal control training to state agency personnel in late April. The training will discuss basic internal controls, designing and implementing internal controls for Recovery Act programs, best practices in contract monitoring, and reporting on Recovery Act funds. Internal controls include management and program policies, procedures, and guidance that help ensure effective and efficient use of resources; compliance with laws and regulations; prevention and detection of fraud, waste, and abuse; and the reliability of financial reporting. Because Recovery Act funds are to be distributed as quickly as possible, controls are evolving as various aspects of the program become operational. Effective internal control is a major part of managing any organization to achieve desired outcomes and manage risk. GAO’s Standards for Internal Control include five key elements: control environment, risk assessment, control activities, information and communication, and monitoring. Our report contains a discussion of these elements and the related effort underway in the jurisdictions we visited. OMB’s Circular No. A-133 sets out implementing guidelines for the single audit and defines roles and responsibilities related to the implementation of the Single Audit Act, including detailed instructions to auditors on how to determine which federal programs are to be audited for compliance with program requirements in a particular year at a given grantee. The Circular No. A-133 Compliance Supplement is issued annually to guide auditors on what program requirements should be tested for programs audited as part of the single audit. OMB has stated that it will use its Circular No. A-133 Compliance Supplement to notify auditors of program requirements that should be tested for Recovery Act programs, and will issue interim updates as necessary. Both the Single Audit Act and OMB Circular No. A-133 call for a “risk- based” approach to determine which programs will be audited for compliance with program requirements as part of a single audit. In general, the prescribed approach relies heavily on the amount of federal expenditures during a fiscal year and whether findings were reported in the previous period to determine whether detailed compliance testing is required for a given program that year. Under the current approach for risk determination in accordance with Circular No. A-133, certain risks unique to the Recovery Act programs may not receive full consideration. Recovery Act funding carries with it some unique challenges. 
The most significant of these challenges are associated with (1) new government programs, (2) the sudden increase in funds or programs that are new for the recipient entity, and (3) the expectation that some programs and projects will be delivered faster so as to inject funds into the economy. This makes timely and efficient evaluations in response to the Recovery Act's accountability requirements critical. Specifically, new programs and recipients participating in a program for the first time may not have the management controls and accounting systems in place to help ensure that funds are distributed and used in accordance with program regulations and objectives; Recovery Act funding that applies to programs already in operation may cause total funding to exceed the capacity of management controls and accounting systems that have been effective in past years; the more extensive accountability and transparency requirements for Recovery Act funds will require the implementation of new controls and procedures; and risk may be increased due to the pressures of spending funds quickly.

In response to the risks associated with Recovery Act funding, the single audit process needs adjustment to put appropriate focus on Recovery Act programs and to provide the necessary level of accountability over these funds in a timely manner. The single audit process could be adjusted to require the auditor to perform procedures such as the following as part of the routine single audit: provide for review of the design and implementation of internal control over compliance and financial reporting for programs under the Recovery Act; consider risks related to Recovery Act programs in determining which federal programs are major programs; and specifically test Recovery Act programs to determine whether the auditee complied with laws and regulations. The first two items above should preferably be accomplished during 2009, before significant expenditures of funds occur in 2010, so that the design of internal control can be strengthened prior to the majority of those expenditures. We further believe that OMB Circular No. A-133 and/or the Circular No. A-133 Compliance Supplement could be adjusted to provide some relief on current audit requirements for low-risk programs to offset additional workload demands associated with Recovery Act funds. OMB told us that it is developing audit guidance that would address the above audit objectives. OMB also said that it is considering reevaluating potential options for providing relief from certain existing audit requirements in order to provide some balance to the increased requirements for Recovery Act program auditing.

Officials in several states also expressed concerns regarding the lack of funding provided to state oversight entities, given the additional federal requirements placed on states to provide proper accounting and ensure transparency. Due to fiscal constraints, many states reported significant declines in the number of oversight staff, limiting their ability to ensure proper implementation and management of Recovery Act funds. Although the majority of states reported that they lack the necessary resources to ensure adequate oversight of Recovery Act funds, some states reported that they are either hiring new staff or reallocating existing staff for this purpose.
Officials we interviewed in several states said the lack of funding for state oversight entities in the Recovery Act presents them with a challenge, given the increased need for oversight and accountability. According to state officials, state budget and staffing cuts have limited the ability of state and local oversight entities to ensure adequate management and implementation of the Recovery Act. For example, Colorado's state auditor reported that state oversight capacity is limited, noting that the Department of Health Care Policy and Financing has had 3 controllers in the past 4 years and the state legislature's Joint Budget Committee recently cut field audit staff for the Department of Human Services in half. In addition, the Colorado Department of Transportation's deputy controller position is vacant, as is the Department of Personnel & Administration's internal auditor position. Colorado officials noted that these actions are, in part, due to the natural tendency in an economic downturn to cut administrative expenses in an attempt to maintain program delivery levels. Our report contains more examples of capacity issues from our selected states and the District.

Although most states indicated that they lack the resources needed to provide effective monitoring and oversight, some states indicated they will hire additional staff to help ensure the prudent use of Recovery Act funds. For example, according to officials with North Carolina's Governor's Crime Commission, the current management capacity in place is not sufficient to implement the Recovery Act. Officials explained that the Recovery Act funds for the Edward Byrne Memorial Justice Assistance Grant program have created such an increase in workload that the department will have to hire additional staff to handle it over the next 3 years. Officials explained that these staff will be hired for the short term since the money will run out in 3 years. Additionally, officials explained that they are able to use 10 percent of the Justice Assistance Grant funding to pay for the administrative positions that are needed.

A number of states expressed concerns regarding the ability to track Recovery Act funds due to state hiring freezes resulting from budget shortfalls. For instance, New Jersey has not increased its number of state auditors or investigators, nor has there been an increase in funding specifically for Recovery Act oversight. In addition, the state hiring freeze has not allowed many state agencies to increase their Recovery Act oversight efforts. For example, despite an increase of $469 million in Recovery Act funds for state highway projects, no additional staff will be hired to help with those projects or with tasks directly associated with the Recovery Act, such as reporting on the number of jobs created. While the state's Department of Transportation has committed to shift resources to meet any expanded need for internal Recovery Act oversight, one person is currently responsible for reviewing contractor-reported payroll information for disadvantaged business enterprises, ensuring compliance with Davis-Bacon wage requirements, and developing the job creation figures. State education officials in North Carolina also said that greater oversight capacity is needed to manage the increase in federal funding. However, due to the state's hiring freeze, the agency will be unable to use state funds to hire the additional staff needed to oversee Recovery Act funds.
The North Carolina Recovery Czar said that his office will work with state agencies to authorize hiring additional staff when directly related to Recovery Act oversight. With respect to oversight of Recovery Act funding at the local level, state and local officials reported varying degrees of preparedness. California Department of Transportation (Caltrans) officials stated that while extensive internal controls exist at the state level, there may be control weaknesses at the local level. Caltrans is collaborating with local entities to identify and address these weaknesses. Likewise, Colorado officials expressed concerns that effective oversight of funds provided to Jefferson County may be limited due to the recent termination of its internal auditor and the elimination of its internal control audit function. Arizona state officials expressed some concerns about the ability of rural, tribal, and some private entities such as boards, commissions, and nonprofit organizations to manage Recovery Act funds, especially if the act does not provide administrative funding.

As recipients of Recovery Act funds and as partners with the federal government in achieving Recovery Act goals, states and local units of government are expected to invest Recovery Act funds with a high level of transparency and to be held accountable for results. As a means of implementing that goal, guidance has been issued and will continue to be issued to federal agencies, as well as to direct recipients of funding. To date, OMB has issued two broad sets of guidance to the heads of federal departments and agencies for implementing and managing activities enacted under the Recovery Act. OMB has also issued for public comment detailed proposed standard data elements that federal agencies will require from all recipients (except individuals) of Recovery Act funding. When reporting on the use of funds, recipients must show the total amount of recovery funds received from a federal agency, the amount expended or obligated to the project, and project-specific information, including the name and description of the project, an evaluation of its completion status, the estimated number of jobs created and retained by the project, and information on any subcontracts awarded by the recipient, as specified in the Recovery Act.

State reactions to the reporting requirements vary widely and often include a mixture of responses. Some states will use existing federal program guidance or performance measures to evaluate impact, particularly for ongoing programs. Other states are waiting for additional guidance from federal departments or from OMB on how and what to measure to assess impact. While Georgia is waiting for further federal guidance, the state is adapting an existing system (used by the State Auditor to fulfill its Single Audit Act responsibilities) to help the state report on Recovery Act funds. The statewide web-based system will be used to track expenditures, project status, and job creation and retention. The Georgia governor is requiring all state agencies and programs receiving Recovery Act funds to use this system. Some states indicated that they have not yet determined how they will assess impact. Officials in 9 of the 16 states and the District expressed concern about the definitions of jobs retained and jobs created under the Recovery Act, as well as methodologies that can be used for estimation of each.
Officials from several of the states we met with expressed a need for clearer definitions of "jobs retained" and "jobs created." Officials from a few states expressed the need for clarification on how to track indirect jobs, while others expressed concern about how to measure the impact of funding that is not designed to create jobs. Mississippi state officials suggested the need for clearly defined distinctions among time-limited, part-time, full-time, and permanent jobs, since each state may have differing definitions of these categories. Officials from Massachusetts expressed concern that contractors may overestimate the number of jobs retained and created. Some existing programs, such as highway construction, have methodologies for estimating job creation, but other programs, both existing and new, do not.

Some of the questions that states and localities have about Recovery Act implementation may be answered in part by the guidance OMB provided on the data elements, as well as by guidance issued by federal departments. For example, OMB provided draft definitions for employment, as well as for jobs retained and jobs created via Recovery Act funding. However, OMB did not specify methodologies for estimating jobs retained and jobs created, which has been a concern for some states. Data elements were presented in the form of templates with section-by-section data requirements and instructions. OMB provided a comment period during which it is likely to receive many questions and requests for clarifications from states, localities, and other entities that can be direct recipients of Recovery Act funding. OMB plans to update this guidance again in the next 30 to 60 days. Some federal agencies have also provided guidance to the states. The Departments of Education, Housing and Urban Development, Justice, Labor, Transportation, the Corporation for National and Community Service, the National Institutes of Health, and the Centers for Medicare & Medicaid Services have provided guidance for program implementation, particularly for established programs. Although guidance is expected, some new programs, such as Broadband Deployment Grants, are awaiting issuance of implementation instructions.

It has been a little over two months since enactment of the Recovery Act, and OMB has moved quickly. In this period, OMB has issued two sets of guidance, first on February 18 and next on April 3, with another round to be issued within 60 days. OMB has sought formal public comment on its April 3 guidance update and, before this, according to OMB, reached out informally to Congress; federal, state, and local government officials; and grant and contract recipients to get a broad perspective on what is needed to meet the high expectations set by Congress and the Administration. In addition, OMB is standing up two new reporting vehicles: Recovery.gov, which will be turned over to the Recovery Accountability and Transparency Board and is expected to provide unprecedented public disclosure on the use of Recovery Act funds, and a second system to centrally capture information on the number of jobs created or retained. As OMB's initiatives move forward and it continues to guide the implementation of the Recovery Act, OMB has opportunities to build upon its efforts to date by addressing several important issues.
These issues can be characterized broadly in three categories: (1) Accountability and Transparency Requirements, (2) Administrative Support and Oversight, and (3) Communications. Recipients of Recovery Act funding face a number of implementation challenges in this area. The act includes many programs that are new or new to the recipient, and, even for existing programs, the sudden increase in funds is outside normal cycles and processes. Add to this the expectation that many programs and projects will be delivered faster so as to inject funds into the economy, and it becomes apparent that timely and efficient evaluations are needed. The following are our recommendations to help strengthen ongoing efforts to ensure accountability and transparency.

The single audit process is a major accountability vehicle but should be adjusted to provide appropriate focus and the necessary level of accountability over Recovery Act funds in a more timely manner than the current schedule allows. OMB has been reaching out to stakeholders to obtain input and is considering a number of options related to the single audit process and related issues. We Recommend: To provide additional leverage as an oversight tool for Recovery Act programs, the Director of OMB should adjust the current audit process to: focus the risk assessment auditors use to select programs to test for compliance with 2009 federal program requirements on Recovery Act funding; provide for review during 2009 of the design of internal controls over programs to receive Recovery Act funding, before significant expenditures occur in 2010; and evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act.

Responsibility for reporting on jobs created and retained falls to non-federal recipients of Recovery Act funds. As such, states and localities have a critical role in determining the degree to which Recovery Act goals are achieved. Senior Administration officials and OMB have been soliciting views and developing options for recipient reporting. In its April 3 guidance, OMB took an important step by issuing definitions and standard award terms and conditions and by clarifying requirements for tracking and documenting Recovery Act expenditures. Furthermore, OMB and the Recovery Accountability and Transparency Board are developing the data architecture for the new federal reporting system that will be used to collect recipient reporting information. According to OMB, state chief information officers commented on an early draft, and OMB expects to provide an update for further state review. We Recommend: Given questions raised by many state and local officials about how best to determine both direct and indirect jobs created and retained under the Recovery Act, the Director of OMB should continue OMB's efforts to identify appropriate methodologies that can be used to: assess jobs created and retained from projects funded by the Recovery Act; determine the impact of Recovery Act spending when job creation is indirect; and identify those types of programs, projects, or activities that in the past have demonstrated substantial job creation or are considered likely to do so in the future. OMB should also consider whether the approaches taken to estimate jobs created and jobs retained in these cases can be replicated or adapted to other programs. There are a number of ways that the needed methodologies could be developed.
One option would be to establish a working group of federal, state, and local officials and subject matter experts. Given that governors have certified to the use of funds in their states, state officials are uncertain about their reporting responsibilities when Recovery Act funding goes directly to localities. Additionally, they have concerns about the capacity of reporting systems within their states, specifically whether these systems will be capable of aggregating data from multiple sources for posting on Recovery.gov. Some state officials are concerned that too many federal requirements will slow distribution and use of funds, and others have expressed reservations about the capacity of smaller jurisdictions and nonprofits to report data. Even those who are confident about their own systems are uncertain about the cost and speed of making any required modifications for Recovery.gov reporting or further data collection.

Problems have also been identified with federal systems that support the Recovery Act. For example, questions have been raised about the reliability of USAspending.gov and the ability of Grants.gov to handle the increased volume of grant applications. OMB is taking concerted actions to address these concerns. It plans to reissue USAspending.gov guidance shortly to include changes in operations that are expected to improve data quality. In a memorandum dated March 9, OMB said that it is working closely with federal agencies to identify system risks that could disrupt effective Recovery Act implementation and acknowledged that Grants.gov is one such system. A subsequent memorandum, dated April 8, offered a short-term solution to the significant increase in Grants.gov usage while longer-term alternative approaches are being explored. GAO has work underway to review differences in agency policies and methods for submitting grant applications using Grants.gov and will issue a report shortly.

OMB addressed earlier questions about reporting coverage in its April 3 guidance. According to OMB, there are limited circumstances in which prime and subrecipient reporting will not be sufficient to capture information at the project level. OMB stated that it will expand its current model in future guidance. OMB guidance described recipient reporting requirements under the Recovery Act's section 1512 as the minimum that must be collected, leaving it to federal agencies to determine whether additional information would be required for program oversight. We Recommend: In consultation with the Recovery Accountability and Transparency Board and states, the Director of OMB should evaluate current information and data collection requirements to determine whether sufficient, reliable, and timely information is being collected before adding further data collection requirements. As part of this evaluation, OMB should consider the cost and burden of additional reporting on states and localities against expected benefits.

At a time when states are experiencing cutbacks, state officials expect the Recovery Act to bring new regulations, increase accounting and management workloads, change agency operating procedures, require modifications to information systems, and strain staff capacity, particularly for contract management. Although federal program guidelines can include a percentage of grant funding available for administrative or overhead costs, the percentage varies by program.
In considering other sources, states have asked whether the portion of the State Fiscal Stabilization Fund that is available for government services could be used for this purpose. Others have suggested a global approach that would increase the percentage of all Recovery Act grant funding that can be applied to administrative costs. As noted earlier, state auditors also are concerned with meeting increased audit requirements for Recovery Act funding with a reduced number of staff and without a commensurate reduction in other audit responsibilities or increase in funding. OMB and senior administration officials are aware of the states' concerns and have a number of options under consideration. We Recommend: The Director of OMB should clarify in a timely manner what Recovery Act funds can be used to support state efforts to ensure accountability and oversight, especially in light of enhanced oversight and coordination requirements.

State officials expressed concerns regarding communication on the release of Recovery Act funds and their inability to determine when to expect federal agency program guidance. Once funds are released, there is no consistent procedure for ensuring that the appropriate officials in states and localities are notified. According to OMB, agencies must immediately post guidance to the Recovery Act web site and inform, to the "maximum extent practical," a broad array of external stakeholders. In addition, since nearly half of the estimated spending programs in the Recovery Act will be administered by non-federal entities, state officials have suggested opportunities to improve communication in several areas. For example, they wish to be notified when funds are made available to prime recipients that are not state agencies. Some of the uncertainty can be attributed to the evolving nature and timing of federal reports, as well as to the differing terms used by federal assistance programs, which add to the confusion. A reconsideration of how best to publicly report on federal agency plans and actions led to OMB's decision to continue the existing requirement to report on the federal status of funds in the Weekly Financial and Activity Reports and eliminate a planned Monthly Financial Report. The Formula and Block Grant Allocation Report has been replaced and renamed the Funding Notification Report. This expanded report includes all types of awards, not just formula and block grants, and is expected to better capture the point in the federal process when funds are made available. We Recommend: To foster timely and efficient communications, the Director of OMB should develop an approach that provides dependable notification to (1) prime recipients in states and localities when funds are made available for their use, (2) states, where the state is not the primary recipient of funds but has a statewide interest in this information, and (3) all non-federal recipients, on planned releases of federal agency guidance and, if known, whether additional guidance or modifications are expected.

Mr. Chairman, Senator Collins, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. For further information on this testimony, please contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses GAO's work examining the uses and planning by selected states and localities for funds made available by the American Recovery and Reinvestment Act of 2009 (Recovery Act). The Recovery Act is estimated to cost about $787 billion over the next several years, of which about $280 billion will be administered through states and localities. Funds made available under the Recovery Act are being distributed to states, localities, and other entities and individuals through a combination of grants and direct assistance. As Congress may know, the stated purposes of the Recovery Act are to: (1) preserve and create jobs and promote economic recovery; (2) assist those most impacted by the recession; (3) provide investments needed to increase economic efficiency by spurring technological advances in science and health; (4) invest in transportation, environmental protection, and other infrastructure that will provide long-term economic benefits; and (5) stabilize state and local government budgets, in order to minimize and avoid reductions in essential services and counterproductive state and local tax increases. As described in GAO's March testimony, the Recovery Act specifies several roles for GAO, including conducting bimonthly reviews of selected states' and localities' use of funds made available under the act. This statement is based on our report being released today, Recovery Act: As Initial Implementation Unfolds in States and Localities, Continued Attention to Accountability Issues Is Essential, which is the first in a series of bimonthly reviews we will do on states' and localities' uses of Recovery Act funding and covers the actions taken under the act through April 20, 2009. Our report and our other work related to the Recovery Act can be found on our new website called Following the Money: GAO's Oversight of the Recovery Act, which is accessible through GAO's home page at www.gao.gov. Like the report, this statement discusses (1) selected states' and localities' uses of and planning for Recovery Act funds, (2) the approaches taken by the selected states and localities to ensure accountability for Recovery Act funds, and (3) states' plans to evaluate the impact of the Recovery Act funds they received.

About 90 percent of the estimated $49 billion in Recovery Act funding to be provided to states and localities in fiscal year 2009 will flow through health, transportation, and education programs. Within these categories, the three largest programs are increased Medicaid Federal Medical Assistance Percentage (FMAP) grant awards, funds for highway infrastructure investment, and the State Fiscal Stabilization Fund (SFSF). The funding notifications for Recovery Act funds for the 16 selected states and the District of Columbia (the District) have been approximately $24.2 billion for Medicaid FMAP on April 3, $26.7 billion for highways on March 2, and $32.6 billion for SFSF on April 2. Fifteen of the 16 states and the District have drawn down approximately $7.96 billion in increased FMAP grant awards for the period October 1, 2008, through April 1, 2009. The increased FMAP is for state expenditures for Medicaid services. The receipt of this increased FMAP may reduce the state share of funding for their Medicaid programs. States have reported using funds made available as a result of the increased FMAP for a variety of purposes.
For example, states and the District reported using these funds to maintain their current level of Medicaid eligibility and benefits, to cover their increased Medicaid caseloads (which are primarily populations that are sensitive to economic downturns, including children and families), and to offset their state general fund deficits, thereby avoiding layoffs and other measures detrimental to economic recovery.

States are undertaking planning activities to identify projects, obtain approval at the state and federal levels, and move projects to contracting and implementation. For the most part, states were focusing on construction and maintenance projects, such as road and bridge repairs. Before they can expend Recovery Act funds, states must reach agreement with the Department of Transportation on the specific projects; as of April 16, two of the 16 states had agreements covering more than 50 percent of their apportioned funds, and three states did not have agreement on any projects. While a few states, including Mississippi and Iowa, had already executed contracts, most of the 16 states were planning to solicit bids in April or May. Thus, states generally had not yet expended significant amounts of Recovery Act funds.

The states and the District must apply to the Department of Education for SFSF funds. Education will award funds once it determines that an application contains key assurances and information on how the state will use the funds. As of April 20, applications from three states had met that determination: South Dakota and two of GAO's sample states, California and Illinois. The applications from the other states are being developed and submitted, and funds have not yet been awarded. The states and the District report that SFSF funds will be used to hire and retain teachers, reduce the potential for layoffs, cover budget shortfalls, and restore funding cuts to programs.

Planning continues for the use of Recovery Act funds. State activities include appointing Recovery Czars, establishing task forces and other entities, and developing public web sites to solicit input and publicize selected projects. GAO found that the selected states and the District are taking various approaches to ensuring that internal controls manage risk up-front; they are assessing known risks and developing plans to address those risks. State auditors are also planning their work, including conducting required single audits and testing compliance with federal requirements. Nearly half of the estimated spending programs in the Recovery Act will be administered by non-federal entities. State officials suggested opportunities to improve communication in several areas. Officials in nine of the 16 states and the District expressed concern about determining the number of jobs created and retained under the Recovery Act, as well as the methodologies that can be used to estimate each.
The Nuclear Waste Policy Act of 1982, as amended, establishes a comprehensive policy and program for the safe, permanent disposal of commercial spent nuclear fuel and other highly radioactive wastes in one or more geologic repositories. The act charges DOE with (1) establishing criteria for recommending sites for repositories; (2) “characterizing” (investigating) the Yucca Mountain site to determine its suitability for a repository; (3) if the site is found suitable, recommending it to the President, who would submit a recommendation to the Congress if he agreed that the site was qualified; and (4) seeking permission from NRC to construct and operate a repository at the approved site. Under the Nuclear Waste Policy Act, users of nuclear-power-generated electricity pay $0.001 per kilowatt-hour into a Nuclear Waste Fund, which may be used only to pay for the siting, licensing, and construction of a nuclear waste repository. In fiscal year 2006, DOE reported that the fund had $19.4 billion. DOE also reported that it had spent about $11.7 billion (in fiscal year 2006 dollars) from project inception in fiscal years 1983 through 2005 and estimated that an additional $10.9 billion (in fiscal year 2006 dollars) would be incurred from fiscal years 2006 to 2017 to build the repository. Since the early 1980s, DOE has studied the Yucca Mountain site to determine whether it is suitable for a high-level radioactive waste and spent nuclear fuel repository. For example, DOE completed numerous scientific studies of water flow and the potential for rock movement near the mountain, including the likelihood that volcanoes and earthquakes will adversely affect the repository’s performance. To allow scientists and engineers greater access to the rock being studied, DOE excavated two tunnels for studying the deep underground environment: (1) a 5-mile main tunnel that loops through the mountain, with several research areas or alcoves connected to it, and (2) a 1.7-mile tunnel that crosses the mountain, allowing scientists to study properties of the rock and the behavior of water near the potential repository area. Since July 2002, when the Congress approved the President’s recommendation of the Yucca Mountain site for the development of a repository, DOE has focused on preparing its license application. In October 2005, DOE announced a series of changes in the management of the project and in the design of the repository to simplify the project and improve its safety and operation. Previously, DOE’s design required radioactive waste to be handled at least four separate times by transporting the waste to the Yucca Mountain site, removing the waste from its shipping container, sealing it in a special disposal container, and moving it into the underground repository. The new repository design relies on uniform canisters that would be filled and sealed before being shipped, reducing the need for direct handling of most of the waste prior to being placed in the repository. As a result, DOE will not have to construct several extremely large buildings costing millions of dollars for handling radioactive waste. In light of these changes, DOE has been working on revising the designs for the repository’s surface facilities, developing the technical specifications for the canisters that will hold the waste, and revising its draft license application. 
In accordance with NRC regulations, before filing its license application, DOE must first make all documentary material that is potentially relevant to the licensing process electronically available via NRC's Internet-based document management system. This system, known as the Licensing Support Network, provides electronic access to millions of documents related to the repository project. DOE is required to initially certify to NRC that it has made its documentary material available no later than 6 months in advance of submitting the license application. NRC, Nevada, and other parties in the licensing process must also certify that their documentary material was made available following DOE's initial certification. This information will then be available to the public and all the parties participating in the licensing process. OCRWM currently expects to certify its material in the Licensing Support Network by December 21, 2007. In addition, OCRWM expects to complete the necessary designs and have the draft license application ready for DOE management's review by February 29, 2008.

NRC is charged with regulating the construction, operation, and decommissioning phases of the project and is responsible for ensuring that DOE satisfies public health, safety, and environmental regulatory requirements. Once DOE files the license application, NRC will begin a four-stage process to review the application and decide whether to (1) authorize construction of the repository, (2) authorize construction with conditions, or (3) deny the application. As shown in figure 1, this process includes the following steps: Acceptance review. NRC plans to take up to 180 days to examine the application for completeness and determine whether it has all of the information and components NRC requires. If NRC determines that any part of the application is incomplete, it may either reject the application or require that DOE furnish the necessary documentation. NRC will docket the application once it deems the application complete, indicating its readiness for a detailed technical review. Technical review. The detailed technical review, scheduled for 18 to 24 months, will evaluate the soundness of the scientific data, computer modeling, analyses, and preliminary facility design. The review will focus on evaluating DOE's conclusions about the ability of the repository designs to limit exposure to radioactivity, both during the construction and operation phase of the repository (known as preclosure) and during the phase after the repository has been filled, closed, and sealed (known as postclosure). If NRC discovers problems with the technical information used to support the application, it may conduct activities to determine the extent and effect of the problem. As part of this review, NRC staff will prepare a safety evaluation report that details staff findings and conclusions on the license application. Public hearings. NRC will also convene an independent panel of judges—called the Atomic Safety and Licensing Board—to conduct a series of public hearings to address contested issues raised by affected parties and review in detail the related information and evidence regarding the license application. Upon completion, the board will make a formal ruling (called the initial decision) resolving matters put into controversy. This initial decision can then be appealed to the NRC commissioners for further review. NRC commission review.
In the likely event of an appeal, the NRC commissioners will review the Atomic Safety and Licensing Board's initial decision. In addition, outside of the adjudicatory proceeding, they will complete a supervisory examination of those issues contested in the proceeding to consider whether any significant basis exists for doubting that the facility will be constructed or operated with adequate protection of the public health and safety. The commissioners will also review any issues about which NRC staff must make appropriate findings prior to the authorization of construction, even if they were not contested in the proceeding.

Until DOE submits a license application, however, NRC's role involves providing regulatory guidance; observing and gathering information on DOE activities related to repository design, performance assessment, and environmental studies; and verifying site characterization activities. These prelicensing activities are intended to identify and resolve potential licensing issues early to help ensure that years of scientific work are not found to be inadequate for licensing purposes. DOE and NRC have interacted on the repository since 1983. In 1998, they entered into a prelicensing interaction agreement that provides for technical and management meetings, data and document reviews, and the prompt exchange of information between NRC's on-site representatives and DOE project personnel. Consistent with this prelicensing interaction agreement and NRC's regulations, NRC staff observe and review activities at the site and other scientific work as they are performed to allow early identification of potential licensing issues for timely resolution at the staff level.

EPA also has a role in the licensing process: setting radiation exposure standards for the public outside the Yucca Mountain site. In 2001, EPA set standards, which are required by law to be consistent with recommendations of the National Academy of Sciences, for protecting the public from inadvertent releases of radioactive materials from wastes stored at Yucca Mountain. In July 2004, the U.S. Court of Appeals for the District of Columbia Circuit ruled that EPA's standards were not consistent with the National Academy of Sciences' recommendations. In response, EPA proposed a revised rule in August 2005. The director of EPA's Office of Air and Radiation Safety told us that EPA plans to finalize its rule this year. In addition, NRC must develop exposure limits that are compatible with EPA's rule. NRC published a proposed rule, which it states is compatible with EPA's rule, and received public comments in 2005, but it has not yet finalized the rule. If EPA's rule does not change significantly in response to public comments, NRC's rule would not require major revisions either and could be finalized within months. However, if EPA's final rule has major changes, it could require major changes to NRC's rule, which could take more than a year to redraft, seek and incorporate public comments, and finalize, according to NRC officials.

In July 2006, DOE announced its intent to file a license application with NRC no later than June 30, 2008. OCRWM's director set the June 30, 2008, goal to jump-start what he viewed as a stalled project. OCRWM's director told us that he consulted with DOE and contractor project managers to get a reasonable estimate of an achievable date for submitting the license application and asked OCRWM managers to develop a plan and schedule for meeting the June 30, 2008, goal.
OCRWM’s director believes this schedule is achievable, noting that DOE had already performed a significant amount of work toward developing a license application. Specifically, DOE completed a draft license application in September 2005, but opted not to file it with NRC to allow more time to address the USGS e-mail issue, revise the repository’s design to simplify the project and improve its safety and operation, and consider revising its technical documents in response to the possibility that EPA would revise the radiation standards for the proposed repository. Table 1 shows the project’s major milestones. DOE did not consult with external stakeholders in developing this schedule because there was no legal or regulatory requirement or compelling management reason to do so, according to senior OCRWM officials. However, these officials noted that the NRC review process includes extensive public hearings on the application, which will provide stakeholders with an opportunity to comment on and challenge the substance of the application. In addition, regarding other aspects of the program, senior OCRWM officials noted that they have often consulted with external stakeholders, including city and county governments near the proposed repository site, NRC, USGS, and nuclear power companies. OCRWM has also consulted with Nevada, the U.S. Department of the Navy, and other DOE offices. For example, in developing its standards for the canisters that will be used to store, transport, and place the waste in the repository, DOE consulted with the Navy and the nuclear power plant operators that generate the nuclear waste and will use the proposed canisters. In addition, DOE has worked with the local city and county governments near the repository to develop the plans for transporting the waste to the proposed repository. OCRWM’s director has made the submission of the license application by June 30, 2008, the project’s top strategic objective and management priority. Accordingly, each OCWRM office has created business plans detailing how its work will support this objective. Furthermore, DOE has developed a license application management plan that incorporates the lessons learned from previous license application preparation efforts and works to ensure that the license application meets all DOE and NRC statutory, regulatory, and quality requirements. The plan establishes a process whereby teams assess the statutory and regulatory requirements for the license application, identify any gaps and inadequacies in the existing drafts of the license application, and draft or revise these sections. Since the license application is expected to be thousands of pages long, the plan divides the license application into 71 subsections, each with a team assigned specific roles and responsibilities, such as for drafting a particular subsection or approving a particular stage of the draft. Finally, the plan also creates new project management controls to provide oversight of this process and manage risks. For example, the plan details how issues that may pose risks to the schedule or quality of the license application should be noted, analyzed, and resolved, and how the remaining issues should be elevated to successively higher levels of management. 
NRC officials believe it is likely that DOE will submit a license application by June 30, 2008, but will not speculate about its quality due to a long-standing practice to maintain an objective and neutral position toward proposed license applications until they are filed with NRC. According to NRC officials, NRC's ability to review an application in a timely manner is contingent on the application being high quality, which NRC officials define as being complete and accurate, including traceable and transparent data that adequately support the technical positions presented in the license application. NRC has expressed concern about the lack of a rigorous quality assurance program and the reliability of USGS scientific work that DOE had certified before the USGS e-mails were discovered. Based on its prelicensing review, NRC recognizes that DOE is addressing problems with its quality assurance program and, by developing a new water infiltration model, is restoring confidence in the reliability of its scientific work.

When the Nuclear Waste Policy Act of 1982 gave NRC responsibility for licensing the nuclear waste repository, NRC staff began engaging in prelicensing activities aimed at gathering information from DOE and providing guidance so that DOE would be prepared to meet NRC's statutory and regulatory requirements and NRC would be prepared to review the license application. NRC issued high-level waste disposal regulations containing criteria for approving the application and publicly available internal guidance detailing the steps and activities NRC will perform to review the application. NRC also established a site office at OCRWM's Las Vegas, Nevada, offices to act as NRC's point of contact and to facilitate prompt information exchanges. NRC officials noted that they have also been working for several years to communicate NRC's expectations for a high-quality license application.

Although NRC has no formal oversight role in the Yucca Mountain project until DOE files a license application, NRC staff observe DOE audits of its quality assurance activities to identify potential issues and problems that may affect licensing. The NRC staff then report their findings in quarterly reports that summarize their work and detail any problems or issues they identify. For example, after observing a DOE quality assurance audit at the Lawrence Livermore National Laboratory in August 2005, NRC staff expressed concern that humidity gauges used in scientific experiments at the project were not properly calibrated—an apparent violation of quality assurance requirements. Due in part to concerns that quality assurance requirements had not been followed, BSC issued a February 7, 2006, stop-work order affecting this scientific work. In June 2007, OCRWM project managers told us that because quality assurance rules were not followed, DOE could not use this scientific work to support the license application.

To facilitate prelicensing interactions, NRC and DOE developed a formal process in 1998 for identifying and documenting technical issues and information needs. As shown in table 2, issues were grouped into nine key technical issues focused mainly on postclosure performance of the geologic repository. Within this framework, NRC and DOE defined 293 agreements in a series of technical exchange meetings. An agreement is considered closed when NRC staff determines that DOE has provided the requested information. Agreements are formally closed in public correspondence or at public technical exchanges.
As of June 2007, DOE has responded to all 293 of the agreements. NRC considers 260 of these to be closed. NRC considers 8 of the remaining 33 agreements to be potentially affected by the USGS e-mail issue that emerged in 2005. Their resolution will be addressed after NRC examines the new water infiltration analysis. NRC considers that the remaining 25 have been addressed but still need additional information. DOE has indicated that it does not plan any further responses on these agreements, and that the information will be provided in the June 2008 license application. NRC determined that adding agreements to the original 293 was not an efficient means to continue issue resolution during prelicensing, given DOE's stated intent to submit its license application, first in 2004, and now in 2008. NRC is now using public correspondence, as well as public technical exchanges and management meetings, to communicate outstanding and emerging technical issues. For example, NRC's September 2006 correspondence provided input on DOE's proposed approach for estimating seismic events during the postclosure period and requested further interactions on the topic. Also, since May 2006, NRC and DOE have conducted a series of technical exchanges to discuss such topics as DOE's total system performance assessment model, the seismic design of buildings, and other DOE design changes. Other interactions are planned to ensure that NRC has sufficient information to conduct its prelicensing responsibilities.

DOE is implementing the recommendations and addressing the challenges identified in our March 2006 report, but it is unclear whether the department's actions will prevent similar problems from recurring. Specifically, in response to our recommendations that DOE improve its management tools, DOE has eliminated the one-page summary (or panel) of performance indicators and has revised its trend evaluation reports. DOE is supplementing these changes with more rigorous senior management meetings that track program performance to better ensure that new problems are identified and resolved. DOE has also begun addressing additional management challenges by independently reworking USGS's water infiltration analysis, fixing problems with a design and engineering process known as requirements management, and reducing the high turnover rate and large number of acting managers in key project management positions.

Our March 2006 report found that two of the project's management tools—the panel of performance indicators and the trend evaluation reports—were ineffective in helping DOE management to monitor progress toward meeting performance goals, detecting new quality assurance problems, and directing management attention where needed. In response, DOE has stopped using its panel of performance indicators and replaced it with monthly program review meetings—chaired by OCRWM's director and attended by top-level OCRWM, BSC, Sandia, and USGS managers—that review the progress of four main OCRWM projects: (1) the drafting of the license application; (2) the effort to select and load documents and records into NRC's Licensing Support Network; (3) work supplementing DOE's environmental impact statement to reflect the October 2005 changes in repository design, which shift from direct handling of waste to the use of canisters; and (4) the development of a system to transport waste from where it is generated, mainly nuclear power plants, to the repository.
In addition, DOE has developed the following four new, high-level performance indicators that it evaluates and discusses at its monthly program review meetings: safety, including injuries and lost workdays due to accidents at the project; quality, including efforts to improve OCRWM's corrective action program, which works to detect and resolve problems at the project, and the performance of the quality assurance program; cost, including actual versus budgeted costs, staffing levels, and efforts to recruit new employees; and culture, including the project's safety-conscious work environment program, which works to ensure that employees are encouraged to raise safety concerns to their managers or to NRC without fear of retaliation and that employees' concerns are resolved in a timely and appropriate manner according to their importance. Although DOE plans to develop additional performance indicators, these four simplified indicators have replaced about 250 performance indicators on the previous performance indicator panel. According to a cognizant DOE official, the previous performance indicator panel was ineffective, in part, because it focused on what could be measured, as opposed to what should be measured, resulting in DOE focusing its efforts on developing the performance indicator panel instead of determining how to use this information as a management tool. The monthly program review and the new performance indicators are designed to be more useful to OCRWM management by being simpler and more focused on the key mission activities.

DOE has also revised its trend evaluation reports and created new organizational structures and procedures that detail the processes and steps for detecting and analyzing trends and preparing trend evaluation reports for senior management review. DOE has appointed a trend program manager and established a work group to oversee these processes. Furthermore, as we recommended, the new trend program has an increased focus on the significance of the monitored condition by synthesizing trends projectwide instead of separating OCRWM's and BSC's trend evaluation reports. To improve the utility of trend evaluation reports as a management tool, the procedures now identify the following three types of trends and criteria for evaluating them: Adverse trends are (1) repeated problems that involve similar tasks or have similar causes and are determined by management to be significant or critical to the success of the project; (2) repeated problems that are less significant but collectively indicate a failure of the quality assurance program, may be precursors to a more significant problem, or pose a safety problem; and (3) patterns of problems that management determines warrant further analysis and actions to prevent their recurrence. Emerging trends are problems that do not meet the criteria for an adverse trend, but require actions to ensure that they do not evolve into an adverse trend. Monitored trends are fluctuations in the conditions being monitored that OCRWM management determines do not warrant action, but each fluctuation needs close monitoring to ensure that it does not evolve into an emerging or adverse trend.

DOE has also implemented changes to its corrective action program—the program that provides the data that are analyzed in the trend evaluation program. The corrective action program is the broader system for recognizing problems and tracking their resolution.
It is one of the key elements of the project’s quality assurance framework and has been an area of interest to NRC in its prelicensing activities. The corrective action program consists of a computer system that project employees can use to enter information about a problem they have identified and create a record, known as a condition report, and a set of procedures for evaluating the condition reports and ensuring these problems are resolved. Regarding our broader conclusions that the OCRWM quality assurance program needed more management attention, in spring 2006, DOE requested a team of external quality assurance experts to review the performance of the quality assurance program. The experts concluded that 8 of the 10 topics they studied—including the corrective action program— had not been effectively implemented. Specifically, the team found that the corrective action program did not ensure that problems were either quickly or effectively resolved. Furthermore, a follow-up internal DOE study, called a root cause analysis report, concluded that the corrective action program was ineffective primarily because senior management had failed to recognize the significance of repeated internal and external reviews and did not aggressively act to correct identified problems and ensure program effectiveness. In response, DOE has revised the corrective action program in an effort to change organizational behaviors and provide increased management attention. For example, DOE has restructured the condition screening team, which previously had poor internal communication and adversarial relationships among its members, according to a senior project manager. Similarly, a December 2006 external review of the quality assurance program found that OCRWM staff had focused its efforts on trying to downgrade the significance of condition reports to deflect individual and departmental responsibility, rather than ensuring that the underlying causes and problems were addressed. In response, DOE (1) reorganized the condition screening team to reduce the size of the team but include more senior managers; (2) identified roles, responsibilities, and management expectations for the team, including expectations for collaborating and communicating; and (3) formalized processes and criteria for screening and reviewing condition reports. The condition screening team now assigns one of four significance levels to each new condition report and assigns a manager who is responsible for investigating the problem. In addition, DOE has restructured the management review committee, which oversees the corrective action program and the condition screening team. The management review committee is charged with, among other things, reviewing the actions of the condition screening team, particularly regarding the condition reports identified as having the highest two levels of significance. The management review committee also reviews draft root cause analysis reports, and any condition reports that could affect the license application. Whereas these functions were previously performed by BSC, the management review committee is now sponsored by OCRWM’s deputy director and includes senior DOE, BSC, and Sandia managers. DOE has also created written policies to clarify the roles, responsibilities, and expectations of the management review committee. 
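Read as a workflow, the revised screening process amounts to a simple routing rule: each new condition report receives one of four significance levels and a responsible manager, and the two highest levels also go to the management review committee. The sketch below illustrates only that routing logic; the level names, field names, and example report are hypothetical rather than taken from DOE's procedures.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional


class Significance(IntEnum):
    """Four significance levels; 1 is the most significant (the names here are invented)."""
    LEVEL_1 = 1
    LEVEL_2 = 2
    LEVEL_3 = 3
    LEVEL_4 = 4


@dataclass
class ConditionReport:
    description: str
    significance: Optional[Significance] = None
    responsible_manager: Optional[str] = None


def screen(report: ConditionReport, level: Significance, manager: str) -> bool:
    """Assign a significance level and an investigating manager to a new report.

    Returns True when the report must also go to the management review
    committee, which reviews the two highest significance levels.
    """
    report.significance = level
    report.responsible_manager = manager
    return level <= Significance.LEVEL_2


report = ConditionReport("Incomplete inspection records (hypothetical example)")
if screen(report, Significance.LEVEL_2, "responsible line manager"):
    print("Escalate to the management review committee")
```

The sketch captures only the routing; the substance of the changes lies in the new roles, criteria, and senior-level membership described above.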
The goal of these changes is to refocus management attention—with OCRWM's deputy director serving as a champion for the corrective action program—and ensure that problems are resolved in a timely and efficient manner.

DOE has addressed to varying degrees three other management challenges identified in our March 2006 report: (1) restoring confidence in USGS's scientific documents; (2) resolving problems with a design and engineering process known as requirements management; and (3) managing a changing and complex program, particularly given the high turnover in key management positions. Specifically:

USGS e-mail issue. DOE has taken three actions to address concerns about the reliability of USGS's scientific work after a series of e-mails implied that some USGS employees had falsified scientific and quality assurance documents and disdained DOE's quality assurance processes. Specifically, DOE (1) evaluated USGS's scientific work; (2) directed Sandia to independently develop a new water infiltration model to compare with USGS's model and reconstruct USGS's technical documents; and (3) completed a root cause analysis, including a physical review of more than 50,000 e-mails and keyword searches of nearly 1 million other e-mails sampled from more than 14 million e-mails. DOE's evaluation of USGS's scientific work concluded that there was no evidence that the USGS employees falsified or modified information. DOE's root cause analysis team concluded that there was no apparent widespread or pervasive pattern across OCRWM of a negative attitude toward quality assurance or willful noncompliance with quality assurance requirements. However, the analysis found that OCRWM's senior management had failed to hold USGS personnel accountable for the quality of the scientific work, fully implement quality assurance requirements, and effectively implement the corrective action program. These internal studies and reports and Sandia's independent development of a new water infiltration model are intended to restore public confidence in the water infiltration modeling work in the license application.

Problems with design control and the requirements management process. DOE has revised its design control and requirements management processes to address the problems that our March 2006 report identified. In addition, to gauge the effectiveness of these changes, DOE conducted an internal study called a readiness review, in which it determined that the changes in the processes were sufficient and that BSC was prepared to resume design and engineering work. Subsequently, in January 2007, DOE's independent assessment of BSC and the requirements management process concluded that the processes and controls were adequate and provided a general basic direction for the design control process. DOE has also contracted with Longenecker and Associates to review the project's engineering processes, with the final report due in the summer of 2007.

Management turnover. DOE has worked to fill and retain personnel in key management positions that had been vacant for extended periods of time, most notably the director of quality assurance and the OCRWM project director. In addition, as part of an effort to change the organizational culture, OCRWM's director has created a team to evaluate how to improve succession planning and identify gaps in the skills or staffing levels in OCRWM. However, DOE continues to lose key project managers, most recently with the departure of OCRWM's deputy director.
Furthermore, additional turnover is possible after the 2008 presidential election, when the incoming administration is likely to replace OCRWM's director. Historically, new directors have tended to have different management priorities and have implemented changes to the organizational structure and policies. To address this concern, OCRWM's director suggested legislatively changing the director position by making it a long-term appointment to reflect the long-term nature of the Yucca Mountain project.

The OCRWM director's schedule for filing a repository license application with NRC by June 30, 2008, will require a concerted effort by project personnel. However, given the waste repository's history since its inception in 1983, including two prior failed efforts to file a license application, it is unclear whether DOE's license application will be of sufficient quality to enable NRC to conduct a timely review of the supporting models and data within the statutory time frames.

DOE has taken several important actions to change the organizational culture of the Yucca Mountain project since the issuance of our March 2006 report. These actions appear, for example, to be invigorating the quality assurance program by focusing management attention on resolving problems and improving quality. However, for a variety of reasons, it has yet to be seen whether DOE's actions will prevent the kinds of problems our March 2006 report identified from recurring or other challenges from developing. First, some of DOE's efforts, such as its efforts to reduce staff turnover, are in preliminary or planning stages and have not been fully implemented. Therefore, their effectiveness cannot yet be determined. Second, improving the quality assurance program will also require changes in the organizational behaviors of OCRWM's staff and contractors. OCRWM's director told us that these types of cultural changes can be particularly difficult and take a long time to implement. Consequently, it may be years before OCRWM fully realizes the benefits of these efforts. Finally, as we have previously reported, DOE has a long history of quality assurance problems and has experienced repeated difficulties in resolving these problems.

We provided DOE and NRC with a draft of this report for their review and comment. In their written responses, both DOE and NRC agreed with our report. (See apps. I and II.) In addition, both DOE and NRC provided comments to improve the draft report's technical accuracy, which we have incorporated as appropriate.

To examine the development of DOE's license application schedule, we reviewed DOE documents related to the announcement and creation of the license application. We also reviewed the DOE management plan for creating the license application and other internal reports on the progress in drafting the application. We interviewed OCRWM's director and other OCRWM senior management officials in DOE headquarters and its Las Vegas project office about the process for creating the schedule, including consultations with stakeholders. In addition, we observed meetings covering topics related to the license application schedule between DOE and NRC, the Advisory Committee on Nuclear Waste and Materials, and the Nuclear Waste Technical Review Board. These meetings were held in Rockville, Maryland; Las Vegas, Nevada; and Arlington, Virginia.
To obtain NRC's assessment of DOE's readiness to file a high-quality license application, we obtained NRC documents, such as the status of key technical issues and briefing slides on NRC's technical exchanges with DOE. We also attended NRC staff briefings for the Commission's Advisory Committee on Nuclear Waste and Materials, including a briefing on NRC's prelicensing activities; reviewed meeting transcripts; and observed an NRC-DOE quarterly meeting and recorded NRC's comments. In addition, we interviewed NRC's project manager who is responsible for reviewing the postclosure portion of a license application, NRC's on-site representative at the Las Vegas office, and other NRC regional officials. Furthermore, we interviewed the director of EPA's Office of Air and Radiation Safety regarding the status of EPA's rulemaking to set radiation exposure standards for the public outside the Yucca Mountain site.

To determine DOE's progress in implementing the recommendations and resolving the additional challenges identified in our March 2006 report, we reviewed prior GAO reports that assessed DOE's quality assurance process and relevant DOE corrective action reports, root cause analyses, and other internal reviews that analyzed DOE's efforts to improve its management tools and its corrective action program in general. We also reviewed related NRC documents, such as observation audit reports. We observed NRC and DOE management meetings and technical exchanges in Rockville, Maryland, and Las Vegas, Nevada, that covered related issues. We also interviewed OCRWM's director in DOE headquarters and senior managers at the Yucca Mountain project office in Las Vegas about their efforts to address our recommendations. Regarding the quality assurance challenges noted in our prior report, we reviewed a January 2007 GAO report discussing the USGS issue and reviewed DOE documents detailing the department's actions to restore confidence in the scientific documents. We reviewed internal DOE documents regarding requirements management and interviewed the program's chief engineer in charge of resolving this issue. Finally, regarding staff turnover in key management positions, we reviewed OCRWM's strategic objectives, business plan, and project documents and interviewed OCRWM's director and other senior project managers about their efforts to improve succession planning.

As agreed with your office, unless you publicly announce the contents of this report, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Energy, the Chairman of the Nuclear Regulatory Commission, the director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-3841 or gaffiganm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Richard Cheston, Casey Brown, Omari Norman, Alison O'Neill, and Daniel Semick.
Nuclear power reactors generate highly radioactive waste. To permanently store this waste, the Department of Energy (DOE) has been working to submit a license application to the Nuclear Regulatory Commission (NRC) for a nuclear waste repository at Yucca Mountain about 100 miles from Las Vegas, Nevada. Although the project has been beset with delays, in part because of persistent problems with its quality assurance program, DOE stated in July 2006 that it will submit a license application with NRC by June 30, 2008. NRC states that a high-quality application needs to be complete, technically adequate, transparent by clearly justifying underlying assumptions, and traceable back to original source materials. GAO examined (1) DOE's development of its schedule for submitting a license application and the stakeholders with whom it consulted, (2) NRC's assessment of DOE's readiness to submit a high-quality application, and (3) DOE's progress in addressing quality assurance recommendations and challenges identified in GAO's March 2006 report. GAO reviewed DOE's management plan for creating the license application, reviewed correspondence and attended prelicensing meetings between DOE and NRC, and interviewed DOE managers and NRC on-site representatives for the Yucca Mountain project. In commenting on a draft of the report, both DOE and NRC agreed with the report. The director of DOE's Office of Civilian Radioactive Waste Management set the June 30, 2008, date for filing the license application with NRC in consultation with the DOE and contractor managers for the Yucca Mountain project. DOE officials told us that external stakeholders were not consulted because there was neither a legal requirement nor a compelling management reason to do so. According to the director, the June 2008 schedule is achievable because DOE has already completed a large amount of work, including the completion of a draft license application in 2005 that DOE decided not to submit to NRC. NRC officials believe it is likely that DOE will submit a license application by June 30, 2008, but until NRC receives the application, officials will not speculate about whether it will be high quality. NRC has not seen a draft of the license application, and NRC's long-standing practice is to maintain an objective and neutral position toward a future application until it is filed. To help ensure that DOE understands its expectations, NRC has, among other things, held periodic prelicensing management and technical meetings with DOE. DOE has made progress in resolving the quality assurance recommendations and challenges identified in GAO's March 2006 report. For example, DOE has replaced the one-page summary of performance indicators that GAO had determined was ineffective with more frequent and rigorous project management meetings. DOE has addressed the management challenges GAO identified to varying degrees. For example, regarding management continuity, DOE has worked to fill and retain personnel in key management positions, such as the director of quality assurance. However, for various reasons--including the long history of recurring problems and likely project leadership changes in January 2009 when the current administration leaves office--it is unclear whether DOE's actions will prevent these problems from recurring.
BLM, an agency of the U.S. Department of the Interior, manages 261 million surface acres and an additional 700 million acres of subsurface mineral estate throughout the nation. The agency’s mission is to maintain the health, diversity, and productivity of public lands for the use and enjoyment of present and future generations. To carry out this mission, BLM employs a workforce of about 11,000 employees located in headquarters in Washington, D.C., 12 state offices, 130 field offices, and national centers specializing in training, fire management support, science and technology, human resources management, information resources management, and business services. BLM collects, analyzes, and records a tremendous amount of business information about the public lands and resources, ranging from land title to recreational usage to wildlife habitat. These data are mainly geographic in character and are best understood when displayed and analyzed in spatial form using automated geographic information systems. Numerous parties—including public land users; educational institutions; public interest groups; other federal, state, tribal, and local agencies; and the scientific community—use these data or information to make thousands of business decisions each year. The central focus of BLM’s IT strategy is to develop integrated systems that help BLM meet national and local needs in managing the lands and natural resources, while supporting the mission and goals outlined in BLM’s Strategic Plan. For example, BLM uses its Automated Fluid Minerals Support System to support its Oil and Gas Program. The system helps match mineral estate information to information on existing wells, facilities, permits, and inspections and provides an automated system for granting permits to BLM customers and creating reports. Currently, BLM issues approximately 41,000 permits and reports and conducts about 19,000 inspections per year. BLM also uses its Management Information System, which provides a Web-enabled, business information, budgetary, financial, and program performance system so that simple data analysis can be performed benefiting the entire bureau. This system provides managers with up-to-date business data to make cost effective decisions concerning the management of BLM’s people, resources, and natural resources for several of Interior’s mission goals. BLM’s estimated IT expenditures are $146.45 million for fiscal year 2003. Between 1995 and 2001, we conducted reviews and issued several reports on problems and risks that threatened the successful development and deployment of BLM’s modernization of its Automated Land and Mineral Record System. In a recent report addressing this issue, we noted, among other things, that after 15 years and about $411 million obligated, the project was terminated because the Initial Operating Capability module—a major component of the system—did not meet BLM’s business needs and therefore could not be deployed. We also reported that the absence of adequate investment management processes and practices at BLM was a significant factor contributing to the failure of the system. Accordingly, we recommended that the Secretary of the Interior direct BLM to take certain actions to help it strengthen its investment management process. BLM has been working on improving its process since that time. 
BLM has assigned several individuals and groups responsibility for managing national IT investments, that is, investments that, among other things, are considered major applications or general support systems; have a life-cycle value greater than $500,000; or will affect multiple states, centers, or business areas. These individuals and groups and their roles are described below.

National Information Technology Investment Board (ITIB)—Chaired by BLM's Deputy Director of Operations, this board is responsible for selecting, controlling, and evaluating all national IT investments. Members include the CIO, Chief Financial Officer, Assistant Directors from the business units, two State Directors, an Associate State Director, the CIO Council Chair, the Bureau Architecture Chair, a Fire and Aviation Portfolio Representative, and several ex officio members including Interior's CIO, the bureau Architect, and managers from the System Coordination Office and the Investment Management Group.

System Coordination Office—Created in June 2000 to support a number of IT management functions, the System Coordination Office, among other things, is responsible for coordinating the screening of all IT investments and projects to ensure that they are in line with the bureau's selection, control, and evaluation criteria and for monitoring project performance (scope, schedule, and budget). The office is also responsible for coordinating the development of a project management curriculum and mentoring and developing a cadre of trained and experienced project managers.

Investment Management Group—Responsibilities of this group include coordinating the development and maintenance of the bureau's IT investment portfolio, ensuring that all investments fit within budget constraints, and providing investment updates and forecasting as needed.

Information Technology Portfolio Management Council—Chartered in June 2003, but established about a year ago, this council serves as an advisory body to the ITIB and is responsible, among other things, for applying business-related rating and ranking criteria to BLM's portfolio, performing trade-off analyses, and working with the Investment Management Group to develop funding strategies. The council is also responsible for ensuring that investments are clearly tied to the mission and strategic plans (both business and information resources management) of the bureau and selected by a consistent, repeatable, objective process. Members include national IT portfolio managers for each of the directorates, representatives from the state portfolios and the Bureau Enterprise Architecture Team, and members from the System Coordination Office and the Investment Management Group.

Bureau Enterprise Architecture Team—Responsible for ensuring that investment proposals and business cases are aligned with the bureau enterprise architecture's business processes, data, applications, and technology components.

Project proponent—Responsible, among other things, for leading the development of the investment proposal, coordinating and championing the development of the business case, and working with the project manager throughout the life cycle of the project.

Project sponsor—A field, center, or Washington office manager who authorizes the development of a business case. The project sponsor shifts roles to become the system owner when the project moves into operations and maintenance.
The project sponsor is responsible for selecting a project manager, approving all project documentation, and participating in a management oversight role throughout the planning, design, development, testing, acceptance, and deployment of the project.

Project manager—Responsible for developing the project plan and leading and managing the project. The project manager reports directly to the project sponsor. Ultimately, it is the project managers who are responsible for successfully managing and completing one or more projects approved by the ITIB.

The bureau has also defined a three-phase IT investment management process, which involves selecting proposed IT projects (select phase), controlling ongoing projects through development (control phase), and evaluating projects that have been deployed (evaluate phase). Each phase comprises multiple stages that have entrance and exit criteria defined in the IT Investment Management Process guide that must be satisfied before a project can move from one stage to the next stage or phase in the process. The System Coordination Office tracks projects' progress through the various stages, ensuring that they comply with the processes defined in the guide. The national ITIB stays abreast of projects' performance through quarterly report reviews. The board is also directly involved in key milestone (i.e., stage) reviews.

The purpose of the select phase is to ensure that BLM chooses the IT projects that best support its mission and align with the bureau's architecture. During this phase, the project proponent and portfolio manager are expected to collaborate to develop an investment proposal. The System Coordination Office is responsible for reviewing the proposal and ensuring that issues are identified and resolved. Finally, the ITIB is to review the proposal and either approve it, approve it with stipulations, return it for further analysis, or reject it. If the ITIB approves the proposal, the project manager and project proponent are to work to develop a more elaborate business case. The System Coordination Office reviews the business case and coordinates the reviews performed by other groups (e.g., the Bureau Enterprise Architecture Team). The Office then makes recommendations for approval to the ITIB on the basis of these reviews. At the end of the select phase, a project plan is to be developed that defines the strategies for managing the project. According to BLM officials, to date the ITIB has placed more emphasis on this phase than on the other two.

Once selected for inclusion in the bureau's IT portfolio, each project is to be managed by a trained or experienced IT project manager and monitored by the System Coordination Office and ITIB on a quarterly basis throughout its life cycle. Included within the project's plan, which is developed at the end of the select phase, are milestones for architecture, technical, and project management reviews. Factors such as project risk, complexity, and cost determine the scope and frequency of each of these milestone reviews. Projects that fall short of meeting their predetermined budget, schedule, or scope requirements are to be reviewed by the ITIB, which works with the project managers to develop an appropriate course of action. In such cases, the ITIB must decide whether to continue the project; rebaseline the scope, schedule, or budget; or terminate the project. Ultimately, all decisions that are carried out are a result of the ITIB voting process.
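The control-phase reviews described above reduce to comparing each project's actual quarterly cost and schedule performance against the baseline in its project plan and escalating significant variances to the board. The following Python sketch illustrates that comparison; the 10 percent threshold, the data fields, and the example figures are illustrative assumptions, not BLM's actual criteria.

```python
from dataclasses import dataclass
from enum import Enum


class BoardDecision(Enum):
    CONTINUE = "continue the project"
    REBASELINE = "rebaseline scope, schedule, or budget"
    TERMINATE = "terminate the project"


@dataclass
class QuarterlyStatus:
    """Quarterly performance data for one project (hypothetical fields)."""
    project: str
    baseline_cost: float
    actual_cost: float
    planned_percent_complete: float
    actual_percent_complete: float


def needs_board_review(status: QuarterlyStatus, threshold: float = 0.10) -> bool:
    """Flag a project whose cost or schedule variance exceeds the threshold."""
    cost_variance = (status.actual_cost - status.baseline_cost) / status.baseline_cost
    schedule_variance = status.planned_percent_complete - status.actual_percent_complete
    return cost_variance > threshold or schedule_variance > threshold


status = QuarterlyStatus("Example project", 1_000_000, 1_180_000, 0.50, 0.42)
if needs_board_review(status):
    # In BLM's process the board itself, by vote, chooses among these options.
    print(status.project, "->", BoardDecision.REBASELINE.value)
```

Whatever the thresholds, the essential point is that the variance check only flags a project; under BLM's process the ITIB, by vote, decides whether to continue, rebaseline, or terminate it.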
Once a project has been fully implemented and accepted by the users and system owner, the System Coordination Office and ITIB are responsible for monitoring its schedule and budget quarterly. BLM has also, on a limited basis, begun performing postimplementation reviews—BLM refers to these as postdeployment reviews— in which a project’s actual results are to be evaluated against expected results to compare realized to estimated benefits and assess the project’s impact on mission performance. Necessary changes or modifications to the project are to be identified, and technical compliance with the bureau enterprise architecture is also to be assessed. The main objective of the postimplementation review is to derive lessons learned, which may lead to investment management process improvements and opportunities for improving business processes (which in turn provide input into the select phase). To date, BLM has performed postimplementation reviews for two systems. Figure 1 illustrates BLM’s IT investment management process phases and stages. The highlighted stages represent those for which the ITIB must make an approval decision before a project can move forward. On the basis of research into the IT investment management practices of leading private- and public-sector organizations, we have developed an information technology investment management (ITIM) maturity framework. This framework identifies critical processes for successful IT investments organized into a framework of five increasingly mature stages. The ITIM is intended to be used as both a management tool for implementing these processes incrementally and an evaluation tool for determining an organization’s current level of maturity. The overriding purpose of the framework is to encourage investment processes that increase business value and mission performance, reduce risk, and increase accountability and transparency in the decision process. This framework has been used in several GAO evaluations and adopted by a number of agencies. These agencies have used ITIM for purposes ranging from self-assessment to redesign of their IT investment management processes. ITIM is a hierarchical model comprising five “maturity stages.” These maturity stages represent steps toward achieving stable and mature processes for managing IT investments. Each stage builds upon the lower stages; the successful achievement of each stage leads to improvement in the organization’s ability to manage its investments. With the exception of the first stage, each maturity stage is composed of “critical processes” that must be implemented and institutionalized for the organization to achieve that stage. These critical processes are further broken down into key practices that describe the types of activities that an organization should be performing to successfully implement each critical process. An organization may be performing key practices from more than one maturity stage at one time. This is not unusual, but efforts to improve investment management capabilities should focus on becoming compliant with lower stage practices before addressing higher stage practices. Stage 2 in the ITIM framework encompasses building a sound investment management process—by developing the capability to control projects so that they finish predictably within established cost and schedule expectations—and establishing basic capabilities for selecting new IT projects. 
Stage 3 requires that an organization continually assess proposed and ongoing projects as parts of a complete investment portfolio: an integrated and competing set of investment options. This approach enables the organization to consider the relative costs, benefits, and risks of newly proposed investments along with those previously funded and to identify the optimal mix of IT investments to meet its mission, strategies, and goals. Stages 4 and 5 require the use of evaluation techniques to continuously improve both the investment portfolio and investment processes to better achieve strategic outcomes. Figure 2 shows the five maturity stages and the associated critical processes. As defined by the model, each critical process consists of “key practices” that must be executed to implement the critical process. In order to have the capabilities to effectively manage IT investments, an agency should (1) have basic, project-level control and selection practices in place (stage 2 capabilities) and (2) manage its projects as a portfolio of investments, treating them as an integrated package of competing investment options and pursuing those that best meet the strategic goals, objectives, and mission of the agency (stage 3 capabilities). In addition, an agency would be well served by implementing capabilities for improving its investment management process (stage 4 capabilities). BLM has executed the majority of the project-level control and selection practices. The bureau has also initiated efforts to manage its projects as a portfolio and performed two postimplementation reviews to learn lessons to improve its investment management process. When BLM implements all critical processes associated with building an investment foundation and managing its projects as a portfolio, the bureau will have greater confidence that it has selected the mix of projects that best supports its strategic goals and that the projects will be managed to successful completion. At ITIM stage 2 maturity, an organization has attained repeatable, successful IT project-level investment control processes and basic selection processes. Through these processes, the organization can identify expectation gaps early and take appropriate steps to address them. According to ITIM, critical processes at stage 2 include (1) defining investment review board operations, (2) collecting information about existing investments, (3) developing project-level investment control processes, (4) identifying the business needs for each IT project, and (5) developing a basic process for selecting new IT proposals. Table 1 discusses the purpose for each of the stage 2 critical processes. To its credit, BLM has put in place about 85 percent of the key practices associated with stage 2 critical processes. The bureau has satisfied all the key practices associated with establishing the governing boards responsible for managing IT investments and ensuring that IT projects support organizational needs and meet users’ needs. It has satisfied a majority of the key practices associated with proposal selection and IT project oversight and is working on incorporating the use of an IT project and system inventory into its IT investment management process. Table 2 summarizes the status of BLM’s critical processes for stage 2, showing how many associated key practices it has executed. The creation of decision-making bodies or boards is central to the IT investment management process. 
At the stage 2 level of maturity, organizations define one or more boards, provide resources to support their operations, and appoint members who have expertise in both operational and technical aspects of proposed investments. Resources provided to support the operations of IT investment boards typically include top management’s participation in creating the board(s) and defining their scope and formal evidence acknowledging management’s support for board decisions. The boards operate according to a written IT investment process guide tailored to the organization’s unique characteristics, thus ensuring that consistent and effective management practices are implemented across the organization. Once board members are selected, the organization ensures that they are knowledgeable about policies and procedures for managing investments. Organizations at the stage 2 level of maturity also take steps to ensure that executives and line managers support and carry out the decisions of the IT investment board. According to ITIM, an IT investment management process guide should be a key authoritative document that the organization uses to initiate and manage IT investment processes and should provide a comprehensive foundation for policies and procedures developed for all other related processes. (The complete list of key practices is provided in table 3.) BLM has executed all the key practices for this critical process. For example, in 1998, the bureau established an IT Investment Board (the ITIB) to manage national investments. With the development of the IT Investment Management Process guide in 2001, BLM provided the board and all involved parties (i.e., project managers and sponsors, portfolio managers, investment management group) with specifics concerning responsibilities and procedures. This guide is centered on a project’s life cycle and requisite decision points (phases and stages) in the investment process from the submission of a proposal to a postimplementation review. The board is also adequately resourced, with the main support being provided by the System Coordination Office, whose responsibilities include developing and modifying the bureau’s criteria for selecting, controlling, and evaluating potential and existing IT investments and documenting, recording, and transmitting decisions made by the board. Experienced senior-level officials from both business and IT areas are members of the board and exhibit the core competencies required by the investment management process. Finally, all actions by the board are well documented using meeting minutes and records of decision. In June 2003, an action-item tracking matrix was introduced. This matrix is used to identify and track ITIB-approved decisions and assigned responsibilities to ensure that the board’s decisions are carried out. By executing all key practices associated with creating and defining investment board operations, BLM has greater assurance that the ITIB will effectively carry out its responsibilities. Table 3 shows the rating for each key practice required to implement the critical process for establishing IT investment board operation at the stage 2 level of maturity. Each of the “Executed” ratings shown below represents instances where, based on the evidence provided by BLM officials, we concluded that the specific key practices were executed by the organization. 
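Because ITIM treats a maturity stage as attained only when every critical process at that stage, and every key practice within each process, is in place, the ratings in these tables can be rolled up mechanically. The sketch below shows that roll-up in schematic form; the process and practice names and the ratings are simplified illustrations, not a reproduction of GAO's actual tables.

```python
from typing import Dict

# True = key practice executed, False = not executed (illustrative ratings only)
CriticalProcess = Dict[str, bool]
Stage = Dict[str, CriticalProcess]

stage_2: Stage = {
    "investment board operation": {"board established": True, "process guide in place": True},
    "project and system identification": {"inventory exists": True, "inventory policies defined": False},
    "proposal selection": {"selection process used": True, "ranking criteria defined": False},
}


def process_implemented(practices: CriticalProcess) -> bool:
    """A critical process counts only when every one of its key practices is executed."""
    return all(practices.values())


def stage_attained(stage: Stage) -> bool:
    """A maturity stage is attained only when all of its critical processes are implemented."""
    return all(process_implemented(p) for p in stage.values())


executed = sum(rating for process in stage_2.values() for rating in process.values())
total = sum(len(process) for process in stage_2.values())
print(f"{executed} of {total} key practices executed; stage 2 attained: {stage_attained(stage_2)}")
```

Real ratings rest on documentary evidence rather than boolean flags, but the all-or-nothing roll-up is what makes partially executed critical processes stand out.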
An IT project and system inventory provides information to investment decision makers to help evaluate the impacts and opportunities created by proposed or continuing investments. This inventory (which can take many forms) should, at a minimum, identify the organization's IT projects (including new and existing systems) and a defined set of relevant investment management information about them (e.g., purpose, owner, life-cycle stage, budget cost, physical location, and interfaces with other systems). Information from the IT project and system inventory can, for example, help identify systems across the organization that provide similar functions and help avoid the commitment of additional funds for redundant systems and processes. It can also help determine more precise development and enhancement costs by informing decision makers and other managers of interdependencies among systems and how potential changes in one system can affect the performance of other systems. According to ITIM, effectively managing an IT project and system inventory requires, among other things, (1) identifying projects and systems, collecting relevant information about them, and capturing this information in a repository; (2) assigning responsibility for managing the inventory process and ensuring that the inventory meets the needs of the investment management process; (3) developing written policies and procedures for maintaining the project and system inventory; (4) making information from the inventory available to staff and managers throughout the organization so they can use it, for example, to build business cases and support activities to select and control projects; and (5) maintaining the inventory and its information records to contribute to future investment selections and assessments. (The full list of key practices is provided in table 4.)

BLM has executed three of the seven key practices for IT project and system identification. For example, the bureau is using its target application architecture and Budget Planning System to collect information on its IT projects and systems to make informed IT investment management decisions; according to CIO officials, the architecture is used for the information it contains on BLM's business processes and supporting data, applications, and technology, while the Budget Planning System is used for the financial information on the investments. Resources have been assigned to support activities related to identifying IT projects and systems, including the Bureau Enterprise Architecture Team and the system owners of the Budget Planning System. According to BLM, all national projects and systems are in both the target application architecture and Budget Planning System (although BLM officials told us that they have planned a meeting to determine whether additional requirements are needed for the Budget Planning System to effectively serve as an inventory for investment management purposes). Despite these strengths, policies and procedures for collecting project and system information in the Budget Planning System for investment management purposes have not yet been defined.
However, the CIO has directed teams composed of the System Coordination Office, portfolio managers, the Investment Management Group, and system owners of the Budget Planning System to "identify the ownership of each process associated with the IT project and system inventory." This step would form the basis for policies and procedures relating to the collection (and use) of information in the inventory. Until BLM defines these policies and procedures, it cannot adequately ensure that its inventory can be relied upon as an effective tool to assist in investment decision making. Table 4 shows the rating for each key practice required to implement the critical process for IT project and system identification at the stage 2 level of maturity and summarizes the evidence that supports these ratings.

Investment boards should effectively oversee IT projects throughout all life-cycle phases (concept, design, testing, implementation, and operations/maintenance). At the stage 2 level of maturity, investment boards should review each project's progress toward predefined cost and schedule expectations, using established criteria and performance measures, and take corrective actions to address cost and milestone variances. According to ITIM, effective project oversight requires, among other things, (1) having written policies and procedures for project management; (2) developing and maintaining an approved management plan for each IT project; (3) making up-to-date cost and schedule data for each project available to the oversight boards; (4) reviewing each project's performance by regularly comparing actual cost and schedule data with expectations; (5) ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved; and (6) having written policies and procedures for oversight of IT projects. (The complete list of key practices is provided in table 5.)

BLM has in place all but one of the key practices associated with effective project oversight. Project management policies and high-level procedures are defined in the IT Investment Management Process guide, associated memoranda, and in best practices guidance. In addition, project oversight policies and procedures are defined in the guide and associated memoranda, which, among other things, require the involvement and approval of the board at key stages in a project's life cycle. For example, according to the guide, the ITIB must review and approve investment proposals before they can be developed into business cases. Further, once a project has been approved, the ITIB reviews up-to-date cost, schedule, and scope information quarterly and analyzes this information against predetermined performance expectations. The board also determines corrective actions for projects that have not met performance expectations. We verified that cost, schedule, and scope information was submitted to the ITIB quarterly and analyzed against expectations and, when significant variances occurred, corrective actions were determined for the three projects we reviewed. This involved rebaselining the project plan based on schedule slippages or increased costs. In all cases, the ITIB was responsible for this decision. Notwithstanding these strengths, as discussed in the previous section, BLM's IT project and systems inventory has not yet been developed to the point where information is consistently collected and maintained to make informed investment management decisions.
This increases the risk that the ITIB will not have at its disposal reliable information for supporting project and portfolio investment decisions and oversight. Table 5 shows the rating for each key practice required to implement the critical process for project oversight at the stage 2 level of maturity and summarizes the evidence that supports these ratings. Defining business needs for each IT project helps ensure that projects support the organization’s mission goals and meet users’ needs. This critical process creates the link between the organization’s business objectives and its IT management strategy. According to ITIM, effectively identifying business needs requires, among other things, (1) defining the organization’s business needs or stated mission goals, (2) identifying users for each project who will participate in the project’s development and implementation, (3) training IT staff adequately in identifying business needs, and (4) defining business needs for each project. (The complete list of key practices is provided in table 6.) BLM has executed all the key practices for this critical process. The bureau’s IT Investment Management Process guide requires that business needs and associated users of each IT project be identified in the investment proposal and business case stages of the select phase. BLM also has detailed procedures for developing these two documents that call for identifying business needs and associated users. Resources for identifying business needs and associated users include the project sponsor, project proponent, and the System Coordination Office and detailed procedures and associated templates for developing investment proposals and business cases. Bureau specific business needs are defined in the Bureau of Land Management’s Strategic Plan for fiscal years 2000–2005, and projects are also often linked to the Department of the Interior’s strategic goals and goals of the President’s Management Agenda. In addition, individuals responsible for managing projects at BLM can adequately identify business needs; according to BLM, the bureau’s practice is to select staff from business units as project managers (instead of staff from the IT unit). Finally, according to BLM officials, all national applications are in the Budget Planning System and target enterprise architecture, which are repositories of investment information that BLM is planning on using to make informed investment management decisions. For the three projects we reviewed, business needs and associated users were identified in business case or project planning documents, and users were involved in project management throughout the life cycle of the project through, for example, chartered user-group meetings, structured walk-throughs, prerelease workshops, and conference calls. Because it is executing all key practices associated with business needs identification, BLM can have greater assurance that its projects will support business needs and meet users’ needs. Table 6 shows the rating for each key practice required to implement the critical process for business needs identification at the stage 2 level of maturity and summarizes the evidence that supports these ratings. Selecting new IT proposals requires an established and structured process to ensure informed decision making and infuse management accountability. 
According to ITIM, this critical process requires, among other things, (1) making funding decisions for new IT proposals according to an established process; (2) providing adequate resources to proposal selection activities; (3) using an established proposal selection process; (4) analyzing and ranking new IT proposals according to established selection criteria, including cost and schedule criteria; and (5) designating an official to manage the proposal selection process. (The complete list of key practices is provided in table 7.) BLM has executed five of the six key practices associated with proposal selection. For example, the IT Investment Management Process guide defines a multistage selection process (the select phase), including developing a business case and project plan that executives and project proponents follow. Resources for proposal selection activities include project proponents and the System Coordination Office. As previously noted, detailed procedures and a template have been defined for developing new IT proposals. In addition, according to BLM officials, funding decisions are made through the budget process, which BLM is working on to better integrate with the investment management process. Despite these strengths, the key practice associated with analyzing and prioritizing new proposals according to established criteria has not yet been executed because the various criteria for doing so have not yet been fully defined. The Bureau Enterprise Architecture Team has defined criteria it uses to determine projects’ compliance with the bureau enterprise architecture’s business processes, data, applications, and technology components. Another set of criteria is being developed to assess proposals for their business value. These criteria are intended to be used by the IT Portfolio Management Council to screen proposals before they are reviewed by the ITIB. The ITIB members believe that these criteria are key to analyzing new proposals and have charged the IT Portfolio Management Council with developing draft criteria. The council intends to finalize the criteria in time for their use in the spring of 2004 to evaluate proposals for the fiscal year 2006 budget. Until BLM finalizes its proposal selection criteria and uses them to analyze and prioritize proposals, the bureau will not be adequately assured that it is consistently and objectively selecting proposals that best meet the needs and priorities of the agency. Table 7 shows the rating for each key practice required to implement the critical process for proposal selection at the stage 2 level of maturity and summarizes the evidence that supports these ratings. Once an agency has attained stage 2 maturity, it needs to establish capabilities for managing its investments as a portfolio (stage 3). Such capabilities enable the agency to consider its investments comprehensively so that the collective investments optimally address its mission, strategic goals, and objectives. Stage 3 capabilities include (1) defining portfolio selection criteria, (2) engaging in project-level investment analysis, and (3) aligning the authority of IT investment boards. In addition, establishing higher level stage capabilities—for example performing postimplementation reviews—can help an agency improve its investment management process. BLM has initiated efforts to manage its investments as a portfolio. BLM has defined procedures for aligning the national ITIB with subordinate boards. 
Portfolio categories have been defined that correspond to BLM's organizational units (e.g., Minerals, Realty, and Resource Protection; Information Resource Management). Moreover, the board revised its charter in April 2003 to include portfolio management responsibilities (i.e., stage 3) and established an IT Portfolio Management Council to support it in carrying out these responsibilities.

At a higher level stage, BLM has begun to address postimplementation reviews, a critical process associated with the most capable organizations. BLM has begun performing postimplementation reviews on a limited basis to learn lessons for improving both project management and investment management processes. To date, the bureau has conducted two such reviews. Compared with the progress at stage 2, BLM's progress to date in defining practices for higher level maturity stages has been limited because, according to its officials, the ITIB first focused its resources on establishing the processes associated with building the IT investment management foundation. Full implementation of the critical processes associated with portfolio management will provide BLM with the capability to determine whether it is selecting the mix of projects that best meets the bureau's mission needs. Implementing critical processes at higher level stages will equip BLM with the capabilities it needs to improve its investment management processes.

We have previously reported that to effectively implement ITIM processes, agencies need to be guided by a plan that (1) is based on an assessment of strengths and weaknesses; (2) specifies measurable goals, objectives, and milestones; (3) specifies needed resources; and (4) assigns clear responsibility and accountability for accomplishing well-defined tasks. In addition, these plans should be approved by senior management. Although a plan was developed a few years ago to establish the practices currently in place, BLM does not have a plan to guide further improvement of its investment management process. An independent assessment of BLM's ITIM process relative to stage 2 of our IT investment management framework was completed in January 2003, but BLM has not yet used the results of this assessment to develop an improvement plan. According to the CIO, this is because BLM intends to develop a plan integrating improvements for IT investment management and other IT management areas, and the results of the comprehensive assessment to be used as a basis for this integrated plan were not received until June 2003. BLM officials recognize the importance of having a plan to guide their improvement efforts, however, and stated their commitment to developing one, although they do not have a specific time frame for doing so. Until BLM develops this plan, the bureau risks losing the momentum it has gained in implementing its ITIM process.

BLM has made good progress in defining and establishing its investment management process in the 2 years since we reported that the lack of such a process had largely contributed to the failure of a key program. By establishing most of the key practices associated with building the investment foundation, the bureau has strengthened its basic capabilities for selecting and controlling projects and positioned itself to develop the processes for managing its investments as a portfolio.
Critical to BLM's success going forward will be the development of an implementation plan—preferably integrated with implementation plans for improving other IT management areas—to (1) guide and establish accountability for executing the stage 2 key practices that we noted needed to be addressed and (2) proceed with efforts to define and implement stage 3 key practices. Without this plan, BLM risks not being able to sustain the progress made to date in establishing its investment management process.

To strengthen BLM's IT investment management capability and address the weaknesses discussed in this report, we recommend that the Secretary of the Department of the Interior direct the BLM Director to develop and implement a plan for improving its IT investment management process that is based on GAO's ITIM stage 2 and 3 critical processes. The plan should, at a minimum, provide for accomplishing the following:

• implementing the recently approved procedures for determining corrective actions for projects that have not met performance expectations;

• defining and implementing policies and procedures for collecting project and system information in the Budget Planning System for investment management purposes;

• fully defining criteria for analyzing and prioritizing new IT proposals; and

• proceeding with plans to define and implement all stage 3 critical processes, which are necessary for portfolio management.

In developing the plan, the BLM Director should ensure that it (1) is based on the results of the bureau's recent assessment of ITIM stage 2 capabilities; (2) specifies measurable goals, objectives, milestones, and outcomes; (3) specifies needed resources; and (4) assigns clear responsibility and accountability for accomplishing well-defined tasks. In implementing the plan, the Director should ensure that the needed resources are provided and that progress is measured and reported periodically to the Secretary of the Interior.

In written comments on a draft of this report (reprinted in app. II), BLM's Director agreed with our findings and recommendations and stated that they represented a fair and accurate evaluation of the bureau's status and progress towards IT investment management maturity. The Director also noted that BLM has begun developing a plan, in accordance with our recommendations, to (1) complete the key practices for reaching GAO's ITIM stage 2 maturity and (2) identify the goals, time frames, outcomes, and resources needed to reach stage 3 maturity. BLM provided additional technical comments, which we have incorporated into the report as appropriate.

We are sending copies of this report to interested congressional committees. We are also sending copies to the Director of the Office of Management and Budget, the Secretary of the Interior, and BLM's Director and CIO. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at www.gao.gov. Should you or your offices have questions on matters discussed in this report, please contact me at (202) 512-6240, or Lester P. Diamond, Assistant Director, at (202) 512-7957. We can also be reached by e-mail at koontzl@gao.gov or diamondl@gao.gov, respectively. Key contributors to this assignment were Jamey A. Collins, Sabine R. Paul, and Sophia Harrison.
The objectives of our review were to (1) evaluate the Bureau of Land Management’s (BLM) IT investment management capabilities against the key practices defined in GAO’s IT investment management assessment framework and (2) determine the agency’s plans for improving these capabilities. To address our first objective, we assessed the extent to which BLM satisfied the five critical processes identified in stage 2 of GAO’s Information Technology Investment Management (ITIM) framework. We applied the framework as it is described in the exposure draft, except that we used a revised version of the IT Asset Inventory critical process, called IT Project and System Identification, after discussions with departmental officials at the beginning of this engagement. This revised critical process has been used in our evaluations since June 2001. We did not formally assess BLM’s progress in establishing capabilities found in stages 3, 4, and 5 because BLM acknowledged that it had so far primarily focused on stage 2 and had not executed many key practices in higher maturity stages. In addition, we limited our review to BLM’s management of its national investments because they represent the investments of greater cost and impact to the organization. To determine whether BLM had implemented the critical processes associated with stage 2, we reviewed the results of a self-assessment of stage 2 practices using GAO’s ITIM framework and validated and updated the results of the self-assessment through document reviews and interviews with officials. We reviewed written policies, procedures, and guidance and other documentation providing evidence of executed practices, including BLM’s IT Investment Management Process guide, various board/council charters, and instruction memorandums. We also reviewed national Information Technology Investment Board (ITIB) meeting materials, including quarterly status reports, meeting minutes, records of decision, and matrices tracking action items through completion. We interviewed several BLM officials, including system coordination office officials, portfolio managers, and ITIB members. We also attended a 2-day national ITIB meeting in March 2003. As part of our analysis, we selected three IT projects as case studies to verify application of the critical processes and practices. We selected projects that (1) supported different BLM functional areas (directorates), (2) were in different life-cycle phases, and (3) required various levels of funding. The three projects are the following: PayCheck—The objective of PayCheck, currently in the evaluate phase, is to allow employees to input their time and attendance data using an automated system. Previously, the employee developed a hard-copy time sheet, and then a timekeeper keyed the same information into the payroll system. By allowing the employees to enter their own data, PayCheck changed the business process and eliminated the duplication of manual effort. This project, with an estimated life-cycle cost of $1,681,000, is the responsibility of the National Human Resources Management Center, which supports BLM’s human resources function. National Integrated Land System—The objective of this system, which is currently in a mixed life cycle, is to provide a process to collect, maintain, and store survey- and parcel-based land information that meets the common, shared business needs of land title and land resource management. 
This system will provide agencies, BLM’s partners, and the public with business solutions for the management of cadastral records and land parcel information in a Geographic Information System environment, accessible via the Internet. This project, which has an estimated life-cycle cost of $31.3 million, is under the Minerals, Realty, and Resource Protection directorate. This directorate is responsible for managing commercial energy and mineral production from the public lands. Antivirus—The objective of Antivirus, currently in the control phase, is to renew or replace BLM’s expiring antivirus contract in order to provide antivirus coverage to all simple mail transfer protocol gateways, mail servers, other servers, desktops, and laptops. In addition, BLM is seeking to provide improved enterprise management and reporting capabilities, as well as a more automated methodology for deploying virus update files across the bureau. This project is being carried out by the Information Resources Management directorate, which provides IT services to BLM states, centers, and partners in support of the bureau’s mission. The estimated life-cycle cost for Antivirus is $800,600. For these projects, we reviewed project management documentation, such as business cases, project plans, and quarterly reports. We also reviewed user-group meeting minutes and analyzed national ITIB decision documents related to each of the projects. In addition, we interviewed the project managers for these projects. We compared the evidence collected from our document reviews and interviews to the key practices in ITIM. We rated the key practices as “executed” on the basis of whether the agency demonstrated (by providing evidence of performance) that it had met the criteria of the key practice. A key practice was rated as “not executed” when we found insufficient evidence of a practice during the review, or when we determined that there were significant weaknesses in BLM’s execution of the key practice. To address our second objective, we interviewed officials from the System Coordination Office, whose main responsibility is to oversee and ensure that BLM’s IT investment management process is implemented and followed; the chief information officer; and other national ITIB members to determine efforts undertaken to improve IT investment management processes. We also reviewed an improvement plan developed about 3 years ago based on strengths and weaknesses of BLM’s IT investment management process at that time, and the results of the comprehensive IT management assessment BLM officials stated they plan to use as a basis for an integrated plan for improving IT investment management and other IT management areas. We conducted our work at BLM Headquarters in Washington, D.C., from March through July 2003, in accordance with generally accepted government auditing standards. 
The mission of the Department of the Interior's Bureau of Land Management (BLM) is to maintain the health, diversity, and productivity of the public lands for the use and enjoyment of present and future generations. BLM employs about 11,000 people, with information technology (IT) playing a critical role in helping BLM perform its responsibilities. The bureau estimates that it will spend about $146 million on IT initiatives in fiscal year 2003. GAO was asked to evaluate BLM's IT investment management (ITIM) capabilities and determine the bureau's plans for improving these capabilities. GAO's evaluation was based on applying its ITIM maturity framework, which identifies critical processes for successful IT investment management. BLM has made progress in establishing its ITIM capabilities. Specifically, BLM has established most of the key practices associated with building an investment foundation. For example, the bureau has established a board for managing IT investments, implemented processes to ensure that IT projects support business needs and meet users' requirements, and established a process for selecting IT proposals. In addition, the bureau has efforts under way to address the key practices it has not yet established. BLM has also initiated efforts to manage its investments as a portfolio. For example, it has established a council to support portfolio management activities and begun defining portfolio selection criteria. BLM has also begun performing postimplementation reviews to learn lessons that will help define and implement an IT investment evaluation process. However, BLM's progress to date in defining practices for managing its investments as a portfolio has been limited because, according to its officials, its investment board first focused its resources on establishing the processes associated with building the IT investment management foundation. Although BLM has made progress in developing its IT investment process, it has not yet developed a plan to guide its efforts in this area and, as a result, may not be able to successfully establish more mature ITIM processes. According to the chief information officer, this is because BLM wanted to develop an ITIM plan that is integrated with improvement plans for other IT management areas, and the results of the comprehensive assessment that were to be used as the basis for such a plan were obtained only in June 2003. BLM officials agree that this plan is necessary for guiding improvement efforts and stated their intention to develop one. Developing such a plan will help BLM sustain progress made to date.
To date, the Congress has designated 24 national heritage areas, primarily in the eastern half of the country (see fig. 1). Generally, national heritage areas focus on local efforts to preserve and interpret the role that certain sites, events, and resources have played in local history and their significance in the broader national context. For example, the Rivers of Steel Heritage Area commemorates the contribution of southwestern Pennsylvania to the development of the nation’s steel industry by providing visitors with interpretive tours of historic sites and other activities. Heritage areas share many similarities—such as recreational resources and historic sites—with national parks and other park system units but lack the stature and national significance needed to qualify as such units. The process of becoming a national heritage area usually begins when local residents, businesses, and governments ask the Park Service, within the Department of the Interior, or the Congress for help in preserving their local heritage and resources. In response, although the Park Service has no program governing these activities, the agency provides technical assistance, such as conducting or reviewing studies to determine an area’s eligibility for heritage area status. The Congress then may designate the site as a national heritage area and set up a management entity for it. This entity could be a state or local governmental agency, an independent federal commission, or a private nonprofit corporation. Usually within 3 years of designation, the area is required to develop a management plan, which is to detail, among other things, the area’s goals and its plans for achieving those goals. The Park Service then reviews these plans, which must be approved by the Secretary of the Interior. After the Congress designates a heritage area, the Park Service enters into a cooperative agreement with the area’s management entity to assist the local community in organizing and planning the area. Each area can receive funding through the Park Service’s budget—generally limited to not more than $1 million a year for 10 or 15 years. The agency allocates the funds to the area through the cooperative agreement. No systematic process is in place to identify qualified candidate sites and designate them as national heritage areas. In this regard, the Park Service conducts studies—or reviews studies prepared by local communities—to evaluate the qualifications of sites proposed for national heritage designation. On the basis of these studies, the agency advises the Congress as to whether a particular location warrants designation. The agency usually provides its advice to the Congress by testifying in hearings on bills to authorize a particular heritage area. The Park Service’s studies of prospective sites’ suitability help the agency ensure that the basic components necessary to a successful heritage area—such as natural and cultural resources and community support—are either already in place or are planned. Park Service data show that the agency conducted or reviewed some type of study addressing the qualifications of all 24 heritage areas. However, in some cases, these studies were limited in scope so that questions concerning the merits of the location persisted after the studies were completed. As a result, the Congress designated 10 of the 24 areas with only a limited evaluation of their suitability as heritage areas. 
Of these 10 areas, the Park Service opposed or suggested that the Congress defer action on 6, primarily because of continuing questions about, among other issues, whether the areas had adequately identified goals or management entities or demonstrated community support. Furthermore, of the 14 areas that were designated after a full evaluation, the Congress designated 8 consistent with the Park Service’s recommendations, 5 without the agency’s advice, and 1 after the agency had recommended that action be deferred. Moreover, the criteria the Park Service uses to evaluate the suitability of prospective heritage areas are not specific and, in using them, the agency has determined that a large portion of the sites studied qualify as heritage areas. According to the Heritage Area national coordinator, before the early 1990s, the Park Service used an ad hoc approach to determining sites’ eligibility as heritage areas, with little in the way of objective criteria as a guide. Since then, however, the Park Service developed general guidelines to use in evaluating and advising the Congress on the suitability of sites as heritage areas. Based on these guidelines, in 1999, the agency developed a more formal approach to evaluating sites. This approach consisted of four actions that the agency believed were critical before a site could be designated as well as 10 criteria to be considered when conducting studies to assess an area’s suitability. The four critical actions include the following: complete a suitability/feasibility study; involve the public in the suitability/feasibility study; demonstrate widespread public support for the proposed designation; and demonstrate commitment to the proposal from governments, industry, and private, nonprofit organizations. A suitability/feasibility study should examine a proposed area using the following criteria: The area has natural, historic, or cultural resources that represent distinctive aspects of American heritage worthy of recognition, conservation, interpretation, and continuing use, and are best managed through partnerships among public and private entities, and by combining diverse and sometimes noncontiguous resources and active communities; The area’s traditions, customs, beliefs, and folk life are a valuable part of the national story; The area provides outstanding opportunities to conserve natural, cultural, historic, and/or scenic features; The area provides outstanding recreational and educational opportunities; Resources that are important to the identified themes of the area retain a degree of integrity capable of supporting interpretation; Residents, businesses, nonprofit organizations, and governments within the area that are involved in the planning have developed a conceptual financial plan that outlines the roles for all participants, including the federal government, and have demonstrated support for designation of the area; The proposed management entity and units of government supporting the designation are willing to commit to working in partnership to develop the area; The proposal is consistent with continued economic activity in the area; A conceptual boundary map is supported by the public; and The management entity proposed to plan and implement the project is described. These criteria are broad and subject to multiple interpretations, as noted by an official in the agency’s Midwest region charged with applying these criteria to prospective areas. 
Similarly, officials in the agency’s Northeast region believe that the criteria were developed to be inclusive and that they are inadequate for screening purposes. The national coordinator believes, however, that the criteria are valuable but that the regions need additional guidance to apply them more consistently. The Park Service has developed draft guidance for applying these criteria but has no plans to issue it as final guidance. Rather, the agency is incorporating this guidance into a legislative proposal for a formal heritage area program. According to the national coordinator, some regions have used this guidance despite its draft status, but it has not been widely adopted or used to date. The Park Service’s application of these broad criteria has identified a large number of potential heritage areas. Since 1989, the Park Service has determined that many of the candidate sites it has evaluated would qualify as national heritage areas. According to data from 22 of the 24 heritage areas, about half of their total funding of $310 million in fiscal years 1997 through 2002 came from the federal government and the other half from state and local governments and private sources. Table 1 shows the areas’ funding sources from fiscal years 1997 through 2002. As figure 2 shows, the federal government’s total funding to these heritage areas increased from about $14 million in fiscal year 1997 to about $28 million in fiscal year 2002, peaking at over $34 million in fiscal year 2000. The Congress sets the overall level of funding for heritage areas, determining which areas will receive funding and specifying the amounts provided. Newly designated heritage areas usually receive limited federal funds while they develop their management plans and then receive increasing financial support through Park Service appropriations after their plans are established. The first heritage areas received pass-through grants from the Park Service and funding through the agency’s Statutory and Contractual Aid appropriations. However, in 1998, the Congress began appropriating funds to support heritage areas through the Heritage Partnership Program. In addition, the Congress has placed in each area’s designating legislation certain conditions on the receipt of federal funds. While the legislation designating the earliest heritage areas resulted in different funding structures, generally those created since 1996 have been authorized funding of up to $10 million over 15 years, not to exceed $1 million in any single year. In conjunction with this limit, the designating legislation attempts to identify a specific date when heritage areas no longer receive federal financial or technical assistance. Although heritage areas are ultimately expected to become self-sufficient without federal support, to date the sunset provisions have not limited federal funding. Since the first national heritage area was designated in 1984, five have reached the sunset date specified in their designating legislation. However, in each case, the sunset date was extended and the heritage area continued to receive funding from the Congress. Finally, the areas’ designating legislation typically requires the heritage areas to match the amount of federal funds they receive with a specified percentage of funds from nonfederal sources. Twenty-two of the 24 heritage areas are required to match the federal funds they receive. 
Of these 22 areas, 21 have a 50-percent match requirement—they must show that at least 50 percent of the funding for their projects has come from nonfederal sources—and one has a 25-percent match requirement. In the absence of a formal program, the Park Service oversees heritage areas’ activities by monitoring the implementation of the terms set forth in the cooperative agreements. According to Park Service headquarters officials, the agency’s cooperative agreements with heritage areas allow the agency to effectively oversee their activities and hold them accountable. These officials maintain that they can withhold funds from heritage areas—and have, in some circumstances, done so—if the areas are not carrying out the requirements of the cooperative agreements. However, regional managers have differing views on their authority for withholding funds from areas and the conditions under which they should do so. Although the Park Service has oversight opportunities through the cooperative agreements, it has not taken advantage of these opportunities to improve oversight and ensure these areas’ accountability. In this regard, the agency generally oversees heritage areas’ funding through routine monitoring and oversight activities, and focuses specific attention on the areas’ activities only when problems or potential concerns arise. However, the Park Service regions that manage the cooperative agreements with the heritage areas do not always review the areas’ annual financial audit reports, although the Park Service is the federal agency ultimately responsible for heritage area projects that are financed with federal funds. For example, managers in two Park Service regions told us that they regularly review heritage areas’ annual audit reports, but a manager in another region said that he does not. As a result, the agency cannot determine the total amount of federal funds provided or their use. According to these managers, the inconsistencies among regions in reviewing areas’ financial reports primarily result from a lack of clear guidance and the collateral nature of the Park Service regions’ heritage area activities—they receive no funding for oversight, and their oversight efforts divert them from other mission-critical activities. Furthermore, the Park Service has not yet developed clearly defined, consistent, and systematic standards and processes for regional staff to use in reviewing the adequacy of areas’ management plans, although these reviews are one of the Park Service’s primary heritage area responsibilities. Heritage areas’ management plans are blueprints that discuss how the heritage area will be managed and operated and what goals it expects to achieve, among other issues. The Secretary of the Interior must approve the plans after Park Service review. According to the national coordinator, heritage area managers in the agency’s Northeast region have developed a checklist of what they consider to be the necessary elements of a management plan to assist reviewers in evaluating the plans. While this checklist has not been officially adopted, managers in the Northeast and other regions consult it in reviewing plans, according to the national coordinator. Heritage area managers in the Park Service regions use different criteria for reviewing these plans, however. 
For example, managers in the regions told us that, to judge the adequacy of the plans, one region uses the specific requirements in the areas’ designating legislation, another uses the designating legislation in conjunction with the Park Service’s general designation criteria, and a third adapts the process used for reviewing national park management plans. While these approaches may guide the regions in determining the content of the plans, they provide little guidance in judging the adequacy of the plans for ensuring successful heritage areas. Finally, the Park Service has not yet developed results-oriented performance goals and measures—consistent with the requirements of the Government Performance and Results Act—that would help to ensure the efficiency and effectiveness of its heritage area activities. The act requires agencies to, among other actions, set strategic and annual goals and measure their performance against these goals. Effectively measuring performance requires developing measures that demonstrate results, which, in turn, requires data. According to the national coordinator, the principal obstacles to measuring performance are the difficulty of identifying meaningful indicators of success and the lack of funding to collect the needed data. With regard to indicators, the national coordinator told us that the agency has tried to establish meaningful and measurable goals both for its activities and for the heritage areas. The agency has identified a series of “output” measures of accomplishment, such as numbers of heritage area visitors, formal and informal partners, educational programs managed, and grants awarded. However, the national coordinator acknowledged that these measures are insufficient, and the agency continues to seek alternative measures that would be more meaningful and useful. Without clearly defined performance measures for its activities, the agency will continue to be unable to effectively gauge what it is accomplishing and whether its resources are being employed efficiently and cost-effectively. The Park Service also has not required heritage areas to adopt a results-oriented management approach—linked to the goals set out in their management plans—which would enable both the areas and the agency to determine what is being accomplished with the funds that have been provided. In this regard, the heritage areas have not yet developed an effective, outcome-oriented method for measuring their own performance and are therefore unable to determine what benefits the heritage area—and through it, the federal funds—have provided to the local community. For example, for many heritage areas, increasing tourism is a goal, but while they may be able to measure an increase in tourism, they cannot demonstrate whether this increase is directly associated with the efforts of the heritage area. To address these issues, the Alliance of National Heritage Areas is currently working with Michigan State University to develop a way to measure various impacts associated with a national heritage area. These impacts include, among others, the effects on tourism and local economies through jobs created and increases in tax revenues. According to Park Service officials, the agency has not taken actions to improve oversight because, without a formal program, it does not have the direction or funding it needs to effectively administer its national heritage area activities. 
National heritage areas do not appear to have affected private property rights, although private property rights advocates have raised a number of concerns about the potential effects of heritage areas on property owners’ rights and land use. These advocates are concerned that heritage areas may be allowed to acquire nonfederal lands or otherwise impose federal controls on them. However, the designating legislation and the management plans of some areas explicitly place limits on the areas’ ability to affect private property rights and use. In this regard, eight areas’ designating legislation stated that the federal government cannot impose zoning or land use controls on the heritage areas. Moreover, in some cases, the legislation included explicit assurances that the areas would not affect the rights of private property owners. For example, the legislation creating 13 of the 24 heritage areas stated that the area’s managing entity cannot interfere with any person’s rights with respect to private property or have authority over local zoning ordinances or land use planning. While heritage areas’ management entities are allowed to receive or purchase real property from a willing seller, most areas are prohibited under their designating legislation from using appropriated funds for this purpose. In addition, the designating legislation for five heritage areas requires them to convey the property to an appropriate public or private land managing agency. As a further protection of property rights, the management plans of some heritage areas deny the managing entity authority to influence zoning or land use. For example, at least six management plans state that the managing entities have no authority over local zoning laws or land use regulations. However, most of the management plans state that local governments’ participation will be crucial to the success of the heritage area and encourage local governments to implement land use policies that are consistent with the plan. Some plans offer to aid local government planning activities through information sharing or technical or financial assistance to secure their cooperation. Property rights advocates are concerned that such provisions give heritage areas an opportunity to indirectly influence zoning and land use planning, which could restrict owners’ use of their property. Some of the management plans state the need to develop strong partnerships with private landowners or recommend that management entities enter into cooperative agreements with landowners for any actions that include private property. Despite concerns about private property rights, officials at the 24 heritage areas, Park Service headquarters and regional staff working with these areas, and representatives of six national property rights groups that we contacted were unable to provide us with a single example of a heritage area directly affecting—positively or negatively—private property values or use. National heritage areas have become an established part of the nation’s efforts to preserve its history and culture in local areas. The growing interest in establishing additional areas will put increasing pressure on the Park Service’s resources, especially since the agency receives limited funding for the technical and administrative assistance it provides to these areas. Under these circumstances, it is important to ensure that only those sites that are most qualified are designated as heritage areas. 
However, no systematic process for designating these areas exists, and the Park Service does not have well-defined criteria for assessing sites’ qualifications or effective oversight of the areas’ use of federal funds and adherence to their management plans. As a result, the Congress and the public cannot be assured that future sites will have the necessary resources and local support needed to be viable or that federal funds supporting them will be well spent. Given the Park Service’s resource constraints, it is important to ensure that the agency carries out its heritage area responsibilities as efficiently and effectively as possible. Park Service officials pointed to the absence of a formal program as a significant obstacle to effective management of the agency’s heritage area efforts and oversight of the areas’ activities. In this regard, without a program, the agency has not developed consistent standards and processes for reviewing areas’ management plans, the areas’ blueprints for becoming viable and self-sustaining. It also has not required regional heritage area managers to regularly and consistently review the areas’ annual financial audit reports to ensure that the Park Service—the agency with lead responsibility for these areas—has complete information on their use of funds from all federal agencies as a basis for holding them accountable. Finally, the Park Service has not defined results-oriented performance goals and measures—both for its own heritage area efforts and those of the individual areas. As a result, it is constrained in its ability to determine both the agency’s and areas’ accomplishments, whether the agency’s resources are being employed efficiently and effectively, and whether federal funds could be better utilized to accomplish its goals. In the absence of congressional action to establish a formal heritage area program within the National Park Service or to otherwise provide direction and funding for the agency’s heritage area activities, we recommend that the Secretary of the Interior direct the Park Service to take actions within its existing authority to improve the effectiveness of its heritage area activities and increase areas’ accountability. These actions should include developing well-defined, consistent standards and processes for regional staff to use in reviewing and approving heritage areas’ management plans; requiring regional heritage area managers to regularly and consistently review heritage areas’ annual financial audit reports to ensure that the agency has a full accounting of their use of funds from all federal sources; and developing results-oriented performance goals and measures for the agency’s heritage area activities and requiring, in the cooperative agreements, that heritage areas adopt such a results-oriented management approach as well. Thank you, Mr. Chairman and Members of the Committee. This concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Committee may have. For more information on this testimony, please contact Barry T. Hill at (202) 512-3841. Individuals making key contributions to this testimony included Elizabeth Curda, Preston S. Heard, Vincent P. Price, and Barbara Timmerman. 
To examine the establishment, funding, and oversight of national heritage areas and their potential effect on private property rights, we (1) evaluated the process for identifying and designating national heritage areas, (2) determined the amount of federal funding provided to support these areas, (3) evaluated the process for overseeing and holding national heritage areas accountable for their use of federal funds, and (4) determined the extent to which, if at all, these areas have affected private property rights. To address the first issue, we discussed the process for identifying and designating heritage areas with the Park Service’s Heritage Area national coordinator and obtained information on how the 24 existing heritage areas were evaluated and designated. To determine the amount of federal funding provided to support these areas, we discussed funding issues and the availability of funding data with the national coordinator, the Park Service’s Comptroller, and officials from the agency’s Northeast, Midwest, Southeast, and Intermountain Regional Offices. We also obtained funding information from 22 of the 24 heritage areas for fiscal years 1997 through 2002, and discussed this information with the executive directors and staff of each area. As of mid-March 2004, two heritage areas had not provided us with funding data. To verify the accuracy of the data we obtained from these sources, we compared the data provided to us with data included in the heritage areas’ annual audit and other reports that we obtained from the individual areas and the Park Service regions. We also discussed these data with the executive directors and other officials of the individual heritage areas and regional office officials. To evaluate the processes for holding national heritage areas accountable for their use of federal funds, we discussed these processes with the national coordinator and regional officials, and obtained information and documents supporting their statements. To determine the extent to which, if at all, private property rights have been affected by these areas, we discussed this issue with the national coordinator, regional officials, the Executive Director of the Alliance of National Heritage Areas—an organization that coordinates and supports heritage areas’ efforts and is their collective interface with the Park Service—the executive directors of the 23 heritage areas that were established at the time of our work, and representatives of several private property rights advocacy groups and individuals, including the American Land Rights Association, the American Policy Center, the Center for Private Conservation, the Heritage Foundation, the National Wilderness Institute, and the Private Property Foundation of America. In each of these discussions, we asked the individuals if they were aware of any cases in which a heritage area had positively or negatively affected an individual’s property rights or restricted its use. None of these individuals were able to provide such an example. In addition, we visited the Augusta Canal, Ohio and Erie Canal, Rivers of Steel, Shenandoah Valley Battlefields, South Carolina, Southwestern Pennsylvania (Path of Progress), Tennessee Civil War, and Wheeling National Heritage Areas to discuss these issues in person with the areas’ officials and staff, and to view the areas’ features and accomplishments first hand. We conducted our work between May 2003 and March 2004 in accordance with generally accepted government auditing standards. 
The Congress has established, or "designated," 24 national heritage areas to recognize the value of their local traditions, history, and resources to the nation's heritage. These areas, including public and private lands, receive funds and assistance through cooperative agreements with the National Park Service, which has no formal program for them. They also receive funds from other agencies and nonfederal sources, and are managed by local entities. Growing interest in new areas has raised concerns about rising federal costs and the risk of limits on private land use. GAO was asked to review the (1) process for designating heritage areas, (2) amount of federal funding to these areas, (3) process for overseeing areas' activities and use of federal funds, and (4) effects, if any, they have on private property rights. No systematic process currently exists for identifying qualified sites and designating them as national heritage areas. While the Congress generally has designated heritage areas with the Park Service's advice, it designated 10 of the 24 areas without a thorough agency review; in 6 of these 10 cases, the agency recommended deferring action. Even when the agency fully studied sites, it found few that were unsuitable. The agency's criteria are very general. For example, one criterion states that a proposed area should reflect "traditions, customs, beliefs, and folk life that are a valuable part of the national story." These criteria are open to interpretation and, using them, the agency has eliminated few sites as prospective heritage areas. According to data from 22 of the 24 heritage areas, in fiscal years 1997 through 2002, the areas received about $310 million in total funding. Of this total, about $154 million came from state and local governments and private sources and another $156 million came from the federal government. Over $50 million was dedicated heritage area funds provided through the Park Service, with another $44 million coming from other Park Service programs and about $61 million from 11 other federal sources. Generally, each area's designating legislation imposes matching requirements and sunset provisions to limit the federal funds. However, since 1984, five areas that reached their sunset dates had their funding extended. The Park Service oversees heritage areas' activities by monitoring their implementation of the terms set forth in the cooperative agreements. These terms, however, do not include several key management controls. That is, the agency has not (1) always reviewed areas' financial audit reports, (2) developed consistent standards for reviewing areas' management plans, and (3) developed results-oriented goals and measures for the agency's heritage area activities, or required the areas to adopt a similar approach. Park Service officials said that the agency has not taken these actions because, without a program, it lacks adequate direction and funding. Heritage areas do not appear to have affected property owners' rights. In fact, the designating legislation of 13 areas and the management plans of at least 6 provide assurances that such rights will be protected. However, property rights advocates fear the effects of provisions in some management plans. These provisions encourage local governments to implement land use policies that are consistent with the heritage areas' plans, which may allow the heritage areas to indirectly influence zoning and land use planning in ways that could restrict owners' use of their property. 
Nevertheless, heritage area officials, Park Service headquarters and regional staff, and representatives of national property rights groups that we contacted were unable to provide us with any examples of a heritage area directly affecting--positively or negatively--private property values or use.
The Deputy Assistant Secretary of Defense (Installation Energy), under the Office of the Assistant Secretary of Defense (Energy, Installations and Environment), is responsible for, among other things, overseeing DOD’s installation energy program. The office also is responsible for issuing installation energy policy and guidance to the DOD components and serving as the primary adviser for matters regarding facility energy policy. In addition, the office provides management for energy conservation and resources, including establishing goals for the department’s energy conservation program, developing procedures to measure energy conservation, and developing policy guidance for reporting energy use and results of conservation accomplishments against goals for federal energy conservation and management. These goals and requirements are found in, but are not limited to, the Energy Independence and Security Act of 2007, the Energy Policy Act of 2005, and Executive Order 13693, Planning for Federal Sustainability in the Next Decade. Also, the military departments have established goals related to developing renewable energy projects. For example, the Secretary of the Navy has established goals to obtain half of the Navy’s energy from alternative sources and to produce at least half of the Navy’s shore-based energy requirements from renewable sources, such as solar, wind, and geothermal. Further, each military department has issued department-level guidance to develop 1 gigawatt of renewable energy, for a total of 3 gigawatts by 2025. In addition, DOD’s instruction on energy management states that the Secretary of a military department is responsible for developing an energy program management structure to meet DOD requirements, with the primary objectives of improving energy efficiency and eliminating energy waste, while maintaining reliable utility service. Each military service has assigned a command or headquarters to provide guidance and funding, with regional commands or military installations managing site-specific energy programs. According to DOD’s instruction, DOD component heads are to provide facilities with trained energy program managers, operators, and maintenance personnel for lighting, heating, power generating, water, ventilating, and air conditioning plants and systems. At the installation level, the departments of public works, general facilities, or civil engineering oversee and manage the day-to-day energy operations. Each year, DOD is to submit the Annual Energy Management Report to the congressional defense committees, as required by section 2925 of title 10 of the United States Code. This report describes the department’s progress toward meeting its energy performance goals and, among other items, provides information on all energy projects financed with alternative financing arrangements. The Annual Energy Management Report is required to contain the following information on DOD’s alternatively financed projects: the length of the contract, an estimate of the financial obligation incurred over the contract period, and the estimated payback period. The DOD components maintain a utility energy reporting system to prepare the data for submission of this report, which DOD describes as the primary vehicle by which it tracks and measures its performance and energy efficiency improvement. 
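To illustrate the reporting elements described above, the following minimal sketch shows one way a project record carrying those elements might be structured; the class and field names are hypothetical and do not reflect any actual DOD data system or schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AltFinancedEnergyProject:
    """Illustrative record for one alternatively financed energy project.

    Field names are assumptions; they mirror the reporting elements required
    for the Annual Energy Management Report, not an actual DOD system.
    """
    service: str                                # e.g., "Army", "Navy", "Air Force"
    arrangement: str                            # e.g., "ESPC", "UESC", "PPA"
    contract_length_years: int                  # length of the contract
    total_obligation_dollars: Optional[float]   # estimated obligation over the contract period
    estimated_payback_years: Optional[float]    # estimated payback period

    def is_report_ready(self) -> bool:
        """Return True only if every required reporting element is present."""
        return (self.total_obligation_dollars is not None
                and self.estimated_payback_years is not None)


# A record missing its total obligation cannot support the annual report.
example = AltFinancedEnergyProject("Army", "UESC", 20, None, 12.5)
print(example.is_report_ready())  # False
```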
DOD has used partnerships with the private sector, in the form of alternative financing arrangements, to further energy efficiency efforts and allow installations to improve infrastructure through upgrades to existing systems and the purchase of new equipment. Each financing arrangement that leverages private capital has distinct requirements and legal authorities, and DOD components sometimes combine arrangements to finance the same project. Table 1 summarizes the main alternative financing arrangements that are available to DOD for funding its energy projects. In December 2011, through the President’s Performance Contracting Challenge, the President challenged federal agencies to enter into $2 billion in performance-based contracts, including ESPCs and UESCs. The challenge was intended to meet the administration’s goals of cutting energy costs in agency facilities as part of a broader effort to reduce energy costs, cut pollution, and create jobs in the construction and energy sectors. In May 2014, the President expanded the challenge by an additional $2 billion, bringing the total goal to $4 billion in performance-based contracts across the federal government by the end of calendar year 2016. According to DOD, as of December 31, 2016, the three military departments and the other defense agencies combined had awarded 194 ESPCs and UESCs that totaled over $2.28 billion. DOD reported that these results exceeded its target of awarding over $2.18 billion in such contracts over this period. DOD and military service audits have examined the development and management of DOD’s alternative financing arrangements. For example, in May 2016, the DOD Inspector General found that the Air Force Civil Engineer Center did not effectively manage the Air Force’s existing ESPCs and made recommendations to improve controls and validate energy savings. In January 2017, the DOD Inspector General found that the Naval Facilities Engineering Command did not effectively manage the Navy’s 38 ongoing ESPCs that were in the performance phase. The DOD Inspector General stated that management was not effective because the command did not appoint contracting officer’s representatives for 31 of the ongoing projects and did not develop a quality assurance surveillance plan for any of them. Additionally, the DOD Inspector General reviewed five other projects in more detail and found questionable contract payments. The DOD Inspector General recommended the appointment of contracting officer’s representatives for ESPCs and that the Naval Facilities Engineering Command Expeditionary Warfare Center—which oversees the Navy’s ESPCs—document the validity of prior year energy savings for the selected ESPCs. In addition, in September 2014, the Army Audit Agency found that the Army’s renewable energy projects were generally operational and contributed to renewable energy goals. However, the audit also identified the need for improvements to ensure projects were performing as intended and that installations were reporting renewable energy output sufficiently to help the Army meet federal mandates and the DOD goals for renewable energy. At the time of the Army Audit Agency review, the Army was not meeting the federal and DOD renewable energy goals. 
Since 2005, the military services have used alternative financing arrangements for hundreds of energy projects to improve energy efficiency, save money, and meet energy goals; however, the military services have not collected and provided DOD complete and accurate data to aid DOD and congressional oversight of alternatively financed energy projects. Based on the data provided by the military services, the services have used alternative financing arrangements for 464 energy projects or contracts since 2005, entering into about 38 contracts annually from fiscal year 2005 through fiscal year 2016. The Army entered into the most alternatively financed contracts (305), followed by the Navy (90), the Air Force (50), and the Marine Corps (19). Military service officials attributed the continued use of alternative financing to three separate factors. First, officials cited the President’s Performance Contracting Challenge, issued in December 2011, which challenged federal agencies to enter into $2 billion in performance contracts, such as ESPCs and UESCs. Second, officials stated that they did not have sufficient appropriated funds to accomplish many of these projects, making alternative financing an attractive option for addressing needed repairs, obtaining new equipment designed to improve operations, and reducing energy consumption. Third, service officials stated that alternative financing reduces the risk for equipment maintenance and budgeting. Specifically, many contracts include a cost-savings guarantee, which requires that the contractor maintain the equipment in good working order over the life of the contract. Additionally, many contracts have fixed annual payments, so the services have certainty in budgeting for portions of an installation’s annual energy costs. The alternative financing contracts the military services awarded have obligated the government to pay billions of dollars to contractors over the next 25 years, as shown in table 2. According to military service officials, these contractual obligations are must-pay items from their annual budgets. In order to account for these must-pay items in their budgets, officials said that military service headquarters must have visibility into certain data, such as project costs. We found that from fiscal years 2005 through 2016, the military services have used UESCs more often than the other types of alternative financing arrangements we reviewed. Specifically, the military services have entered into contracts for 245 UESCs compared to 201 ESPCs. In addition to ESPCs and UESCs, the military services have also entered into financing agreements through PPAs. We found that of the 18 PPAs that either we or the military services identified as being awarded since 2005, 10 have been awarded since 2014. In these military power purchase agreements, a private entity will purchase, install, operate, and maintain renewable energy equipment, and the military service will purchase the electricity generated by the equipment. Since 2005, the Army, Navy, and Marine Corps have reported contractor project investment costs totaling almost $1.46 billion for PPAs. Some of these projects, such as solar arrays, can have significant project investment costs to the contractor, and the military services compensate the contractors over time, either in part or in full, through payments for their energy usage. 
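The sketch below illustrates, with entirely hypothetical figures, how a contractor's up-front PPA investment can be recovered over time through the service's payments for the energy it purchases; the investment cost, rate, and purchase quantity are assumptions for illustration only, not values from any project we reviewed.

```python
# Hypothetical PPA economics: the contractor finances the equipment and is
# repaid over time through the installation's payments for purchased energy.
investment_cost = 25_000_000       # contractor's up-front project investment, in dollars (assumed)
ppa_rate = 0.045                   # contracted price per kilowatt hour, in dollars (assumed)
annual_kwh_purchased = 30_000_000  # energy the installation buys each year (assumed)

annual_payment = ppa_rate * annual_kwh_purchased
years_to_recover = investment_cost / annual_payment

print(f"Annual payment: ${annual_payment:,.0f}")                              # $1,350,000
print(f"Years of payments to cover the investment: {years_to_recover:.1f}")   # 18.5
# If payments cover the investment only in part, the contractor's remaining
# recovery must come from other sources.
```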
However, we identified challenges in determining the true costs of these PPA projects to the government for several reasons. First, the future cost to the government could exceed $1.46 billion because some of the PPAs are still in the design and construction phase and cost data are not known. Second, minimum purchase agreements are typically set in the contracts, but in some cases the service could purchase more than the minimum amount of energy required, which would increase the costs. Third, the energy providers have other ways of recouping their project investment costs, which means the military services may not be responsible for repaying all of the costs. In addition to possible rebates and tax incentives, energy developers may be able to take advantage of renewable energy credits, which can lower the up-front costs of projects by reimbursing either the military or the energy provider. Lastly, in some cases, the energy provider can take excess energy produced by the equipment and sell it to other customers as another means of recouping its investment costs and reducing the costs to the military services. Since 2005, the military services have not collected and provided complete and accurate project data to DOD on alternatively financed energy projects. Specifically, the military services provided partial data on total contract costs, savings, and contract length related to their respective alternatively financed energy projects during this time frame. However, we were unable to identify and the military services could not provide complete data on the range of their alternatively financed projects, to include data on total contract cost for 196 of 446 ESPC and UESC projects. In particular: The Army could not provide total contract costs for about 42 of its 131 ESPCs. Moreover, the Army could not provide total contract costs for about 142 of its 167 UESCs. The Navy could not provide total contract costs for 1 of its 59 UESCs and 2 of its 27 ESPCs. The Navy also provided a list of Marine Corps projects and relevant data related to total contract costs for those projects. However, we identified discrepancies between the list of projects provided to us and those that DOD reported receiving. The Air Force could not provide total contract costs for 8 of its 38 ESPCs and 1 of its 8 UESCs. Additionally, the military services could not provide data related to either cost savings for 195 contracts or contract length for 232 contracts. Furthermore, some of the data provided by the Army and Navy on their alternatively financed energy projects did not include the level of accuracy needed for better or improved planning and budgeting purposes. For example, we contacted three installations where the Army had identified UESCs in the Annual Energy Management Report to Congress, but officials from two of those installations told us that no UESC existed. The Navy provided data on most of its projects, but Navy headquarters officials acknowledged that they had low confidence in the accuracy of the data on three specific ESPCs because they had not actually reviewed the contract documents, which were awarded by one of the Navy’s subordinate commands. Also, cost and other data reported by Navy headquarters for UESC projects at one of its installations did not match the cost data and project details provided by the regional command overseeing the installation’s contracts. Additionally, military service headquarters and installations or other service entities provided information that did not always match. 
For example, an Army official told us about discrepancies in the service’s internal tracking documents that officials had to resolve prior to providing their data to us. According to the DOD instruction, the military services are required to track and store data on energy projects, including data on all estimated and actual costs, interest rates, and mark-ups, among others, as well as any changes to project scope that may affect costs and savings. Moreover, section 2925 of title 10 of the United States Code requires DOD to report to Congress after each fiscal year on its alternatively financed energy projects, to include information on the projects’ duration and estimated financial obligations, among other things. Meeting this requirement depends on DOD having reliable information about these projects, and with such information DOD and the Congress will be better able to conduct oversight and make informed decisions on programs and funding. Furthermore, Standards for Internal Control in the Federal Government state that management should obtain quality information that is, among other things, complete and accurate in order to make informed decisions. During the course of this review, military service and DOD officials stated that one reason that the military departments and DOD headquarters did not always have complete and accurate data is that the military services have decentralized authority for entering into alternatively financed projects and for maintaining associated data. Given this decentralized authority, the data are not always tracked in a manner that captures the full range of information needed at the headquarters level for oversight, nor are they consistently reported to that level. Therefore, the military departments and DOD do not have complete and accurate information on the universe of active alternatively financed energy projects to aid oversight and to inform Congress. Specifically, complete and accurate data are also necessary for DOD to meet its requirement to report annually to Congress on the department’s alternatively financed energy projects through the Annual Energy Management Report, to include data on projects’ respective duration, financial obligation, and payback period. Complete data on total contract costs, cost savings, and contract length are also necessary for the military departments to formulate accurate cost estimates for annual budget requests and project expenses. Without these data, the military departments also will not have a full understanding of the cumulative impacts of these alternative financing arrangements on their installations’ utility budgets over periods of up to 25 years. Furthermore, if the military departments do not provide complete and accurate data to DOD, decision makers within the department and in Congress may not have all information needed for effective oversight of the projects, which could hinder insight into future budgetary implications of the projects. DOD reported achieving expected savings or efficiencies on the operational alternatively financed energy projects we reviewed; however, the military services have not consistently verified project performance on their ESPC and UESC projects to confirm that the reported savings were achieved. Without more consistent verification of performance for all alternatively financed projects, DOD cannot be certain that all projects are achieving their estimated savings. 
In our review of a nongeneralizable sample of 17 alternatively financed energy projects across the military services, we found DOD reported that 13 were considered operational and that all 13 of these projects—8 ESPCs, all 3 of the UESCs, and 2 PPAs—achieved their expected savings. Installation officials measured savings for these projects differently, depending on which type of alternatively financed arrangement was used.

For the 8 ESPCs in our sample, we reviewed the most recent measurement and verification reports provided by the contractors and found that each project reported achieving its guaranteed savings. The measurement and verification reports showed the ESPC projects we reviewed achieving between 100 and 145 percent of their guaranteed savings, as shown in table 3.

For the three UESCs in our sample, we found that installation officials used various performance assurance methods to maintain energy savings by ensuring that the installed equipment was operating as designed. For UESCs, identification of project savings can include either annual measurement and verification or performance assurance. The authority for UESCs, unlike that for ESPCs, does not have a requirement for guaranteed savings, but the agency's repayments are usually based on estimated cost savings generated by the energy-efficiency measures. At Fort Irwin, California, officials stated that they used efficiency gauges installed on the equipment to verify that the equipment was operating properly. At Naval Air Weapons Station China Lake, California, officials stated that operation and maintenance personnel performed systems checks to ensure the installed equipment was functioning properly. At Naval Base Kitsap-Bangor, Washington, officials used the energy rebate incentives issued by the utility company as a proxy to ensure that savings were being met. The officials stated that if they received the rebate, then they were achieving the requisite energy and cost savings.

For the two PPA solar array projects that we reviewed, officials reported that purchasing power through the contract remained cheaper than if they had to purchase power from non-renewable energy sources. For PPAs, savings measurement does not include annual measurement and verification. A PPA is an agreement to purchase power at an agreed-upon price for a specific period of time, and as such, it does not require continuous measurement and verification. However, DOD officials informed us that their contracts require such projects to be metered so that they can validate the receipt of electrical power before payment for the service. Officials also reported that they periodically assessed and reported on whether the utility rates remained at levels that were profitable for the project. Projects remained profitable when the prices to generate electricity from the solar panels were below the market rate for electricity obtained through the utility company. For example, through monitoring of utility rates, one official reported that the installation's PPA project obtained a favorable rate for electricity of 2.2 cents per kilowatt hour, well below the prevailing market rate of 7.2 cents per kilowatt hour. This rate is fixed over the course of this 20-year contract, whereas the current market rate fluctuates, and the official estimated that the project saved the installation approximately $1 million in utility payments in fiscal year 2016.
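The savings officials described for this PPA follow from simple rate arithmetic. The sketch below, in Python, back-calculates the annual energy purchase implied by the reported figures; the roughly 20-million-kilowatt-hour result is our illustrative inference from the reported rates and savings, not a quantity reported by installation officials.

```python
# Illustrative back-calculation from the reported PPA figures. Assumes the reported
# savings equal the rate differential applied to the energy purchased under the PPA;
# the implied consumption is our inference, not a figure reported by officials.
ppa_rate = 0.022               # dollars per kWh, fixed over the 20-year contract
market_rate = 0.072            # dollars per kWh, prevailing utility rate
reported_savings = 1_000_000   # dollars, approximate fiscal year 2016 savings

implied_kwh = reported_savings / (market_rate - ppa_rate)
print(f"Implied annual purchase: {implied_kwh:,.0f} kWh")  # roughly 20 million kWh

# Because the contracted rate is fixed, savings in any year scale with how far the
# market rate sits above (or below) the PPA rate.
def annual_savings(kwh_purchased: float, market_rate: float, ppa_rate: float = 0.022) -> float:
    return kwh_purchased * (market_rate - ppa_rate)

print(f"Savings if the market rate rose to $0.09/kWh: ${annual_savings(20e6, 0.09):,.0f}")
```

Because the contracted rate is fixed while the market rate fluctuates, the saving realized in any given year depends on where the market rate sits relative to the 2.2 cents per kilowatt hour contracted price.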
For the other PPA project we reviewed, installation officials reported that the project terms were still profitable and estimated that without the PPA, the cost for electricity would be about 80 percent higher than the cost they were getting through the PPA. However, according to the officials, a state regulation governing electricity usage requires that the installation obtain a specific amount of electricity from the utility company. Installation officials told us that they had to curtail some of the project's own energy production to meet this requirement, which resulted in the project not always operating at the capacity they had planned. For example, contractor documents show that from September 2013 through August 2014, the electricity generated monthly by the project was curtailed by between 0.1 and 14.2 percent.

In our review of eight military service ESPC projects that had reported achieving or exceeding their guaranteed cost savings, we found that the cost savings may have been overstated or understated in at least six of the eight projects. Expected cost and energy savings for ESPC projects are established during project development, finalized when the contract is awarded, and measured and verified over the course of a project's performance period. These savings can include reductions in costs for energy, water, operation and maintenance, and repair and replacement directly related to a project's energy conservation measures. ESPC projects generally include two types of expected savings: (1) proposed cost and energy savings, which contractors estimate will result from the energy conservation measures installed, and (2) guaranteed cost savings, which must be achieved for the contractor to be fully paid. For five of the six projects where we found that cost savings may have been overstated or understated, we identified two key factors that have affected reported savings—project modifications and agency operation and maintenance actions.

Project modifications—We found that the installations had modified some of the energy conservation measures in at least four of the ESPC projects for which cost savings may have been overstated or understated. Specifically, we found instances where officials had completed demolitions or renovations to facilities where energy conservation measures were installed or had demolished equipment. For example, at one installation, the most recent measurement and verification report indicated that buildings associated with the project savings were demolished or scheduled to be demolished in four of the nine years following the project's completion. Based on the contractor's report, we calculated that the building demolitions, closings, and renovations negated approximately 30 percent of the project's annual cost savings for 2016. According to the report, these changes have compromised the project to such an extent that the contractor recommended the service modify the contract with a partial termination for convenience to buy out portions of the project where changes have occurred and savings were affected. At another installation, the measurement and verification report indicated that cost savings came directly from those cost savings that were established in the contract and did not reflect equipment that had been demolished or required repair.
In its report, the contractor verified that the equipment was in place and documented issues that negatively affected the energy conservation measures, but it did not adjust the savings to account for those issues. In both of these cases, the contractor continues to report meeting guaranteed cost savings levels and the service is required to continue making its full payment. Project modifications can occur, such as when missions change at an installation, but we found that these changes may prevent the project's cost savings from being fully realized.

Agency operation and maintenance actions—We found that such agency actions were identified as an issue for the ESPCs in our review and may have reduced the savings realized for five of the six ESPC projects for which cost savings may have been overstated or understated. Specifically, we found instances where the measurement and verification reports identified that some replacement items were installed incorrectly or left uninstalled and some light fixtures and sensors were poorly maintained or removed. For example, at one installation, the most recent measurement and verification report showed that base personnel had disabled installed energy conservation measures, such as by installing incorrect lamps or removing lighting control sensors, and that these actions, coupled with abandoned or faulty equipment, had reduced cost savings for this project. Contractors are generally not required to reduce the amount of savings they report or to measure the effect of project changes for which they are not responsible. The contractor stated that the energy and cost savings in its measurement and verification report were derived directly from the calculated energy and cost savings negotiated as a part of the original contract and do not reflect reductions due to abandoned equipment or other factors outside of the contractor's control. At another installation, the most recent contractor measurement and verification report indicated that some bulbs were burned out and lighting fixtures were dirty. As a result, the contractor lowered the calculated savings for the lighting energy conservation measure for that year, while also noting that the savings still exceeded the proposed savings for that measure. Officials at one installation described the challenge of preventing installation personnel from acting in ways that detract from the projects' energy savings, such as by removing low-flow shower head controls, adjusting water temperatures, or removing or adjusting heating and cooling controls.

In June 2015, we described similar factors as potentially reducing energy savings on select ESPC projects in seven federal agencies, including the Air Force, Army, and Navy. In that review, we found that when factors beyond the contractor's control reduce the savings achieved, the contractor is generally not required either to reduce the amount of savings it reports or to measure the effects of such factors on reported savings. Further, we reported that agencies were not always aware of the amount of expected savings that were not being achieved among their projects, in part because contractors generally do not provide this information in their measurement and verification reports. In the 2015 report, the savings estimates that were reported but not achieved ranged from negligible to nearly half of a project's reported annual savings.
As a result, we recommended then that the Secretary of Defense specify in DOD guidance or ESPC contracts that measurement and verification reports for future ESPC projects are to include estimates of cost and energy savings that were not achieved because of agency actions, and DOD agreed with our recommendation. Given similar findings with respect to the ESPC projects we examined as part of this review, we continue to believe that our 2015 recommendation is valid. The military departments have varying approaches for verifying whether all of their alternatively financed UESCs are achieving expected savings. Army, Navy, and Air Force officials described their processes and guidance for verifying savings for their UESCs, and we found that they did not consistently follow all requirements in both DOD and Office of Management and Budget guidance. Alternatively financed UESCs must meet certain requirements in order to allow the use of private sector funding to develop the project and to have the ability to repay the project, generally using appropriated funds over the contract term instead of having to fund the entire project cost up front. Additionally, according to DOD’s 2009 instruction, repayments for UESCs are based on estimated cost savings generated by the energy conservation measures, although energy savings are not necessarily required to be guaranteed by the contractors. This instruction further requires DOD components to verify savings to validate the performance of their energy efficiency projects, thereby providing assurance that such projects are being funded with generated savings or as agreed to in specified contracts. Specifically, the instruction requires the military departments to track all estimated and verified savings and measurement and verification information for its energy projects. Tracking and verifying savings associated with such alternatively financed energy projects is necessary because the projects require a long-term investment from the department—in some cases allowing the military services to budget for these projects for a period of up to 25 years—and it is not until contractors have been fully repaid for the costs of the energy conservation measures and related contract costs that agencies retain any savings the project continues to generate. In addition, DOD uses guidance issued by the Office of Management and Budget. Specifically, in 2012, the Office of Management and Budget updated guidance, stating that UESCs may be scored on an annual basis if the UESC requires performance assurance or savings guarantees and measurement and verification of savings through commissioning or retro-commissioning. According to officials from the Office of the Secretary of Defense, the department has interpreted the Office of Management and Budget guidance as giving federal agencies the option of requiring either performance assurance, savings guarantees, or measurement and verification for UESCs. Each of the various techniques provides a different level of assurance that the installed equipment is functioning as designed and the project is performing as expected, but the Office of Management and Budget’s guidance does not specify the type of measurement technique required. 
Also, Office of the Secretary of Defense officials stated that the military services are required to adhere to the Office of Management and Budget's guidance in order to determine whether they can enter into an alternatively financed agreement, and then adhere to the requirements for determining whether the project is performing as expected. As noted earlier, energy savings for UESCs are not necessarily required to be guaranteed by contractors, and repayments are usually based on estimated cost savings generated by the energy conservation measures.

We found that the guidance issued by both DOD and the Office of Management and Budget requires verification of savings for UESCs, though the requirements differ. The DOD instruction requires the military services to track all estimated and verified savings and measurement and verification information for their energy projects, while the Office of Management and Budget requirement is for measurement and verification through commissioning and retro-commissioning rather than on an ongoing basis through the life of the project. We found that DOD's interpretation of this Office of Management and Budget requirement—which DOD officials said gives the military departments the option of having either performance assurance, savings guarantees, or measurement and verification at certain points for UESCs—differs from the department's own guidance. Additionally, DOD's interpretation of this guidance has resulted in the military departments developing varying approaches for verifying savings of their UESC projects. The Navy has taken and the Air Force is taking steps to require that all UESC projects be assessed to determine actual savings, with approaches focused more on measurement and verification as opposed to performance assurance, whereas Army officials told us that they do not plan to require measurement and verification for their UESCs. Specifically:

Navy: The Commander, Navy Installations Command, issued guidance in March 2015 requiring Navy installation officials to assess all Navy UESC projects to verify energy project savings through measurement and verification. According to the guidance, the installations will report on their energy and cost savings each year to enable the Commander, Navy Installations Command, to monitor the effectiveness of UESC projects because the Navy has significantly increased its investment in ESPC and UESC projects and will use this analysis to help manage risk. The Navy's assessment will be conducted with the Navy's energy return-on-investment tool, which is a set of project tools used to conduct analysis and track project requirements.

Air Force: The Air Force has engineering guidance that addresses management of UESCs, but headquarters officials told us that this guidance includes only the standard requirement of performance assurance for these projects. According to officials, the Air Force is developing a UESC manual, which it expects to complete in September 2017, to replace the existing guidance. These officials stated that the manual will include a measurement and verification requirement for UESCs that will adhere to the same levels required for ESPCs. However, headquarters and engineering center officials stated that the two alternative financing arrangements may continue to have some differences in requirements.

Army: The Army had not issued guidance for its UESCs at the time of our review, according to an Army headquarters official, and instead was relying on its ESPC policy manual to guide its UESC projects.
The official told us that the Army is working to issue UESC guidance that is similar to that for ESPCs, but stated that the guidance will not include a requirement to perform measurement and verification of these projects. The official stated that although the Army cannot be completely certain that savings levels are being achieved using performance assurance, the current approach provides an acceptable level of assurance while avoiding the increased costs associated with performing measurement and verification.

We found that the military services have taken different approaches to verifying the savings associated with UESCs because DOD has not clarified requirements in guidance that reflect the intent of the department and the Office of Management and Budget. Verification of savings to validate project performance of all alternatively financed energy projects across the department is necessary to ensure that the projects are meeting expected energy and cost savings and fulfilling DOD's requirement that these projects be paid for entirely through the projects' generated cost savings. This verification would help the military services ensure they are appropriately budgeting over the life of the contract for these projects, which are expected to increase in number. Specifically, in 2016, DOD issued a rule amending the Defense Federal Acquisition Regulation Supplement that authorizes a contract term limit for UESCs for a period up to 25 years, which is also the limit allowed for ESPCs. Without updated and clear guidance about requirements on how the military departments should verify savings associated with UESCs, the military services will likely continue to interpret DOD guidance differently and are likely to take inconsistent approaches to assuring the performance of UESC projects, which could limit DOD's visibility over projects that commit the departments to long-term payments.

Alternative financing arrangements provide the military services the opportunity to partner with the private sector to finance energy projects; however, there are benefits and disadvantages to these projects. DOD and military service officials we contacted regarding their renewable energy generation, energy efficiency, power generation, and energy security projects identified benefits to financing energy projects through alternative arrangements, including funding projects that otherwise would not be funded through appropriations, shorter time frames, and the availability and expertise of personnel to implement and manage such projects, as described below.

Funding Projects—At the military department level, officials told us that alternative financing arrangements enabled them to fund energy projects they might not otherwise have been able to pay for due to limited appropriated funding for developing and implementing such projects and the need to use their service budgets for mission requirements. We previously reported that implementing projects to meet energy requirements and goals can be costly, and obtaining up-front appropriations for such projects has been particularly challenging for agencies because of constrained federal budgets. The military services' reliance on alternative financing arrangements has enabled them to more easily take on larger projects and combine several different energy conservation measures or installations into one contract rather than undertaking them individually over time.
Of the eight installations we contacted whose contracts had a renewable energy component, officials at six of those installations told us that they would not have been able to undertake those projects without the use of alternative financing arrangements or would have had to scale down the scope of the projects. For example, one official told us that the military service’s ability to fund its large solar arrays, which cost over $1 million to develop, would not have been a viable option for the installation with up-front appropriations because mission requirements take priority over energy conservation or renewable energy production. Some officials also said that power purchase agreements are useful from a budget standpoint because the installation does not have to provide financing for the project but rather pays for the energy that is produced through its energy bill. According to agency officials, alternative financing arrangements may also save operation and maintenance costs because, in many cases, using alternative financing arrangements results in the contractor installing new equipment and sustaining that equipment during the contract performance period. Officials from one service told us that energy efficiency does not decline over the life of the project because the contractor brings the project to industry standards and then maintains the project over the course of the contract. Officials at two military service headquarters told us that it would be challenging to operate and fully maintain the equipment installed for energy projects funded through up-front appropriations because funds for maintaining equipment are also limited. Some officials at the installation level stated that alternatively financed energy projects can assist with budget certainty, as many of the contracts require the utility or energy company to cover operation and maintenance costs for installed equipment and equipment replacement costs over the life of the contract, compared to funding those ongoing costs each year through their appropriated funding. Further, some installation officials noted that the initial assessments for large energy projects were generally rolled into the costs of the contracts and the installation would have had to pay those costs up front if they had to fund those aspects of the projects. Other installation officials commented on the budgeting certainty these alternative financing arrangements provide. For example, according to one Marine Corps official, ESPCs provide a benefit during the utility budgeting and programming process. With an ESPC, a large portion of the utility budget is constant for many years out, which decreases the number of variables, such as weather conditions and usage, in the utility budget that must be considered in the budget forecasting process, resulting in more accurate budgeting. Time frames—We previously reported that officials told us, for renewable energy projects funded through military construction appropriations, it can take a military service three to five years from project submission through the beginning of construction because of the length of the budget and appropriations cycle. Some officials representing installations in our sample also considered the reduced time frames for developing an energy project to be another benefit of using alternative financing arrangements. 
For example, through these arrangements, officials can bundle several smaller projects together into a single package as opposed to implementing the projects individually over the course of several years. In addition, according to some officials, working with a local utility or energy company to develop large energy-saving projects can take much less time than attempting to achieve the same results through the military construction process. For example, officials at Naval Base Kitsap-Bangor, Washington, said its multiphase UESC, which includes replacing exterior, street, and parking lot lighting on several installations with new energy-efficient technology, has been implemented faster than it could have been through another approach. Some other installation officials said that using the indefinite-delivery, indefinite-quantity ESPC contract vehicles awarded through the Department of Energy or the U.S. Army Corps of Engineers or working through the service engineering commands for ESPCs and UESCs took much less time and was less cumbersome than going through the services' acquisition process for new equipment.

Expertise or Availability of Personnel—Officials we met with at six installations said they often did not have personnel at the installation level with the needed expertise or in sufficient numbers to assist in the development, operation, and maintenance of such projects. For example, officials at one installation said that the energy service company had personnel with the technical expertise to do some things, such as development of life-cycle cost analyses and measurement and verification, better than installation officials. Officials at another installation cited a shortage of personnel at the time the contract was awarded that made it challenging to operate and maintain energy projects. Installation energy managers were able to work around some of these personnel constraints by including requirements for contractors to operate and maintain the installed energy conservation measures, including repairing and replacing equipment as needed during the performance period. We reported in 2016 that working with private developers allows DOD to leverage private companies' expertise in developing and managing projects and limits the number of personnel DOD has to commit to projects.

In addition to some of the benefits they described, officials identified some disadvantages of using alternative financing arrangements for their energy projects, including higher overall costs, a delay, relative to projects funded with up-front appropriations, in their ability to take advantage of savings, and risks associated with long-term financial obligations. First, some officials said that the overall costs over the contract term are generally higher than the costs of projects funded using up-front appropriations. For example, for one of the ESPCs we reviewed, we found that the estimated cost for using alternative financing was about 15 percent higher than if the project had been funded using up-front appropriations. In 2004, we reported that alternative financing arrangements may be more expensive over time than full, up-front appropriations since the federal government's cost of capital is lower than that of the private sector. Second, some officials noted that they would prefer to use appropriated funds for projects because with alternative financing arrangements, the installation pays the energy service company out of the savings rather than retaining those savings.
As a result, when relying on alternative financing for energy projects, installations do not actually realize the savings until after the contract is completed, which could be up to 25 years later for ESPCs and UESCs. Similarly, although spreading costs over 25 years may provide greater certainty for installation utility budgets, these arrangements also tie up those funds over that period, resulting in less flexibility in managing future budgets. Third, officials at one service headquarters stated that the risk associated with a 20- to 25-year contract can pose a disadvantage, such as in cases where a base realignment or closure action occurs.

There are different costs associated with the implementation of the ESPCs and UESCs we selected for our review, and we found some potential costs that may add to the overall cost of a project or may not always be included in total contract payments. In our review of the life-cycle cost analyses and contract documentation for the selected ESPCs and UESCs in our sample, we found that contracts varied in how they funded other potential costs associated with the projects, such as operation and maintenance and the repair and replacement of installed equipment, as well as some other energy project costs that may or may not be included in the payment to contractors.

Operation and maintenance costs—Officials representing installations in our sample identified different approaches for how they manage the costs for operation and maintenance of their alternatively financed energy projects, and those costs may not always be included in the total contract costs. As noted earlier, one benefit of alternative financing arrangements that military service officials identified is the reduced risk and savings in operation and maintenance costs that can be achieved when a contractor installs and sustains the energy conservation measures. According to officials, the ongoing and periodic maintenance of the equipment by the contractor that is generally provided by ESPCs can free limited installation budgets for other maintenance requirements. Further, with UESCs, operation and maintenance costs associated with the project may decrease, but responsibility for paying these costs usually remains with the agency rather than the utility. In addition, depending on the contract terms, contractors are not always responsible for operation and maintenance of all of the energy conservation measures for a project. In these cases, an installation would provide manpower, spare parts, and potentially replacement equipment during the life of the contract. Based on our review of select projects, we found different ways in which the installations approached the funding for these costs. For example, officials at one installation decided not to include the costs for operation and maintenance services in the contract. The officials instead opted to have the contractor that was already providing operation and maintenance support for the facility continue to provide these services for all of the equipment. They also reported that the cost of that operation and maintenance contract was reduced due to the efficiencies that came with some of the measures installed through the ESPC, which resulted in manpower savings.
At another installation, however, officials opted to have the ESPC contractor take over maintenance not only of equipment installed as part of the contract, but also of existing equipment in the same buildings that had been maintained by the base operating support contractor so that the installation would not have two contractors maintaining different parts of systems within the same building.

Repair and replacement funds—Some contracts also establish and manage repair and replacement accounts using either an installation's operation and maintenance funding or the savings from the energy conservation measures. These accounts set aside funds to cover the costs of repairing or replacing equipment that fails during the contract performance period, allowing the installation to ensure the continued operation and maintenance of equipment installed as part of the alternatively financed project for which the contractor or utility may not have responsibility, such as items not covered under warranty or items manufactured by another company. These accounts are included in the total contract costs. In our review of select ESPC projects, we found different ways in which these accounts were established and operated. For example, at one installation, officials told us they set up two repair and replacement accounts that are part of their monthly payments, which cover repairs to installed equipment not covered under the contract, such as equipment that was not manufactured by the contractor and controls components that were integrated onto the existing system. Funding for these two accounts is included in the installation's annual payment to the contractor, and unused funds in the larger contractor equipment repair and replacement account roll over into the next year to cover any required maintenance as well as the replacement of equipment at the end of the contract term, if needed. At another installation, the ESPC was established with an account for repair and replacement funds to cover costs other than normal preventive maintenance, and this account is also funded annually as part of the payment to the contractor. According to installation officials, the purpose of this account is to have funding available to pay for a larger piece of equipment in case it needs to be replaced, and unused funding for this account is also expected to roll over and be available in future years. Installation officials told us that because labor is a large part of the repair and replacement of equipment, the account has generally been drawn down in full each year and there generally have not been funds available to roll over into the next year.

Other energy project costs—There are some costs associated with energy projects that installations may incur regardless of the funding arrangement used—some of which may not be included in the total contract costs—and other costs that installations may choose to pay in order to bring down the total contract payments. For example, regardless of the funding source used, a project may require land valuations or environmental assessments. There are also project development costs, such as design and engineering services, as well as preliminary energy surveys for identifying potential energy conservation measures, which the contractor or utility may prepare and fund.
Additionally, officials from the military service headquarters told us that some alternatively financed energy projects are managed by other DOD or federal entities, such as the Army Corps of Engineers or the Department of Energy, which may require contract administration fees that are paid either through a one-time up-front payment or at least annually through the life of the contract. For example, at two installations we visited, officials told us they would be paying either the U.S. Army Corps of Engineers or the Naval Facilities Engineering Command for items such as developing a request for proposal; conducting life-cycle cost analyses; and providing supervision, inspection, and overhead services whether they used up-front appropriations or alternative financing arrangements for their energy projects. These costs would not be included in the total costs because they are paid to the contracting officer at the federal agency rather than to the energy service company or utility. For example, officials at one installation told us they paid approximately $23,000 to the Naval Facilities Engineering Command for project management and oversight of the installation’s UESC. Finally, we found instances where installations used some up-front funds to reduce the amount financed for their projects. These up-front payments were still included in the total payments to the contractors, but the installations were able to reduce the amount on which they had to pay interest, thereby reducing the total amount they would have owed had they not made the up-front payments. For example, at one installation, we found that the total amount financed for the project was less than the cost to implement the project because the installation paid almost $2 million up front in pre-performance payments. According to an installation official, the installation had planned to repair some mechanical systems and had already set aside Facilities Sustainment, Restoration, and Modernization funds for this project. With the ESPC, the installation was able to use those funds to instead pay for more energy-efficient technologies to replace rather than repair those systems, using the funds to reduce the amount to be financed for the ESPC. DOD has taken various actions to meet its needs as the largest energy consumer in the federal government, including diversifying power sources, implementing conservation and other efficiency actions to reduce demand, and relying on private-sector contracts through alternative financing arrangements in lieu of using up-front appropriations to fund energy projects. Since 2005, DOD has awarded 464 contracts for alternatively financed energy projects. While DOD guidance requires the military services to track and store data related to energy projects, the military services have not collected complete and accurate data or consistently provided the data to the military department or DOD headquarters level on an annual basis to aid DOD oversight and to inform Congress. If DOD does not require the military services to provide DOD with complete and accurate data on all alternatively financed energy projects, decision makers within the department and Congress may not have all information needed for effective oversight of these projects, which represent long-term budgetary commitments for periods of up to 25 years. 
Confirming savings and validating project performance of all alternatively financed energy projects are necessary to ensure that the projects are meeting expected energy and cost savings and that the military services are appropriately budgeting for the projects over the life of the contract. The military services have taken some steps to verify project performance and confirm savings, and the alternatively financed energy projects we reviewed that were operational reported achieving expected savings or efficiencies. However, because guidance on when verification of savings is required is not clear, the military services have taken varying approaches for confirming UESC savings and lack full assurance that expected savings are being realized for the entirety of their UESC projects. DOD's guidance requires the military departments to track estimated and verified savings and measurement and verification information for all energy projects, whereas the Office of Management and Budget guidance states that UESCs may be scored on an annual basis if the UESC requires performance assurance or guarantees and measurement and verification of savings at specific points in time—commissioning and retro-commissioning—rather than on an ongoing basis through the life of the project. However, DOD's interpretation of this Office of Management and Budget requirement is that the military departments have the option of conducting either performance assurance, savings guarantees, or measurement and verification for UESCs, which differs from the department's own guidance on verification of savings for all energy projects. Without updated and clear guidance on how the military departments should be taking steps to verify savings associated with UESC projects to validate project performance, the military services will likely continue to interpret DOD guidance differently and are likely to take inconsistent approaches to assuring the performance of UESC projects, which could limit DOD's visibility over projects that commit the departments to long-term payments.

To assist DOD and Congress in their oversight of DOD's alternatively financed energy projects, we recommend that the Secretary of Defense direct the military services to collect complete and accurate data on their alternatively financed energy projects, including data on the services' financial obligations and cost savings, and provide the data to DOD at least annually to aid departmental oversight. To help ensure that the military departments conduct the level of assessment required to assure the performance of their UESC projects over the life of the contract, we recommend that the Secretary of Defense direct the Office of the Assistant Secretary of Defense (Energy, Installations and Environment) to update its guidance to clarify the requirements for the verification of savings for UESC projects.

We provided a draft of this report for review and comment to DOD and the Department of Energy. In written comments, DOD concurred with our first recommendation and nonconcurred with our second recommendation. DOD's comments on this report are summarized below and reprinted in their entirety in appendix II. In an e-mail, the audit liaison from the Department of Energy indicated that the department did not have formal comments. DOD and the Department of Energy also both provided technical comments, which we incorporated as appropriate.
DOD concurred with our first recommendation that the Secretary of Defense direct the military services to collect complete and accurate data on their alternatively financed energy projects, including data on the services’ financial obligations and cost savings, and provide the data to DOD at least annually to aid departmental oversight. DOD nonconcurred with our second recommendation that the Secretary of Defense direct the Office of the Assistant Secretary of Defense (Energy, Installations and Environment) to update its guidance to clarify the requirements for the verification of savings for UESC projects. In its response, DOD stated that UESCs are service contracts for utility services and that the only financial requirement on federal agencies is the obligation of the annual costs for these contracts during each year that the contract is in effect. The department stated that there is no statutory requirement for annual measurement and verification of the energy, water, or cost savings, or a contractual guarantee of those savings as there is for ESPCs. However, the department noted that DOD will continue to require its components to accomplish necessary tasks to assure continuing performance of the equipment or systems installed in a UESC to ensure expected energy and/or water consumption and cost reductions. We agree that UESCs do not include guaranteed cost savings. In response to DOD’s comments, we made changes to the draft report to emphasize that, while UESCs do not include guaranteed cost savings, repayments for UESCs—which can commit the department to a contract term limit for a period of up to 25 years—are based on estimated cost savings generated by the energy conservation measures. Thus, verification of savings to validate project performance is necessary to ensure that the projects are meeting expected energy and costs savings required to fulfill the requirement that these projects be paid for entirely through the projects’ generated cost savings. We further noted in our report that guidance from DOD does not align with that of the Office of Management and Budget, and this misalignment results in the military services taking different approaches to validating achievement of benefits expected from these UESC projects. In addition, we did not recommend that the department annually measure and verify UESC projects. Rather, we recommended that DOD clarify and update its guidance for verifying savings for these projects to help the military services appropriately budget for the projects over the contract’s life. Without updated and clear guidance about requirements on how to verify savings associated with UESCs, the military services will likely continue to interpret DOD guidance differently and take inconsistent approaches to assuring the performance of UESC projects. Doing so could limit DOD’s visibility over projects that commit the departments to long-term payments. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Commandant of the Marine Corps; and the Secretary of Energy. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix III. To evaluate the extent to which the military services have financed energy projects with alternative financing arrangements since 2005 and collected and provided the Department of Defense (DOD) complete and accurate data on those projects, we reviewed available data and documentation on the alternatively financed energy projects that had previously been either reported by DOD or the Department of Energy in its published documents or collected by us or other audit agencies during previous reviews. Based on these criteria, we scoped our review to focus on the following types of alternatively financed energy projects for which the military services had awarded contracts from fiscal years 2005 through 2016: Energy Savings Performance Contracts (ESPC), Utility Energy Service Contracts (UESC), and Power Purchase Agreements (PPA). We focused on this time frame because with the passage of the Energy Policy Act of 2005, the military services began contracting for more alternatively financed energy projects. Moreover, we reported in 2005 that data prior to this time was incomplete. We included 2016 data as they capture the most recent full fiscal year of data. We reviewed data on projects awarded for installations in the United States and excluded the territories and other overseas installations. We developed a data collection instrument to confirm the completeness and accuracy of data we already had on existing alternatively financed energy projects, obtain any missing or revised data on those projects, and gather information on projects that had been awarded since our previous reviews. We pre-populated our data collection instrument for each of the military services using data from the following sources: Project level data from DOD’s Annual Energy Management Reports for fiscal years 2011 through 2015; A list of 10 USC 2922a PPA projects provided by Office of the Secretary of Defense officials; Data from our prior reviews on renewable energy project financing using both appropriated funding and alternative financing arrangements and on ESPCs for the military services; and Publicly available data from the Department of Energy on DOD projects funded using its indefinite-delivery, indefinite-quantity contract. In order to obtain consistent data among the services, for each spreadsheet in the data collection instrument, we developed separate tabs containing the pre-populated data on the three types of alternative financing arrangements on which we focused our review. For each type of arrangement, we also developed a separate definitions sheet that explained the data we were requesting so that the services would be responding with consistent data. We provided these pre-populated spreadsheets to the military services and requested that they verify existing information, provide additional information, and add new projects, as appropriate, in order to obtain data on the universe of these projects for the specified time period. We then discussed with those officials any questions we had about the quality and completeness of the data that were provided. While we took these steps to identify all of DOD’s alternatively financed energy projects since 2005, the data reflected may not represent the entire universe of projects. 
In addition to the data above, we reviewed key guidance that DOD provides to the DOD components on managing installation energy, including DOD Instruction 4170.11, Installation Energy Management, and DOD guidance letters on developing energy projects. We also reviewed the DOD instruction to learn about the requirement for the military departments to maintain a utility energy reporting system to prepare data, including data on energy consumption and costs, for the Annual Energy Management Report to determine DOD’s visibility over the energy projects. We reviewed guidance from the Department of Energy’s Federal Energy Management Program on alternative financing arrangements, including its overviews of the different arrangements and national lab reports on agencies’ use of these arrangements. We reviewed the relevant statute to determine what, if any, requirements applied to DOD’s data collection efforts related to energy projects. We then reviewed the project information provided by the military services for the presence of certain data points, such as total contract costs, estimated cost savings, and the length of the contract, and compared the military services’ tracking of their data on alternatively financed energy projects to DOD’s guidance and statutory requirements for tracking such data. We reviewed the data we collected for completeness and accuracy and estimated the total number of ESPCs, UESCs, and PPAs for each of the military services as well as the total contract costs, where available. We excluded from our analysis those UESCs for which the military departments had identified a contract term of one year or less or for which a project had previously been identified in DOD reporting but had not ultimately been funded as an alternatively financed energy project. We assessed the reliability of the data we received by interviewing DOD officials and comparing the multiple data sets we received from the military services with data reported in the Annual Energy Management Report and obtained through prior reviews to ensure that there was consistency in the data provided. We determined that the data were sufficiently reliable for meeting our objective. We compared DOD’s data collection efforts with Standards for Internal Control in the Federal Government, which identify standards for collecting and providing accurate and complete data. We also reviewed guidance documentation from the military services on developing and managing energy projects, including the Army’s guide for developing renewable energy projects, the Air Force’s instructions on cost analyses and business case analyses, and the Navy and Marine Corps energy project management guide. We met with officials from Office of the Secretary of Defense; the military departments; and the military departments’ engineering, installation, or contracting commands to discuss their guidance and policies on how they managed and tracked their alternatively financed energy projects and the availability of data on such projects. Finally, we spoke with Office of the Secretary of Defense officials about the President’s Performance Contracting Challenge, which challenged federal agencies to enter into a total of $4 billion in performance-based contracts, including ESPCs and UESCs, by the end of calendar year 2016, to gain an understanding of the results of DOD’s participation in this effort. 
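As a hypothetical illustration of the kind of completeness review described above, the short Python sketch below tallies how many project records lack a total contract cost, by service and contract type. The field names and records are invented for illustration only; they are not drawn from our data collection instrument or from the services' actual project data.

```python
# Hypothetical illustration of a completeness check on project records.
# Field names and records are invented; they are not the actual instrument or data.
from collections import Counter

projects = [
    {"service": "Army",      "type": "ESPC", "total_contract_cost": 12_500_000},
    {"service": "Army",      "type": "UESC", "total_contract_cost": None},
    {"service": "Navy",      "type": "ESPC", "total_contract_cost": 8_200_000},
    {"service": "Air Force", "type": "UESC", "total_contract_cost": None},
]

totals = Counter((p["service"], p["type"]) for p in projects)
missing = Counter(
    (p["service"], p["type"]) for p in projects if p["total_contract_cost"] is None
)

for service, contract_type in sorted(totals):
    print(
        f"{service} {contract_type}: missing total contract cost for "
        f"{missing.get((service, contract_type), 0)} of {totals[(service, contract_type)]} projects"
    )
```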
To assess the extent to which the military services reported achieving expected savings and verified the reported performance of selected projects, we reviewed agency-level guidance on the different levels of measurement and verification or performance assurance that are required for alternatively financed energy projects, such as DOD’s instruction on installation energy management and the Department of Energy’s most recent guidelines for measurement and verification and performance assurance, to determine requirements for measuring savings for the different types of projects. Using the data on the alternatively financed energy projects that we obtained from the military services, we selected a nongeneralizable sample of 17 projects as case studies to discuss during our site visits and to evaluate how those projects reported achieving their estimated savings and the extent to which installation officials verified those reported savings. We then compared measurement and verification efforts for the 13 projects in our nongeneralizable sample that were already in operation with DOD guidance requiring measurement and verification for all energy projects to determine the extent to which installation officials followed guidance requiring verification of savings. We also collected and analyzed data and documentation on the expected and reported savings for the 17 projects in our sample to assess the extent to which the estimated savings compared to the savings that were reported and we documented reasons for any differences. We assessed the reliability of the project data by reviewing the internal controls DOD officials used to observe and corroborate the data contractors reported in their annual measurement and verification reports; the data collection and monitoring the officials did for performance assurance; and the data the officials used to assess project savings. We determined that the data were sufficiently reliable for our purposes of describing the extent to which the military services reported achieving expected savings and verified the reported performance of selected projects. For the eight ESPC projects in our sample that were operational, we collected and analyzed the most recent measurement and verification report to identify the guaranteed savings that were expected and the savings that were being reported by the contractor. We then interviewed military service officials at the installations we visited to discuss these projects and the reported results of their latest measurement and verification report or other assessment. We also talked with officials from the installations and, in some cases, also with officials from the installations’ supporting engineering or contracting commands, about how they verified the savings for the three UESC projects and the two PPAs in our sample and to learn about how they developed, managed, and tracked these alternatively financed projects. For UESCs, we reviewed DOD guidance outlining requirements to conduct measurement and verification and compared that with the requirements outlined in Office of Management and Budget guidance. We also contacted officials from the Office of the Secretary of Defense and the military departments to discuss their current and planned guidance related to measuring, verifying, and reporting the performance of UESCs during the contract performance period to assure that savings are being achieved. 
To describe the benefits and disadvantages reported by the military services, as well as potential other costs, of using alternative financing arrangements for selected energy projects rather than using up-front appropriations, we reviewed previously discussed DOD, Department of Energy, and military service guidance on the use of alternative financing arrangements and their cost-effectiveness to determine the requirements for life-cycle cost analyses and how project costs are identified in contracts and other documents. For the 17 selected projects in our nongeneralizable sample, we collected project planning documentation and reviewed available life-cycle cost analyses and contract documentation for those projects to obtain information on how costs were identified and where they were documented. Additionally, we interviewed officials at the installations in our sample, their contracting or engineering commands, or their military service headquarters to discuss the projects in our sample, including the benefits and disadvantages of using alternative financing arrangements for those energy projects. We also discussed how the individual contracts identified the costs to operate and maintain the energy conservation measures or power-generating equipment for the selected energy projects in our sample as well as any costs associated with the projects that might not be reflected in the total contract costs. For one ESPC in our sample, we also compared the costs of the alternative financing arrangement with the use of up-front appropriations by calculating the present value of the costs had the government directly incurred the debt to finance the amount that had instead been financed by the energy service company; a simplified illustration of this type of comparison appears below. In addition, we interviewed officials from the Department of Energy's Federal Energy Management Program about federal policies and guidance related to alternative financing arrangements for energy projects and from the General Services Administration about that agency's area-wide contracts with utility companies to gain an understanding of issues related to the benefits and costs of such projects.

In order to select installations and identify case studies from which we gathered information for our objectives, we used data collected in response to our request to the military services. We developed a nongeneralizable sample representing 17 projects at 11 installations that had awarded an ESPC, a UESC, or a PPA between fiscal years 2005 and 2016. Our case studies included 11 ESPCs, 3 UESCs, and 3 PPAs. We selected our case studies to identify projects representing:

- Each of the military services, including one from the reserve component;
- The different types of alternative financing arrangements (ESPC, UESC, and PPA);
- The year the contract was awarded;
- The different types of contracting vehicles (Army or Department of Energy, General Services Administration area-wide, or standalone contract); and
- Different project types (energy efficiency, energy cost savings, and power generation).

We included at least three large-scale renewable energy projects, which we defined as projects with a generating capacity of 10 megawatts or greater. We also attempted to include projects that were both operational and had not been included in other recent audits by us or other audit agencies. Finally, we considered geographic variation when selecting sites.
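The present-value comparison mentioned above can be sketched as follows. This is a simplified illustration rather than our actual calculation: the payment amount, contract term, and the private-sector and Treasury rates are placeholder assumptions, and a real comparison would use the contract's actual payment schedule and the applicable discount rates.

```python
# Simplified sketch of comparing alternative financing against up-front funding.
# The payment stream, term, and rates below are placeholder assumptions, not values
# from the ESPC we analyzed.
def present_value(cash_flows, rate):
    """Discount a sequence of end-of-year cash flows to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

annual_payment = 1_500_000   # assumed annual payment to the energy service company
term_years = 20
private_rate = 0.05          # assumed private-sector financing rate
treasury_rate = 0.03         # assumed government borrowing rate

payments = [annual_payment] * term_years

# Approximate up-front cost: the amount the contractor's payment stream would
# finance at the private-sector rate.
upfront_equivalent = present_value(payments, private_rate)

# Cost of the same payments to the government, discounted at its lower cost of capital.
pv_of_financed_payments = present_value(payments, treasury_rate)

premium = pv_of_financed_payments / upfront_equivalent - 1
print(f"Approximate up-front cost: ${upfront_equivalent:,.0f}")
print(f"Present value of financed payments: ${pv_of_financed_payments:,.0f}")
print(f"Estimated financing premium: {premium:.0%}")
```

With these placeholder values the premium works out to roughly 19 percent; the approximately 15 percent difference we found for the ESPC in our sample reflects that project's actual payment schedule and rates.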
In addition to discussing these alternatively financed projects with installation officials, we also observed selected energy conservation measures that had been installed. Table 4 outlines the installations we visited or contacted during our review. In addition, for each of our objectives, we contacted officials and, when appropriate, obtained documentation from the organizations listed below:
Office of the Secretary of Defense: Office of the Assistant Secretary of Defense, Energy, Installations and Environment
Assistant Secretary of the Army (Installations, Energy and Environment), Deputy Assistant Secretary of the Army for Energy and Sustainability and its Office of Energy Initiatives
Assistant Chief of Staff for Installation Management
Headquarters, U.S. Army Corps of Engineers
Director, Shore Readiness Division (N46)
Assistant Secretary of the Navy for Energy, Installations and Environment, Deputy Assistant Secretary of the Navy (Energy)
Renewable Energy Program Office
Headquarters, Naval Facilities Engineering Command, and two Facilities Engineering Commands–Northwest and Southwest
Air Force Installations, Environment and Energy
Air Force Civil Engineer Center
Marine Corps Installations Command, Facility Operations and Energy
We conducted this performance audit from June 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kristy Williams (Assistant Director), Edward Anderson, Karyn Angulo, Michael Armes, Tracy Barnes, William Cordrey, Melissa Greenaway, Carol Henn, Amanda Miller, Richard Powelson, Monica Savoy, Matthew Spiers, Karla Springer, and Jack Wang made key contributions to this report.
Defense Infrastructure: Actions Needed to Strengthen Utility Resilience Planning. GAO-17-27. Washington, D.C.: November 14, 2016.
Renewable Energy Projects: Improved Guidance Needed for Analyzing and Documenting Costs and Benefits. GAO-16-487. Washington, D.C.: September 8, 2016.
Defense Infrastructure: Energy Conservation Investment Program Needs Improved Reporting, Measurement, and Guidance. GAO-16-162. Washington, D.C.: January 29, 2016.
Defense Infrastructure: Improvement Needed in Energy Reporting and Security Funding at Installations with Limited Connectivity. GAO-16-164. Washington, D.C.: January 27, 2016.
Defense Infrastructure: DOD Efforts Regarding Net Zero Goals. GAO-16-153R. Washington, D.C.: January 12, 2016.
Defense Infrastructure: Improvements in DOD Reporting and Cybersecurity Implementation Needed to Enhance Utility Resilience Planning. GAO-15-749. Washington, D.C.: July 23, 2015.
Energy Savings Performance Contracts: Additional Actions Needed to Improve Federal Oversight. GAO-15-432. Washington, D.C.: June 17, 2015.
Electricity Generation Projects: Additional Data Could Improve Understanding of the Effectiveness of Tax Expenditures. GAO-15-302. Washington, D.C.: April 28, 2015.
Defense Infrastructure: Improved Guidance Needed for Estimating Alternatively Financed Project Liabilities. GAO-13-337. Washington, D.C.: April 18, 2013.
Renewable Energy Project Financing: Improved Guidance and Information Sharing Needed for DOD Project-Level Officials. GAO-12-401. Washington, D.C.: April 4, 2012. 
Defense Infrastructure: DOD Did Not Fully Address the Supplemental Reporting Requirements in Its Energy Management Report. GAO-12-336R. Washington, D.C.: January 31, 2012.
Defense Infrastructure: The Enhanced Use Lease Program Requires Management Attention. GAO-11-574. Washington, D.C.: June 30, 2011.
Defense Infrastructure: Department of Defense Renewable Energy Initiatives. GAO-10-681R. Washington, D.C.: April 26, 2010.
Defense Infrastructure: DOD Needs to Take Actions to Address Challenges in Meeting Federal Renewable Energy Goals. GAO-10-104. Washington, D.C.: December 18, 2009.
Energy Savings: Performance Contracts Offer Benefits, but Vigilance Is Needed to Protect Government Interests. GAO-05-340. Washington, D.C.: June 22, 2005.
Capital Financing: Partnerships and Energy Savings Performance Contracts Raise Budgeting and Monitoring Concerns. GAO-05-55. Washington, D.C.: December 16, 2004.
DOD, the largest energy consumer in the federal government, has been addressing its power needs by diversifying its power resources, reducing demand, and implementing conservation projects. To address its goals for energy projects, DOD also has been using alternative financing from private-sector contracts rather than relying solely on annual federal appropriations to fund projects upfront. The House and Senate reports accompanying their respective bills for the National Defense Authorization Act for 2017 included provisions that GAO review DOD's alternatively financed energy projects. This report (1) evaluates the military services' use of alternative financing arrangements since 2005 and data collected and provided to DOD on those projects; (2) assesses reported project savings and verification of reported performance, and (3) describes benefits and disadvantages and potential other costs of using alternative financing rather than up-front appropriations. GAO analyzed and reviewed DOD data, relevant guidance, and project documentation; interviewed cognizant officials; and reviewed a nongeneralizable sample of projects. The military services have used alternative financing arrangements—entering into about 38 private-sector contracts annually from 2005 through 2016—to improve energy efficiency, save money, and meet energy goals. However, the military services have not collected and provided the Department of Defense (DOD) complete and accurate data, such as total contract costs and savings. For example, GAO was unable to identify and the military services could not provide total contract costs for 196 of the 446 alternatively financed energy projects since 2005. Furthermore, some data provided on select projects did not include the level of accuracy needed for planning and budgeting purposes. According to officials, the military services did not always have complete and accurate data because authority for entering into these projects has been decentralized and data have not been consistently maintained. As such, neither the military departments, which include the military services, nor DOD have complete and accurate data on the universe of these projects. Without complete and accurate data on all alternatively financed energy projects, decision makers will not have the information needed for effective project oversight or insight into future budgetary implications of the projects, including impacts on utility budgets. DOD's alternatively financed energy projects that GAO reviewed reported achieving expected savings. Specifically, GAO's review of 13 operational alternatively financed energy projects found that all 13 projects reported achieving their expected savings. However, the military services have varying approaches for verifying whether projected savings were achieved for all utility energy service contracts (UESC)—an arrangement in which a utility arranges financing to cover the project's costs, which are then repaid by the agency over the contract term. DOD guidance requires the military services to track estimated and verified savings and measurement and verification information for all energy projects, but DOD's guidance is inconsistent with more recent Office of Management and Budget guidance. This inconsistency and DOD's interpretation of Office of Management and Budget guidance have resulted in the military departments developing varying approaches for verifying savings of UESC projects. 
Without clear guidance from DOD on how the military services should be taking steps to verify savings associated with UESC projects, the military services will continue to interpret guidance differently and are likely to take inconsistent approaches to verifying the savings of UESC projects spanning potentially a 25-year duration. DOD and military service officials identified benefits and disadvantages, as well as other potential costs, of using alternative arrangements to finance energy projects rather than using up-front appropriations. According to officials, benefits include the ability to fund projects that would not otherwise be funded due to budgetary constraints, to complete projects more quickly, and to have expert personnel available to implement and manage such projects. However, officials also identified disadvantages, including higher costs and the risks associated with long-term financial obligations. In addition, GAO found that some potential costs for these alternatively financed energy projects, such as costs associated with operation and maintenance and repair and replacement of equipment, add to overall project costs and may not be included in the total contract payments. GAO recommends that the military services collect and provide DOD complete and accurate data on all alternatively financed energy projects and that DOD update its guidance to clarify requirements for verifying UESC savings. DOD concurred with the first recommendation and nonconcurred with the second. GAO continues to believe its recommendation is valid, as discussed in this report.
Since plutonium production ended at the Hanford Site in the late 1980s, DOE has focused on cleaning up the radioactive and hazardous waste accumulated at the site. It has established an approach for stabilizing, treating, and disposing of the site’s tank wastes. Its planned cleanup process involves removing, or retrieving, waste from the tanks; treating the waste on site; and ultimately disposing of the lower-activity radioactive waste on site and sending the highly radioactive waste to a geologic repository for permanent disposal. As cleanup has unfolded, however, the schedule has slipped, and the costs have mounted. According to DOE’s latest estimate in June 2008, treatment of the waste is not expected to begin until late 2019 and could continue until 2050 or longer. The following two figures show a tank farm and construction of waste treatment plant facilities at the Hanford Site. Most of the cleanup activities at Hanford, including the emptying of the underground tanks, are carried out under the Hanford Federal Facility Agreement and Consent Order among DOE, Washington State’s Department of Ecology, and the federal Environmental Protection Agency. Commonly called the Tri-Party Agreement, this accord lays out legally binding milestones for completing the major steps of Hanford’s waste treatment and cleanup processes. The agreement was signed in May 1989 and has been amended a number of times since then. A variety of local and regional stakeholders, including county and local governmental agencies, citizen and advisory groups, and Native American tribes, also have long-standing interests in Hanford cleanup issues. Two primary contractors are carrying out these cleanup activities; one is responsible for managing and operating the tank farms, and the other for constructing the facilities to treat the tank waste and prepare it for permanent disposal. During our review, these contractors were CH2M Hill and Bechtel, respectively. Both contracts are cost-reimbursement contracts, which means that DOE pays all allowable costs. In addition, the contractors can also earn a fee, or profit, by meeting specified performance objectives or measures. Applicable DOE orders and regulations are incorporated into these contracts, either as distinct contract clauses or by reference. For example, contractors are required to use an accounting system that provides consistency in how costs are accumulated and reported so that comparable financial transactions are treated alike. Such a system is to include consistent practices for determining how various administrative costs are assessed or how indirect costs for labor are calculated. Contractors also are required to implement an integrated safety management system, a set of standardized practices that allow the contractor to identify hazards associated with a specific scope of work, to establish controls to ensure that work is performed safely, and to provide feedback that supports continuous improvement. The system, which allows contractors to stop work when conditions are unsafe, is intended to instill in everyone working at the site a sense of responsibility for safety. This policy is reinforced by labor agreements between the contractor and its workforce that explicitly allow work stoppages as needed for safety and security reasons. With few exceptions, DOE’s sites and facilities are not regulated by the Nuclear Regulatory Commission or by the Occupational Safety and Health Administration. 
Instead, DOE provides internal oversight at several different levels. DOE’s Office of River Protection oversees the contractors directly. In addition, the Office of Environmental Management provides funding and program direction. DOE’s Office of Enforcement and other oversight groups within the Office of Health, Safety, and Security oversee contractors’ activities to ensure nuclear and worker safety. Finally, the Defense Nuclear Facilities Safety Board, an independent oversight organization created by Congress in 1988, provides advice and recommendations to the Secretary of Energy to help ensure adequate protection of public health and safety. DOE officials reported that from January 2000 through December 2008, work on the Hanford tank farms and the waste treatment plant temporarily stopped at least 31 times to address various safety or construction concerns. These work stoppages ranged in duration from a few hours to more than 2 years, yet little supporting documentation of these occurrences exists. DOE reported that of the 31 work stoppages, 12 occurred at the tank farms and 19 at the waste treatment plant. Sixteen of the work stoppages reportedly resulted from concerns about safety. A complete listing of these work stoppages is included in appendix II. These work stoppages were initiated to respond directly to an event in which property was damaged or a person was injured, or they addressed an unsafe condition with the potential to harm workers in the future. Four of these work stoppages were relatively brief, lasting less than 2 days, and were characterized by DOE and contractor officials as proactive safety “pauses.” For example, in October 2007, after a series of slips, trips, or falls during routine activities, contractor managers stopped work at the waste treatment plant site for 1 hour to refresh workers’ understanding of workplace hazards. The following two examples, for which supporting documentation was available, illustrate the types of work stoppages occurring at the Hanford Site because of safety concerns: Controlling worker exposure to tank farm vapors. Beginning in 2002, as activities to transfer waste from leak-prone, single-shell tanks to more secure double-shell tanks disturbed tank contents, the number of incidents increased in which workers complained of illnesses, coughing, and skin irritation after exposure to the tank vapors. The Hanford underground storage tanks contain a complex variety of radioactive elements and chemicals that have been extensively mixed and commingled over the years, and DOE is uncertain of the specific proportions of chemicals contained in any one tank. These constituents generate numerous gases, such as ammonia, hydrogen, and volatile organic compounds, which are purposely vented to release pressure on the tanks, although some gases also escape through leaks. During the 1990s, the tank farm contractor evaluated potential hazards and determined that if workers around the tanks used respirators, they would be sufficiently protected from harmful gases. DOE reported in 2004, however, that disturbing the tank waste during transfers had changed the concentration of gases released in the tanks and that no standards for human exposure to some of these chemicals existed. To protect workers’ health, in 2004 the tank farm contractor equipped workers with tanks of air like those used by firefighters. 
Work at the tank farms stopped intermittently for about 2 weeks as a result, in part because the contractor had to locate and procure sufficient self-contained air and equipment for all workers. Accidental spill of radioactive and chemical wastes at tank S-102. In July 2007, as waste was being pumped from a single-shell tank to a double-shell tank, about 85 gallons of waste was spilled. DOE has been gradually emptying waste from Hanford’s single-shell tanks into double-shell tanks in preparation for treatment and permanent disposal, but because the tank waste contains sludge and solids, waste removal has been challenging. Because the tanks were not designed with specific waste retrieval features, waste must be retrieved through openings, called risers, in the tops of the tanks; technicians must insert specially designed pumps into the tanks to pump the waste up about 45 to 60 feet to ground level. DOE has used a variety of technologies to loosen the solids, including sprays of acid or water to help break up the waste and a vacuum-like system to suck up and remove waste through the risers at the top. On July 27, 2007, during retrieval of radioactive mixed waste from a 758,000-gallon single-shell tank, a pump failed, spilling 85 gallons of highly radioactive waste to the ground. At least two workers were exposed to chemical vapors, and later several workers reported health effects they believed to be related to the spill. Retrieval operations for all single-shell tanks were suspended after the accident, and DOE did not resume operations until June 2008, a delay of 1 year, while the contractor cleaned up the spill and DOE and the contractor investigated the accident to evaluate the cause, the contractor’s response, and appropriate corrective action. DOE officials reported that the remaining 15 work stoppages resulted from concerns about construction quality and involved rework to address nuclear safety or technical requirements that had not been fully met, such as defective design, parts fabrication and installation, or faulty construction. For example: Outdated ground-motion studies supporting seismic design of the waste treatment plant. In 2002, the Defense Nuclear Facilities Safety Board began expressing concerns that the seismic standards used to design the waste treatment facilities were not based on the most current ground-motion studies and computer models or on the geologic conditions present directly beneath the construction site. After more than 2 years of analysis and discussion, DOE contracted for an initial seismic analysis, which confirmed the Defense Nuclear Facilities Safety Board’s concerns that the seismic criteria were not sufficiently conservative for the largest treatment facilities—the pretreatment facility and the high-level waste facility. Revising the seismic criteria caused Bechtel to recalculate thousands of engineering estimates and to rework thousands of design drawings to ensure that tanks, piping, cables, and other equipment in these facilities were adequately anchored. Bechtel determined that the portions of the building structures already constructed were sufficiently robust to meet the new seismic requirements. By December 2005, however, Bechtel estimated that engineering rework and other changes to tanks and other equipment resulting from the more conservative seismic requirement would increase project costs substantially and add as much as 26 months to the schedule. 
Ultimately, work on the two facilities was suspended for 2 years, from August 2005 until August 2007. About 900 workers were laid off as a result. DOE does not routinely collect or formally report information about work stoppages, in part because federal regulations governing contracts do not require contractors to track work stoppages and the reasons for them. While federal acquisition regulations do require that contractors implement a reliable cost-accounting system, the regulations do not require contractors to centrally collect information on the specific circumstances surrounding a work stoppage. Without a centralized system for collecting explanatory data on work stoppages, the majority of information DOE reported to us is based on contractors’ and DOE officials’ recollections of those events or on officials’ review of detailed logs maintained at each of the facilities. Officials expressed concern that systematically monitoring all work stoppages could send the message that work stoppages should be avoided, possibly hampering effective implementation of DOE’s integrated safety management policy. This policy explicitly encourages any employee to “stop work” to address conditions that raise safety concerns. Officials said they believe that work stoppages help bolster workplace safety and construction quality because work can be halted and corrective action taken before someone is seriously injured, property is seriously damaged, or poor workmanship has compromised the quality and functionality of a facility. Officials said that systematically monitoring all types of work stoppages could ultimately discourage workers from halting activities when unsafe conditions or construction problems emerge in the workplace. Under the terms of the cost-reimbursement contracts for the tank farms and the waste treatment plant, DOE generally pays the costs for corrective action or construction rework associated with temporary work stoppages and does not require the contractor to separately track these costs. Various categories of costs can be associated with work stoppages, with some easier to measure or separately identify than others. The category of costs related to correcting a problem that precipitates a work stoppage, such as the cost of investigating and cleaning up a hazardous waste spill or the cost of rework to address improper construction, is usually more easily measured. In contrast, lost productivity—expenditures for labor during periods workers were not fully engaged in productive work or the difference between the value of work that should have been accomplished and the value of work that was accomplished—is more difficult to quantify. Most of the work stoppages reported by DOE officials involved some corrective action or construction rework to address the problem precipitating the work stoppage. These are costs that tend to be easier to separately identify and track, and DOE has directed contractors to do so in certain instances, as it did for the July 2007 tank waste spill. For the work stoppages at the tank farms, corrective actions encompassed such activities as investigating and cleaning up the July 2007 spill, monitoring and testing vapors escaping from the tanks to determine the constituents, and training contractor employees on required new procedures or processes. 
For the work stoppages at the waste treatment plant, corrective actions at times involved retraining workers or developing new procedures to prevent future problems, although many of the work stoppages at the waste treatment plant involved construction rework. Construction rework can include obtaining new parts to replace substandard parts or labor and materials to undo installations or construction, followed by proper installation or construction—pouring new concrete, for example, or engineering and design work to address nuclear safety issues. The cost of lost productivity associated with a work stoppage can be more difficult to measure or separately identify, although under a cost-reimbursement contract, the government would generally absorb the cost. While no generally accepted means of measuring lost productivity exists, two methods have been commonly used. The first, a measure of the cost of idleness, or doing nothing, calculates the expense incurred for labor and overhead during periods that no productive work is taking place. These were the types of costs associated with a July 2004 suspension, or “stand-down,” of operations at the Los Alamos National Laboratory, where a pattern of mishaps led the contractor to stop most work at the facility for many months to address safety and security concerns. Laboratory activities resumed in stages, returning to full operations in May 2005. Although officials with both the National Nuclear Security Administration, which oversees the laboratory, and the Los Alamos contractor tried to measure lost productivity at the laboratory, each developed widely differing estimates—of $370 million and $121 million, respectively—partly because of difficulties measuring labor costs. According to DOE officials, when work stopped at the Hanford Site tank farms, CH2M Hill reassigned workers to other productive activities. Therefore, according to DOE officials, no costs of idleness were incurred as a result of those work stoppages. We were unable to verify, however, that tank farm workers had been reassigned to other productive work after the S-102 tank waste spill or during other tank farm work stoppages. During the period that work stopped on the pretreatment and high-level waste facilities of the waste treatment plant, in contrast, the contractor substantially reduced its workforce. According to Bechtel officials and documents, about 900 of 1,200 construction workers were laid off during the work stoppage, and the remaining workers were employed on the other facilities under construction. An alternative means of measuring lost productivity associated with suspension of work activities is to measure the value of work planned that should have been accomplished but was not. This method concentrates on the work that was not done, as opposed to the cost of paying workers to do little or nothing. This method of measuring lost productivity is typically undertaken as part of a formal earned value management system, a project management approach that combines the technical scope of work with schedule and cost elements to establish an “earned value” for a specific set of tasks. If the earned value of work accomplished during a given period is less than the earned value of work planned for that period, then a loss in productivity has occurred, and the cost is equal to the difference in value between planned and finished work. 
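The two measures of lost productivity described above reduce to simple calculations. The Python sketch below illustrates both: the cost of idleness (labor and overhead paid while no productive work occurs) and the earned value shortfall (the planned value of work for a period minus the value of work actually accomplished). The hours, rates, and dollar figures are hypothetical and are not drawn from the Hanford or Los Alamos contracts.

```python
def idleness_cost(idle_labor_hours, loaded_hourly_rate):
    """Method 1: cost of idleness, that is, labor and overhead paid
    while no productive work takes place."""
    return idle_labor_hours * loaded_hourly_rate

def earned_value_shortfall(planned_value, earned_value):
    """Method 2: earned value shortfall, that is, the value of work planned
    for a period minus the value of work actually accomplished
    (zero if the plan was met or exceeded)."""
    return max(planned_value - earned_value, 0.0)

# Hypothetical figures for a 1-month stoppage (illustrative only).
print(f"Idleness cost:          ${idleness_cost(16_000, 95.0):,.0f}")   # 100 workers x 160 hours
print(f"Earned value shortfall: ${earned_value_shortfall(1_200_000, 400_000):,.0f}")
```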
DOE officials were unable to provide this measure for the three work stoppages that had supporting documentation, partly because the analyses of productivity under earned value management techniques did not disaggregate activities in a manner that could capture the three work stoppages. For example, with regard to the tank farms, DOE measures the overall progress made on waste stabilization and retrieval for all 177 storage tanks in aggregate but does not measure the direct impact of setbacks at any one storage tank, such as the spill at tank S-102. The contracts for the tank farms and the waste treatment plant do not generally require the contractors to separately track costs associated with work stoppages. Contractors must use an accounting system adequate to allow DOE to track costs incurred against the budget in accordance with federal cost-accounting standards. These standards permit a contractor to establish and use its own cost-accounting system, as long as the system provides an accurate breakdown of work performed and the accumulated costs and allows comparisons against the budget for that work. For the tank farm and waste treatment plant contracts, the contractors must completely define a project by identifying discrete physical work activities, essentially the steps necessary to carry out the project. This “work breakdown structure” is the basis for tracking costs and schedule progress. Corrective action and rework associated with work stoppages are generally not explicitly identified as part of a project’s work breakdown structure, although these costs are generally allowable and contractors do not have to account for them separately. Despite the lack of a requirement to track costs associated with work stoppages, DOE and contractors sometimes do track these costs separately, as in the following three circumstances: DOE can request the contractor to separately track costs associated with corrective action when DOE officials believe it is warranted. DOE specifically asked CH2M Hill to separately track costs associated with addressing the July 2007 tank spill because of the potential impacts on tank farm operations, workers, and the environment and because of heightened public and media attention to the event. Contractors may voluntarily track selected costs associated with a work stoppage if they believe that a prolonged suspension of work will alter a project’s cost and schedule. Contractors may want to collect this information for internal management purposes or to request an adjustment of contract terms in the future. For example, Bechtel estimated costs for both redesign work and lost productivity resulting from a change in seismic standards for the waste treatment plant. DOE may require a contractor to track particular costs associated with investigating an incident that it believes may violate DOE nuclear safety requirements or the Atomic Energy Act of 1954, as amended (these violations are referred to as Price-Anderson Amendment Act violations). DOE’s Office of Enforcement notifies the contractor in a “segregation letter” that an investigation of the potential violation will be initiated and that the contractor must segregate, or separately identify, any costs incurred in connection with the investigation. These are not costs of corrective action or rework. The costs incurred in connection with the investigation are generally not allowable. Not all such investigations involve a work stoppage, however. 
Of the 31 work stoppages reported to us by DOE officials, costs are available only for the July 2007 spill at the tank farm, since DOE specifically required the contractor to separately identify and report those costs. The costs of that incident totaled $8.1 million and included expenditures for cleaning up contamination resulting from the spill, investigating the causes of the accident, investigating health effects of the accident on workers, administrative support, and oversight of remediation activities. These were all considered allowable costs, and DOE has reimbursed the contractor for them. Although a subsequent investigation took place to determine whether nuclear safety rules had been violated, the costs to participate in that investigation ($52,913) were segregated as directed by DOE’s Office of Enforcement and were not billed to the government. Although DOE officials said that none of the reported work stoppages involved lost-productivity costs, the work stoppage to address the tank spill could well contribute to delays and rising costs for tank waste retrieval activities over the long run. Given that DOE was emptying only about one tank per year when we reported on Hanford tanks in June 2008, the 1-year suspension of waste retrieval activities, without additional steps to recover lost time, may contribute to delayed project completion. Many factors already contribute to delays in emptying the tanks. DOE has acknowledged that it will not meet the milestones agreed to with Washington State and the Environmental Protection Agency in the Tri-Party Agreement. We found that DOE’s own internal schedule for tank waste retrieval, approved in mid-2007, reflects time frames almost 2 decades later than those in the agreement. Ultimately, delays contribute to higher costs because of ongoing costs to monitor the waste until it is retrieved, treated, and permanently disposed of, and estimated costs for tank waste retrieval and closure have been growing. DOE estimated in 2003 that waste retrieval and closure costs from 2007 onward—in addition to the $236 million already spent to empty the first seven tanks—would be about $4.3 billion. By 2006, this estimate had grown to $7.6 billion. Because of limitations in DOE’s reporting systems, however, we were unable to determine the specific effect of the tank spill on overall tank retrieval costs beyond the $8.1 million in corrective action costs. In addition, although specific costs were not available for the 2-year suspension of construction activities at two of the facilities in the waste treatment plant, we have previously reported on some of the potential impacts. In an April 2006 testimony, we reported on the many technical challenges Bechtel had encountered during design and construction of the waste treatment plant. These ongoing technical challenges included changing seismic standards that resulted in substantial reengineering of the design for the pretreatment and high-level waste facilities, problems at the pretreatment plant with “pulse jet mixers” needed to keep waste constituents uniformly mixed while in various tanks, and the potential buildup of flammable hydrogen gas in the waste treatment plant tanks and pipes. In December 2005, Bechtel estimated that these technical problems could collectively add nearly $1.4 billion to the project’s estimated cost. 
Under the cost-reimbursement contracts for the tank farms and the waste treatment plant, costs associated with work stoppages, such as the costs of corrective action or construction rework, generally are allowable costs. As such, DOE generally pays these costs, regardless of whether they are separately identified or whether they are included in the overall costs of work performed. Even though the contractors are being reimbursed for the costs associated with work stoppages, they can experience financial consequences, either through loss of performance fee or fines and penalties assessed by DOE or its regulators. For example, DOE may withhold payment of a performance award, called a fee, from contractors for failure to meet specified performance objectives or measures or to comply with applicable environmental, safety, and health requirements. The tank farm and waste treatment plant contractors both lost performance fee because of work stoppages as follows: For the July 2007 spill at the tank farms, under CH2M Hill’s “conditional payment of fee” provision, DOE reduced by $500,000 the performance fee the contractor could have earned for the year. In its memo to the contractor, DOE stated that the event and the contractor’s associated response were not consistent with the minimum requirement for protecting the safety and health of workers, public health, and the environment. Nevertheless, DOE did allow CH2M Hill to earn up to $250,000, or half the reduction amount, provided the contractor fully implement the corrective action plan developed after the accident investigation, with verification of these actions by DOE personnel. Bechtel also lost performance fee because of design and construction deficiencies at the waste treatment plant facilities and the 2-year delay on construction of the pretreatment and high-level waste facilities. Overall, DOE withheld $500,000 in Bechtel’s potential performance fee for failure to meet construction milestones. In addition, DOE withheld $300,000 under the “conditional payment of fee” provision in the contract after a number of serious safety events and near misses on the project. Furthermore, in addition to having potential fee reduced for safety violations and work stoppages, DOE and other federal and state regulators may also assess fines or civil penalties against contractors for violating nuclear safety rules and other legal or regulatory requirements. These fines and penalties are one of the categories of costs that are specifically not allowed under cost-reimbursement contracts, and these costs are borne solely by the contractor. For example, DOE’s Office of Enforcement can assess civil penalties for violations of nuclear safety and worker safety and health rules. Both contractors were assessed fines or civil penalties for the events associated with their work stoppages. Fines and penalties assessed against CH2M Hill for the July 2007 tank spill totaled over $800,000 and included (1) civil penalties of $302,500 assessed by DOE’s Office of Enforcement for violation of nuclear safety rules, such as long-standing problems in ensuring engineering quality and deficiencies in recognizing and responding to the spill; (2) a Washington State Department of Ecology fine of $500,000 for inadequacies in design of the waste retrieval system and inadequate engineering reviews; and (3) a fine of $30,800 from the Environmental Protection Agency for delays in notification of the event. 
The contractor was required to notify the agency within 15 minutes of the spill but instead took almost 12 hours. From March 2006 through December 2008, DOE’s Office of Enforcement issued three separate notices of violation to Bechtel, with civil penalties totaling $748,000. These violations of nuclear safety rules were associated with procurement and design deficiencies of specific components at the waste treatment plant. In its December 2008 letter to the contractor, DOE stated that significant deficiencies in Bechtel’s quality-assurance system represented weaknesses that had also been found in the two earlier enforcement actions. For the majority of DOE’s reported work stoppages, no supporting documentation was available to evaluate whether better oversight or regulation could have prevented them. For two incidents for which documentation was available—internal investigations and prior GAO work—a lack of oversight contributed to both. These two work stoppages occurred at the tank farms and the waste treatment plant, and both resulted from engineering-design problems. In a third case—efforts to address potentially hazardous vapors venting from underground waste storage tanks—DOE’s efforts to enforce worker protections were found to have been inadequate, although this lack of oversight does not appear to have directly caused the work stoppage associated with the vapors problem. Insufficient oversight was a factor in these three events as follows: Accidental spill of radioactive and chemical wastes at tank S-102. Specifically, the accident investigation report for the tank farm spill found that oversight and design reviews by DOE’s Office of River Protection failed to identify deficiencies in CH2M Hill’s tank pump system, which did not meet nuclear safety technical requirements. The Office of River Protection failed to determine that this pump system did not have a needed backflow device to prevent excessive pressure in one of the hoses serving a tank, ultimately causing it to fail and release waste, which then overflowed from the top of this tank and spilled to the ground. In addition, the investigation found that CH2M Hill failed to respond to the accident in a timely manner and failed to ensure that nuclear safety requirements had been met. Outdated ground-motion studies supporting seismic design of the waste treatment plant. Lax oversight was also a factor in a second event at the waste treatment plant. GAO in 2006 found that DOE’s failure to effectively implement nuclear safety requirements, including requirements that all waste treatment plant facilities would survive a potential earthquake, contributed substantially to delays and growing costs at the plant. The Defense Nuclear Facilities Safety Board first expressed concerns with the seismic design in 2002, believing that the seismic standards followed had not been based on then-current ground-motion studies and computer models or on geologic conditions directly below the waste treatment plant site. It took DOE 2 years to confirm that the designs for two of the facilities at the site—the pretreatment and the high-level waste facilities— were not sufficiently conservative. Revising the seismic criteria required Bechtel to recalculate thousands of design drawings and engineering estimates to ensure that key components of these facilities would be adequately anchored. Work was halted at the two facilities for 2 years as a result. Controlling worker exposure to tank farm vapors. 
In 2004, DOE’s then Office of Independent Oversight and Performance Assurance (today reorganized as DOE’s Office of Health, Safety, and Security) investigated vapor exposures at the Hanford tank farms and the adequacy of worker safety and health programs at the site, including the adequacy of DOE oversight. Investigators were unable to determine whether any workers had been exposed to hazardous vapors in excess of regulatory limits but found several weaknesses in the industrial hygiene (worker safety) program at the site, in particular, hazard controls and DOE oversight. According to the investigation, the Office of River Protection had not effectively overseen the contractor’s worker safety program; had failed to provide the necessary expertise, time, and resources to adequately perform its management oversight responsibilities at the tank farms; and had failed to ensure corrective action for identified problems. After the investigation, DOE stepped up its monitoring efforts at the tank farms, and the contractor provided tank farm workers with supplied air, an action that slowed or halted work at the tank farms for about 2 weeks while supplied air equipment was secured and workers were trained to use it. With regard to regulations, however, officials we interviewed from DOE, the Defense Nuclear Facilities Safety Board, and the Office of Inspector General said they did not believe that insufficient regulation was a factor in these two events. Officials from the Nuclear Regulatory Commission declined to comment on the sufficiency of regulations. The final cost to the American public of cleaning up the Hanford Site is expected to reach tens of billions of dollars. Consequently, factors that can potentially escalate costs—including work stoppages—matter to taxpayers, DOE, and Congress. Depending on what causes a work stoppage and how long it lasts, some stoppages could increase already substantial cleanup costs. Although prudent oversight would seem to call for DOE to understand the reasons for work stoppages and the effects of these work stoppages on costs, neither law nor regulation requires that this information be systematically recorded and reported. DOE and other stakeholders have expressed reservations that collecting information on work stoppages could send a message that work stoppages should be minimized, thus discouraging managers or workers from reporting potential safety or construction quality issues. We recognize that the opportunity for any manager or worker to call a work stoppage when worker safety or construction quality is at stake is an integral part of DOE’s safety and construction management strategies and should not be stifled. Yet DOE has also recognized the importance of cost information and in one recent case—the 2007 tank waste spill—required the contractor to separately track detailed cost information. In addition, we previously recommended that DOE require contractors to track the costs associated with future work stoppages, similar to the one at Los Alamos National Laboratory in 2004, and DOE agreed with this recommendation. While acknowledging these competing pressures, we believe that systematically collecting cost information on selected work stoppages can increase transparency and yet balance worker and public safety. 
To provide a more thorough and consistent understanding of the potential effect of work stoppages on project costs, we recommend that the Secretary of Energy take the following two actions: (1) establish criteria for when DOE should direct contractors to track and report to DOE the reasons for and costs associated with work stoppages, ensuring that these criteria fully recognize the importance of worker and nuclear safety, and (2) specify the types of costs to be tracked. We provided a draft of this report to the Secretary of Energy for review and comment. In written comments, the Chief Operations Officer for Environmental Management generally agreed with our recommendations, stating that they will be accepted for implementation within the Environmental Management program. The comments (which are reproduced in app. III) were silent on whether the recommendations will be implemented in other DOE programs. In its comments, DOE expressed concern that readers of appendix II could misconstrue the information in the column labeled “Duration” as representing a delay in the entire listed project, not simply the time required to resolve the specific issue in question; DOE maintains that during this time, workers were shifted to other work activities. We found, however, that some of the short work stoppages, which DOE termed “safety pauses,” were specifically called to allow the contractor to refresh workers’ understanding of workplace hazards; in these cases, which were essentially training exercises, workers were not reassigned to other work activities. Other work stoppages may have led to workers’ assignment to other activities, but we were unable to verify to what extent reassignment occurred because the documentation available on work stoppages was limited. Finally, during the 2-year delay due to seismic concerns in waste treatment plant construction, work on two facilities—the pretreatment plant and high-level waste facility—was ultimately suspended from August 2005 until August 2007, and about 900 workers were laid off, not reassigned. We added a footnote to table 1 to clarify the “Duration” column. Regarding our discussion of the role of oversight in several work stoppages, DOE acknowledged that inadequate oversight was a factor in the cited work stoppages and stated that the Office of Environmental Management has implemented corrective actions to address these contributing factors. Evaluating these actions and the resulting outcomes, if any, however, was beyond the scope of our report. We incorporated other technical comments in our report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Energy and interested congressional committees. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
To determine the number of times work was suspended at the Hanford Site, we obtained from the Department of Energy’s (DOE) Office of River Protection officials a listing of work stoppages occurring from January 2000 through December 2008 at either the waste treatment plant or the tank farms. We did not review other work stoppages that may have occurred elsewhere at the Hanford Site during this period. We sought to independently verify the 31 work stoppages identified by DOE and to uncover additional information about them, including the nature of the event and the duration and scope of each, by reviewing the following:
DOE’s Occurrence Reporting and Processing System, a database of reportable accidents and other incidents affecting worker, public, and environmental safety;
DOE’s database of investigation reports on accidents causing serious injury to workers or serious damage to the facility or the environment;
DOE citations issued against contractors for violating nuclear safety rules;
Defense Nuclear Facilities Safety Board reports addressing the Hanford Site; and
Bechtel National Inc. and CH2M Hill Hanford Group Problem Evaluation Requests, internal reports of incidents or accidents involving safety issues.
We were unable to independently verify DOE’s list of work stoppages from these sources, however, because in most cases, the reporting systems did not indicate whether safety incidents had halted work or, if so, for how long. In addition, these reporting systems focus on safety incidents and do not specifically address construction rework and design problems, which represent about half the work stoppages reported by DOE. Of the 31 work stoppages reported, however, we were able to obtain additional information from other sources for three specific events. These were (1) ongoing problems protecting workers from potentially harmful vapors venting from the tank farms, (2) a radioactive waste spill from tank S-102 in July 2007, and (3) the seismic redesign from August 2005 to August 2007 of the waste treatment plant pretreatment and high-level waste facilities. To obtain a more thorough understanding of these three work stoppages, what caused them, and how problems were corrected, we reviewed DOE, contractor, and Office of the Inspector General evaluations of these events, including official accident reports, external independent investigations, and our 2006 testimony on cost and schedule problems at the Hanford waste treatment plant. To determine the types of costs associated with work stoppages, we reviewed Federal Acquisition Regulation reporting requirements for cost-reimbursement contracts and Defense Contract Audit Agency guidance on auditing incurred costs. To gain a better understanding of the costs associated with lost productivity resulting from a work stoppage, we reviewed cost-estimating guidance from the Association for the Advancement of Cost Engineering International and earned value management guidance by GAO and by the National Research Council. To develop an understanding of the costs paid by the government, compared with those absorbed by the contractor, we reviewed Bechtel National Inc. and CH2M Hill Hanford Group requests to DOE for equitable adjustments to their respective contracts to recover lost productivity and other costs linked to work stoppages. 
We reviewed the Atomic Energy Act of 1954, as amended, and the letters sent from DOE to contractors requesting that they segregate costs incurred in connection with investigations of potential violations of the law and DOE nuclear safety requirements. We reviewed assessments by Washington State, DOE, and federal regulators fining Bechtel and CH2M Hill Hanford Group for safety violations and other problems at the Hanford Site since 2000. Finally, we interviewed contractor and Office of River Protection finance officials to determine cost-accounting requirements and practices. To determine whether more-effective regulation or oversight might have prevented the work stoppages, we relied primarily on Office of River Protection and Bechtel officials’ assessments of these events because supporting documentation was generally unavailable. For 3 of the 31 work stoppages, we reviewed numerous internal DOE, external independent, and contractor evaluations to assess whether lack of oversight was a contributing factor. To gain further perspective on how lack of oversight or regulations might have played a role in these work stoppages, we interviewed DOE headquarters officials with the Offices of Environmental Management; Health, Safety, and Security; and General Counsel. We interviewed officials with regulatory and oversight entities, including the Defense Nuclear Facilities Safety Board, the Occupational Safety and Health Administration, and the Nuclear Regulatory Commission. We also interviewed union representatives at the Hanford Site to obtain the union’s and workers’ perspectives on work stoppages and safety. We conducted this performance audit from June 2008 to April 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We obtained and reviewed information on 31 work stoppages that occurred at the Hanford Site from January 2000 to December 2008; these are summarized in table 1. In addition to the individual named above, Janet Frisch, Assistant Director; Carole Blackwell; Ellen W. Chu; Brenna McKay; Mehrzad Nadji; Timothy M. Persons, Chief Scientist; Jeanette Soares; Ginny Vanderlinde; and William T. Woods made key contributions to this report.
The Department of Energy's (DOE) Hanford Site in Washington State stores 56 million gallons of untreated radioactive and hazardous wastes resulting from decades of nuclear weapons production. DOE is constructing facilities at the site to treat these wastes before permanent disposal. As part of meeting health, safety, and other standards, work at the site has sometimes been suspended to address safety or construction quality issues. This report discusses (1) work stoppages from January 2000 through December 2008 and what is known about them, (2) the types of costs associated with work stoppages and who paid for them, and (3) whether more effective regulation or oversight could have prevented the work stoppages. GAO interviewed knowledgeable DOE and contractor officials about these events. When documentation was available, GAO obtained DOE and contractor accident and safety incident reports, internal DOE and independent external evaluations, and costs. DOE officials reported that from January 2000 through December 2008, activities to manage hazardous wastes stored in underground tanks and to construct a waste treatment facility have been suspended at least 31 times to address safety concerns or construction quality issues. Federal regulations governing contracts do not require contractors to formally report work stoppages and the reasons for them, and DOE does not routinely collect information on them. As a result, supporting documentation on work stoppages was limited. DOE reported that work stoppages varied widely in duration, with some incidents lasting a few hours, and others lasting 2 years or more. Officials reported that about half the work stoppages resulted from concerns about worker or nuclear safety and included proactive safety "pauses," which typically were brief and taken to address an unsafe condition that could potentially harm workers. The remainder of the work stoppages occurred to address concerns about construction quality at the waste treatment plant. Under the terms of the cost-reimbursement contracts for managing the tanks and constructing the waste treatment plant, DOE generally pays all costs associated with temporary work stoppages and does not require the contractor to separately track these costs, although DOE and the contractors do track some costs under certain circumstances. For example, the costs for cleaning up, investigating, and implementing corrective actions were collected for a July 2007 hazardous waste spill at one of the tank farms; these costs totaled over $8 million. The contractors, too, can face financial consequences, such as reduction in earned fee or fines and penalties assessed by DOE or outside regulators. For example, DOE may withhold payment of a performance award, called a fee, from contractors for failure to meet specified performance objectives or to comply with applicable environmental, safety, and health requirements. For the majority of DOE's reported work stoppages, supporting documentation was not available to evaluate whether better oversight or regulation could have prevented them. For 2 of 31 work stoppages where some information was available--specifically, accident investigations or prior GAO work--inadequate oversight contributed to the work stoppages. For example, the accident investigation report for the tank farm spill found that oversight and design reviews by DOE's Office of River Protection failed to identify deficiencies in the tanks' pump system design, which did not meet nuclear technical safety requirements. 
Similarly, in 2006, GAO found that DOE's failure to effectively implement nuclear safety requirements contributed substantially to schedule delays and cost growth at Hanford's waste treatment plant. With regard to regulations, however, officials from DOE, the Defense Nuclear Facilities Safety Board, and DOE's Office of Inspector General said they did not believe that insufficient regulation was a factor in these events.
DON’s primary mission is to organize, train, maintain, and equip combat-ready naval forces capable of winning the global war on terror and any other armed conflict, deterring aggression by would-be foes, preserving freedom of the seas, and promoting peace and security. To support this mission, DON performs a variety of interrelated and interdependent business functions (e.g., acquisition and financial management), relying heavily on IT systems. In fiscal year 2008, DON’s IT budget was about $2.7 billion, of which $2.2 billion was allocated to operations and maintenance of existing systems and the remaining $500 million to systems in development and modernization. Of the approximately 3,000 business systems that DOD reports in its current inventory, DON accounts for 904, or about 30 percent, of the total. The Navy Cash system is one such system investment. In 2001, DON initiated Navy Cash in partnership with Treasury’s FMS to enable sailors and marines to use smart cards that store monetary value, also known as stored value cards, to make retail purchases and conduct banking transactions while on ships and ashore. The program builds upon capabilities that have been incrementally introduced from previously deployed systems. (Table 1 summarizes these systems and their capabilities and limitations.) According to DOD, Navy Cash’s key objectives include introducing workload efficiencies and improving the quality of life for sailors and marines by reducing the amount of currency on ships, which lowers costs associated with cash handling activities; enabling sailors and marines to conduct ashore banking transactions from the ship; and enabling sailors and marines to conduct banking or retail transactions while ashore (wherever these branded debit cards are accepted). Navy Cash consists of various equipment and devices, including servers that connect to the ship’s local area network as well as point-of-sale terminals and ATMs that communicate with Navy Cash smart cards. These cards contain an electronic chip that stores monetary value and interacts with the various devices for conducting electronic retail purchases and personal banking transactions on the ships. On shore, cardholders can access their Navy Cash accounts via ATMs worldwide or conduct retail purchases using the card’s magnetic stripe, which provides a debit card feature. According to program officials, while ashore, sailors and marines have access to over 1 million ATMs and 23 million merchants worldwide. Navy Cash uses a ship’s Automated Digital Network System to access satellite communications systems, and then transmits transaction files off the ship through fleet network operations centers to a financial agent (i.e., bank) ashore. To do so, it uses a store-and-forward process to batch transactions together and transmit them off the ship typically during non-peak evening hours. These transactions are then processed in a manner similar to personal check processing through the Automated Clearing House. Figure 1 is a simplified illustration of the Navy Cash network used to transmit these transactions. Originally, the program was expected to be fully deployed and reach full operational capability by December 2008 at an estimated cost of about $100 million over a 6-year life cycle. The program office now expects the program to reach full operational capability in fiscal year 2011, and it estimates the program’s 14-year life cycle cost to be about $320 million, of which about $100 million is to be funded by FMS. 
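To illustrate the store-and-forward concept described above, the Python sketch below models how shipboard transactions could be held locally and released as a single batch during an off-peak transmission window. It is a conceptual illustration only; the class, field names, and the specific transmission window are our own assumptions and do not represent the actual Navy Cash software.

```python
from datetime import datetime, time

class StoreAndForwardQueue:
    """Minimal model of batching shipboard transactions for off-peak transmission.

    Transactions are held locally (the "store" step) and released as a single
    batch (the "forward" step) only during a low-traffic transmission window.
    """

    def __init__(self, window_start=time(1, 0), window_end=time(4, 0)):
        self.pending = []
        self.window_start = window_start
        self.window_end = window_end

    def record(self, card_id, amount, kind):
        # Store the transaction until the next transmission window.
        self.pending.append({"card": card_id, "amount": amount, "kind": kind})

    def forward_batch(self, now=None):
        # Transmit the accumulated batch only during the off-peak window.
        current = (now or datetime.utcnow()).time()
        if self.window_start <= current <= self.window_end and self.pending:
            batch, self.pending = self.pending, []
            return batch  # in practice, the batch file would be sent ashore for settlement
        return []  # outside the window (or nothing pending): keep storing

queue = StoreAndForwardQueue()
queue.record("card-001", 4.25, "vending purchase")
queue.record("card-002", 200.00, "funds transfer")
print(queue.forward_batch(now=datetime(2008, 4, 1, 2, 30)))  # inside the window: batch released
```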
Of the $320 million, about $136 million is for development and modernization, and about $184 million is for operations and maintenance. From fiscal year 2002 to 2007, DON and FMS reported that approximately $132 million has been spent on the program, of which $47 million is FMS’s cost. Of the $188 million expected to be spent (fiscal years 2008-2015), about $57 million is for development and modernization. (See fig. 2 for a breakdown of the actual and planned costs.) When fully deployed, the program office estimates that Navy Cash could process over $350 million annually in transactions initiated by about 170,000 sailors and marines worldwide on approximately 160 ships. As of April 2008, the program has been deployed to approximately 130 ships. To manage the acquisition and deployment of Navy Cash, DON established a program management office within the Naval Supply Systems Command (NAVSUP). As authorized by statute and because of its experience in developing stored value card programs for other military departments, NAVSUP has partnered with FMS to develop Navy Cash. In February 2001, NAVSUP and FMS signed a memorandum of agreement that, among other things, delineated their respective program roles and responsibilities. According to the agreement, NAVSUP, through the Navy Cash program office, is responsible for managing the acquisition of the program, including managing system requirements and developing program cost and benefit estimates. According to DOD and other relevant guidance, acquisition management includes, among other things, such key IT management control areas as architectural alignment, economic justification, requirements management, risk management, security management, and system quality measurement. Also according to the agreement, FMS, through a designated financial agent, is to (1) provide for all financial services (i.e., manage the funds distributed through Navy Cash) and (2) develop, test, operate, and maintain the system’s software (e.g., terminal and accounting applications) and hardware (e.g., accounting servers, smart cards). In short, the financial agent acts as the depository bank, holding and managing the pool of sailor and marine funds, including accounting for the funds and settling transactions processed. FMS is also responsible for tracking and overseeing the financial agent’s provision of services, as defined in a financial agency agreement between FMS and the agent. (See fig. 3 for DON and FMS roles and relationships for Navy Cash.) In addition, various other organizations share program oversight and review activities. A listing of key entities and their roles and responsibilities can be found in table 2. Effective IT management controls are grounded in tried and proven methods, processes, techniques, and activities that organizations define and use to minimize program risks and maximize the chances of a program’s success. Using such best practices can result in better outcomes, including cost savings, improved service and product quality, and a better return on investment. For example, two software engineering analyses of nearly 200 systems acquisitions projects indicate that teams using systems acquisition best practices produced cost savings of at least 11 percent over similar projects conducted by teams that did not employ the kind of rigor and discipline embedded in these practices. 
In addition, our research shows that best practices are a significant factor in successful acquisition outcomes, including increasing the likelihood that programs and projects will be executed within cost and schedule estimates. We and others have identified and promoted the use of a number of best practices associated with acquiring IT systems. See table 3 for a description of several of these activities. We have previously reported that DOD has not effectively managed a number of business system investments. Among other things, our reviews of individual system investments have identified weaknesses in such things as architectural alignment and informed investment decision making, which are also the focus areas of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 business system provisions. Our reviews have also identified weaknesses in other system acquisition and investment management areas—such as economic justification, requirements management, and risk management. Recently, for example, we reported that the Army’s approach for investing about $5 billion over the next several years in its General Fund Enterprise Business System, Global Combat Support System-Army Field/Tactical, and Logistics Modernization Program did not include alignment with Army enterprise architecture or use of a portfolio-based business system investment review process. Moreover, we reported that the Army did not have reliable processes, such as an independent verification and validation function, or analyses, such as economic analyses, to support its management of these programs. We concluded that until the Army adopts a business system investment management approach that provides for reviewing groups of systems and making enterprise decisions on how these groups will collectively interoperate to provide a desired capability, it runs the risk of investing significant resources in business systems that do not provide the desired functionality and efficiency. Accordingly, we made recommendations aimed at improving the department’s efforts to achieve total asset visibility and enhancing its efforts to improve its control and accountability over business system investments. The department agreed with our recommendations. We also reported that DON had not, among other things, economically justified its ongoing and planned investment in the Naval Tactical Command Support System (NTCSS) and had not invested in NTCSS within the context of a well-defined DOD or DON enterprise architecture. In addition, we reported that DON had not effectively performed key measurement, reporting, budgeting, and oversight activities, and had not adequately conducted requirements management and testing activities. We concluded that without this information, DON could not determine whether NTCSS as defined, and as being developed, is the right solution to meet its strategic business and technological needs. Accordingly, we recommended that the department develop the analytical basis to determine if continued investment in NTCSS represents prudent use of limited resources and to strengthen management of the program, conditional upon a decision to proceed with further investment in the program. The department largely agreed with these recommendations. 
In addition, we reported that the Army had not defined and developed its Transportation Coordinators’ Automated Information for Movements System II—a joint services system with the goal of helping to manage the movement of forces and equipment within the United States and abroad— in the context of a DOD enterprise architecture. We also reported that the Army had not economically justified the program on the basis of reliable estimates of life cycle costs and benefits and had not effectively implemented risk management. As a result, we concluded that the Army did not know if its investment in this program, as planned, is warranted or represents a prudent use of limited DOD resources. Accordingly, we recommended that DOD, among other things, develop the analytical basis needed to determine if continued investment in this program, as planned, represents prudent use of limited defense resources. In response, the department largely agreed with our recommendations, and has since reduced the program’s scope by canceling planned investments. DOD acquisition policies and related federal guidance provide a framework within which to manage system investments, like Navy Cash. Effective implementation of this framework can minimize program risks and better ensure that system investments are defined in a way to optimally support mission operations and performance, as well as deliver promised system capabilities and benefits on time and within budget. Thus far, key IT management controls associated with this framework have not been implemented on Navy Cash. In particular, the program’s overlap with and duplication of other DOD programs has not been assessed, and the program has not been economically justified on the basis of reliable estimates of life cycle costs and benefits. As a result, the program, as defined, has not been shown to be the most cost-effective investment option. Even if investment in the proposed Navy Cash solution is shown to be a wise and prudent course of action, the manner in which Navy Cash is being acquired and deployed is not adequate because (1) requirements have not been adequately developed and managed; (2) program risks have not been effectively managed; (3) security has not been effectively managed; and (4) system quality has not been adequately measured. As a result, the system will likely experience performance shortfalls and cost more and take longer to implement and maintain than necessary. Program officials acknowledged these weaknesses and attributed them to, among other things, turnover of staff in key positions and their focus on deploying the system. Further, they stated that addressing these weaknesses has not been a top program priority because Navy Cash has been deployed to and is operating on about 80 percent of the ships. Nevertheless, about $60 million in development and modernization funding remains to be spent on this program. As a result, it is important that all these weaknesses be addressed to reduce the risk of delivering a system solution that falls short of expectations. Investment in the proposed Navy Cash solution has not been adequately justified. Specifically, the system solution has not been assessed relative to other DOD programs that employ smart cards for electronic retail transactions. Moreover, it has not been economically justified on the basis of reliable estimates of cost and benefits over the system’s expected life. As a result, planned investment in the system, as defined, may not be a cost-effective course of action. 
DOD’s acquisition policies and guidance, as well as federal and best practice guidance, recognize the importance of investing in business systems within the context of an enterprise architecture. Moreover, the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 requires that defense business systems be compliant with the federated BEA. Our research and experience in reviewing federal agencies show that making investments without the context of a well- defined enterprise architecture often results in systems that are, among other things, duplicative of other systems. Navy Cash has not been assessed and defined in a way to ensure that it is not duplicative of the Eagle Cash and EZpay programs, both of which provide for the use of smart card technology for electronic retail transactions in support of the Air Force and the Army. Within DOD, the means for avoiding business system duplication and overlap is the department’s process for assessing compliance with the DOD BEA and its associated investment review and decision making processes. In 2005, 2006, and 2007, Navy Cash was evaluated for compliance with the BEA. However, the BEA does not contain business activities that Navy Cash supports. According to officials from DOD’s Business Transformation Agency, which is responsible for DOD’s BEA, these business activities are not included nor are they planned for inclusion in the BEA, because the capabilities provided by Navy Cash relate strictly to personal banking, which is outside of the current scope of the BEA. As a result, compliance could not be assessed beyond concluding that Navy Cash was compliant because it did not conflict with the BEA. Moreover, even if the BEA included the business activities that Navy Cash supports, the program’s ability to assess BEA compliance would have been limited because the program office did not develop a complete set of system-level architecture products needed to perform a meaningful compliance assessment. Thus, Navy Cash’s potential overlap and duplication with similar programs is not sufficiently understood. According to program officials, Navy Cash is not duplicative of Eagle Cash and EZpay because it is designed to operate on ships at sea, which do not maintain constant network connectivity with on shore networks. Therefore, they said that it requires different communications and financial transaction capabilities than the other two stored value card programs. We agree that there are important differences between the programs. However, they all perform chip-based financial transactions, and thus opportunities may exist for them to provide or reuse shared system services, as well as to merge into a DOD-wide stored value card program. According to program officials, overlap and duplication among the programs was not assessed. This means that aspects of Navy Cash could be potentially duplicative of these other programs, and thus DOD may not be pursuing the most cost-effective solution to meet its mission needs. In this regard, the program’s Milestone Decision Authority told us that the differences between Navy Cash and other stored value card programs are minimal and stated that officials with the three stored value card programs have recently begun discussions with FMS on how to collaborate and possibly move towards one system solution. Investment in Navy Cash has not been economically justified on the basis of a reliable analysis of estimated system costs and expected benefits over the life of the program. 
Specifically, according to the latest economic analysis, the program is expected to produce estimated benefits of about $133 million for an estimated cost of about $100 million. However, the cost estimate is not reliable, because the program’s 2002 economic analysis is 6 years old and is based on a cost estimate of about $100 million that was not derived in accordance with effective estimating practices, such as including all costs over the system’s life cycle, and adjusting the estimate to account for program risks and material program changes. Further, this economic analysis did not comply with applicable federal guidance. For example, it did not adequately consider all relevant alternatives, and it erroneously counted $40 million as cost savings rather than transfers (i.e., shift of control over spending of resources from one group to another that do not result in an economic gain). Further, the economic analysis has yet to be validated using actual data on the accrual of benefits. Without an economic analysis that is reliable, DON’s ongoing and planned investment in Navy Cash lacks justification as a cost-effective course of action. Economic Analysis Used a Cost Estimate That Omits Relevant Costs and Was Not Derived Using Key Estimating Practices A reliable cost estimate is an essential element for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction, and accountability for results. According to the Office of Management and Budget (OMB), programs must maintain current and well-documented estimates of program costs, and these estimates must span the full expected life of the program. Without reliable estimates, programs cannot be adequately justified on the basis of reliable costs and benefits and they are at increased risk of experiencing cost overruns, missed deadlines, and performance shortfalls. Our research has identified a number of best practices for effective program cost estimating, and we have issued guidance that associates these practices with four characteristics of a reliable cost estimate. Specifically, estimates need to be: Comprehensive: The cost estimates should include both government and financial agent costs over the program’s full life cycle, from the inception of the program through design, development, deployment, and operation and maintenance to retirement. They should also provide a level of detail appropriate to ensure that cost elements are neither omitted nor double counted, and include documentation of all cost-influencing ground rules and assumptions. Well-documented: The cost estimates should have clearly-defined purposes, and be supported by documented descriptions of key program or system characteristics (e.g., relationships with other systems, performance parameters). Additionally, they should capture in writing such things as the source data used and their significance, the calculations performed and their results, and the rationale for choosing a particular estimating method or reference. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to, and verified against, their sources. Accurate: The cost estimates should provide for results that are unbiased and not be overly conservative or optimistic (i.e., should represent the most likely costs). 
In addition, the estimates should be updated regularly to reflect material changes in the program, and steps should be taken to minimize mathematical mistakes and their significance. The estimates should also be grounded in a historical record of cost estimating and actual experiences on comparable programs. Credible: The cost estimates should discuss any limitations in the analysis performed that are due to uncertainty or biases surrounding data or assumptions. Further, the estimates’ derivation should provide for varying any major assumptions and recalculating outcomes based on sensitivity analyses, and the estimates’ associated risks and inherent uncertainty should be disclosed. Also, the estimates should be verified based on cross- checks using other estimating methods. The $100 million life cycle cost estimate, as documented in the program’s 6-year old economic analysis, does not reflect many of the practices associated with a reliable cost estimate, including several practices related to being comprehensive and well documented, and all related to being accurate and credible (see table 4). The cost estimate of about $100 million, as documented in the program’s 2002 economic analysis, does not meet all of the practices related to being comprehensive. Specifically, it only includes costs from fiscal years 2003 through 2008 (6-year period), and it does not include both the government and financial agent costs associated with development, acquisition (non- development), implementation, and operations and support over the system’s life cycle. Moreover, it does not include FMS’s portion of the program’s cost, which is estimated to be about $100 million over a 14-year period. In addition, the cost estimate does not clearly describe how the various cost sub-elements are aggregated to produce the amounts associated with the two documented cost categories, system installation costs, and operations and maintenance costs. Therefore, it is not clear that all pertinent costs are included and no costs are double counted. Lastly, although some key assumptions have been identified, such as the ship implementation schedule, other key assumptions, such as labor rates and inflation rates, are not. As a result, the estimate cannot be considered comprehensive. The cost estimate used in the economic analysis also addresses some, but not all, of the practices related to being well-documented. Specifically, the purpose of the cost estimate was clearly defined and a technical baseline has been documented that includes, among others things, the hardware and software specifications and planned performance parameters. However, the calculations used to derive the cost estimate, including descriptions of the methodologies used and traceability back to source data (e.g., vendor quotes, salary data), are not documented. In addition, while program officials described the estimating approach used, such as using market research and historical data to determine the costs associated with hardware, software, and installations, they did not have documentation of the methodology used to arrive at the total costs of each of these elements and how they were combined to produce the overall cost estimate. Therefore, the program’s cost estimate cannot be considered well-documented. In addition, the $100 million documented cost estimate lacks accuracy because it does not reflect an assessment of the costs most likely to be incurred. Specifically, this estimate covers only 6 years of costs (fiscal years 2003 through 2008). 
In contrast, the program’s current cost estimate is about $320 million over a 14-year life cycle, and according to program officials, the program’s life cycle is being reexamined and will likely be extended. Lastly, the $100 million cost estimate is not credible because a complete uncertainty analysis (i.e., both a sensitivity analysis and a Monte Carlo simulation) was not performed on this estimate. A sensitivity analysis reveals how the cost estimate is affected by a change in a single assumption or cost driver, such as the ship installation schedule, while holding all other parameters constant. A Monte Carlo simulation assesses the aggregate variability of the cost estimate to determine a confidence range around the estimate. Without such analyses of uncertainty, the program office cannot have confidence that the program can be completed within the cost estimate. Program officials acknowledged the limitations in the estimate, and attributed them to turnover of staff and their current focus on deploying the system. Nevertheless, program officials stated that they intend to develop a revised cost estimate when they update the program’s economic analysis, but they had yet to establish a date for accomplishing this. Given that a significant amount of development and modernization funding remains to be invested on the program, it is important that the program office economically justify such investment. Economic Analysis Does Not Satisfy Other Relevant Guidance According to OMB, economic analyses should meet certain criteria to be considered reasonable, such as comparing alternatives on the basis of net present value and conducting an uncertainty analysis of benefits. The program’s December 2002 economic analysis meets one, does not meet four, and partially meets two of the seven OMB criteria governing how to perform such analyses. For example, while the analysis explained why the investment is needed, it did not consider the costs and benefits associated with at least three alternatives to the status quo, such as Eagle Cash, EZpay, or some derivative that provided for reuse of shared services among the programs. Moreover, at least three alternatives to the status quo were not assessed on the basis of net present value, using the proper discount rate to account for inflation. Instead, the analysis only qualitatively evaluated Navy Cash against its predecessor systems. For example, the analysis included evaluation of the capabilities and limitations of the predecessor systems, but did not include evaluating the relative cost and benefits of any alternatives to Navy Cash. In addition, the program’s benefit projections erroneously counted about $40 million in cost transfers as cost savings, thus overstating projected benefits (i.e., projected benefits should only be $93 million). Transfers represent shifts of control over the spending of resources from one group to another and thus do not result in an economic gain. According to OMB guidance, transfers do not produce economic gains because the benefits to those government entities that receive such a transfer are the same as the costs borne by those government entities that provide the transfer. Moreover, no uncertainty analysis was performed on the benefit estimates. (See table 5 for the results of our analyses relative to each of the seven criteria.) Program officials stated that they do not know why the economic analysis was not developed in accordance with OMB guidance. 
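The sensitivity and Monte Carlo analyses described above can be illustrated with a minimal sketch; the cost elements, ranges, and triangular distributions below are hypothetical and are not taken from the Navy Cash estimate. The sensitivity function varies a single cost driver while holding the others constant, and the Monte Carlo routine aggregates random draws to produce a confidence range around the total.

```python
import random
import statistics

# Hypothetical cost elements (in millions) with low / most-likely / high values.
ELEMENTS = {
    "hardware_and_installation": (30, 40, 55),
    "software_maintenance": (25, 35, 50),
    "operations_support": (60, 80, 110),
}

def point_estimate() -> float:
    # Sum of most-likely values for each cost element.
    return sum(ml for _, ml, _ in ELEMENTS.values())

def sensitivity(element: str, new_value: float) -> float:
    """Vary a single cost driver while holding all other parameters constant."""
    total = 0.0
    for name, (_, ml, _) in ELEMENTS.items():
        total += new_value if name == element else ml
    return total

def monte_carlo(trials: int = 10_000, seed: int = 1):
    """Draw each element from a triangular distribution and aggregate the draws,
    yielding a confidence range around the total cost estimate."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(low, high, ml) for low, ml, high in ELEMENTS.values())
        for _ in range(trials)
    )
    return totals[int(0.10 * trials)], statistics.median(totals), totals[int(0.90 * trials)]

print(f"Point estimate: {point_estimate()}M")
print(f"If operations_support rises to 110M: {sensitivity('operations_support', 110)}M")
low, med, high = monte_carlo()
print(f"80 percent confidence range: {low:.0f}M to {high:.0f}M (median {med:.0f}M)")
```

In practice, the ranges and distributions used in such an analysis would be grounded in documented ground rules, assumptions, and historical data from comparable programs.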
They also stated that they intend to update the economic analysis and, in doing so, intend to address OMB guidance. However, they did not have a date for accomplishing this because their priority is deploying the system. Actual Accrual of Estimated Benefits Has Not Been Validated The Clinger-Cohen Act of 1996 and OMB guidance emphasize the need to develop information to ensure that IT investments are actually contributing to tangible, observable improvements in mission performance. DOD guidance also states that estimated benefits should be validated to ensure that desired outcomes are being achieved. To this end, agencies should define and collect metrics to determine whether expected benefits from a given investment are being accrued, and they should modify subsequent economic analyses to reflect the lessons learned. Despite the fact that Navy Cash has been installed and is operating on approximately 130 ships, DON has yet to determine whether the system is actually producing expected benefits. For example, the 2002 economic analysis stated that Navy Cash would reduce cash on ships, and contribute to man-hour savings as a result of increased productivity. It also stated that it would improve quality-of-life for sailors and marines. While DON has measured the reduction in the cash onboard some ships where Navy Cash is operating, this reduction represents a transfer and is not an actual benefit. Moreover, the extent to which the system is achieving expected man-hour savings, which would constitute a true benefit, has not been measured. Lastly, customer (sailor and marine) satisfaction with the system, which is a legitimate qualitative benefit, has not been determined since a prototype of Navy Cash was installed on two ships in 2001. Program officials stated that DON’s Manpower Analysis Center is responsible for measuring man-hour savings. Further, they said that customer satisfaction with the system was being measured through informal feedback from the sailors and marines, and they recently began a more formal customer satisfaction survey. They also stated that in updating the economic analysis, they plan to assess and reflect the accrual of actual benefits. However, they had not established a date for accomplishing this. DOD policy and related guidance recognizes the importance of implementing a range of management controls associated with ensuring that IT investments are defined, developed, deployed, and operated efficiently and effectively. By implementing these controls, the chances of delivering systems that perform as intended, and not costing more or taking longer than necessary, are increased. These controls include requirements development and management, risk management, security management, and system quality measurement. For Navy Cash, none of these controls have been effectively implemented. Specifically, program requirements have not been adequately developed and managed; program risks have not been effectively managed; security has not been adequately managed; and data needed to measure two aspects of system quality—trends in unresolved change requests and evaluation of user satisfaction with the system—have not been collected and used. As a result, Navy Cash is unlikely to perform in a manner that meets user and operational needs, and it is likely to cost more and take longer than necessary. Well-defined and managed requirements are recognized by DOD guidance and relevant best practices as essential, and can be viewed as a cornerstone of effective system acquisition. 
Effective requirements development and management includes (1) developing detailed system requirements; (2) establishing policies and plans for managing changes to requirements, including defining roles and responsibilities, and identifying how the integrity of a baseline set of requirements will be maintained; and (3) maintaining bi-directional requirements traceability, meaning that system-level requirements can be traced both backward to higher level business or operational requirements, and forward to system design specifications and test plans. The program office has not satisfied these three aspects of effective requirements development and management. Specifically, The program office has not developed system-level requirements for Navy Cash. System-level requirements are derived from higher-level operational requirements and are specified at a level of detail needed for system developers to design and build to. Without system requirements, the ability of the program office to understand the impact of any system change requests (i.e., cost, schedule, and performance) and thus make informed decisions about such changes, is limited. For example, although the program office identified a high-level requirement for the system to share information with the Retail Operations Management system used in ships’ store operations, the associated system-level requirements were not defined. As a result, the deployed version of the system was not designed and developed to provide this interface. The requirement for this interface was later realized after a number of system and operational problems surfaced. Addressing these problems through a series of changes required additional time and funding. Program officials acknowledged that more effective requirements development and management practices could have avoided these problems. As another example, a system requirement for automatically deploying software patches to operational systems was not defined. Had this requirement been defined, the system design could have provided for developing a capability to minimize the level of effort required to identify, distribute, and install patches. Instead, a less efficient and labor-intensive manual process has been used. The program office does not have a policy or plans for managing requirements. Such policies and plans establish organizational roles and responsibilities for managing requirements, including maintaining and controlling modifications or changes to the baseline sets of requirements, establishing priorities among competing requests for changes, and assessing the impact on cost, schedule, and performance of each change. In lieu of a policy or plans, the program office has established an ad hoc change control process, whereby change proposals are approved or disapproved by a joint DON and FMS change control board based on a change management policy that was drafted in 2003. However, this policy was never finalized or approved and does not define roles and responsibilities or how requirements will be managed. Further, the board has not been chartered. Moreover, program officials told us that the board’s decisions are made primarily on the basis of consensus about the need for the change and the availability of funds. Other than security requirements, Navy Cash requirements cannot be traced from the higher level business or operational requirements to system design specifications and test plans. 
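The bi-directional traceability described above can be pictured with a minimal sketch; the requirement, design, and test identifiers are hypothetical and simply show how a system-level requirement would point backward to an operational requirement and forward to design specifications and test cases, so that the impact of a proposed change can be assessed.

```python
from dataclasses import dataclass, field

@dataclass
class SystemRequirement:
    req_id: str
    text: str
    parent_operational_req: str                              # backward trace
    design_elements: list = field(default_factory=list)      # forward trace
    test_cases: list = field(default_factory=list)           # forward trace

# Hypothetical entries; identifiers are illustrative only.
reqs = {
    "SYS-042": SystemRequirement(
        "SYS-042",
        "Exchange daily sales totals with the retail operations system.",
        parent_operational_req="OPS-7 (support ships' store operations)",
        design_elements=["IF-ROM-01 interface specification"],
        test_cases=["TC-118"],
    ),
}

def impact_of_change(req_id: str) -> list:
    """List the artifacts that would have to be revisited if a change request
    touches the given system-level requirement."""
    r = reqs[req_id]
    return [r.parent_operational_req, *r.design_elements, *r.test_cases]

print(impact_of_change("SYS-042"))
```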
Specifically, we attempted to trace a sample of Navy Cash system-level requirements backward to high- level requirements and forward to design documents and test plans and results. However, as noted above, no system-level requirements exist. Without this link in the requirements traceability chain, traceability could not be demonstrated. Having requirements traceability is essential for ensuring that developed and deployed system products satisfy operational needs and user expectations. In the case of Navy Cash, where system capabilities are reactive to change requests rather than proactively driven by requirements, such traceability is also essential to understanding the impact to the system of each change request and thus having an informed basis for approving and prioritizing any changes. Program officials acknowledged these weaknesses and recently stated that they intend to address them. To accomplish this, they reported that they have hired a new employee who is to be trained in requirements development and management, and who is to develop a requirements management plan. Until the program office employs fundamental requirements development and management practices, it cannot reliably estimate the program costs and develop schedules needed to accomplish the work associated with delivering predetermined and economically justified system capabilities. The result is an inability to develop and measure performance against meaningful cost, schedule, and capability baselines, and thereby reasonably ensure that the program is meeting expectations and those responsible for it are accountable for results. Proactively managing program risks is a key acquisition management control that, if done properly, can increase the chances of programs delivering promised capabilities and benefits on time and within budget. For Navy Cash, program risks have not been effectively managed. Rather, the program office has reacted to the realization of actual problems. In particular, plans, processes, and procedures are not in place that provide for identifying, controlling, and disclosing risks, and risk management roles and responsibilities have not been assigned to key stakeholders. As a result, the program office is not positioned to proactively avoid the occurrence of cost, schedule, and performance problems. DOD and related guidance recognize the importance of performing effective risk management on programs like Navy Cash. Among other things, effective risk management includes: (1) establishing and implementing a written plan and defined process for risk identification, analysis, and mitigation; (2) assigning responsibility for managing risks to key stakeholders; (3) encouraging program-wide participation in risk management; and (4) examining the status of identified risks during program milestone reviews. The program office has not fully satisfied any of the above cited risk management practices. For example: A written plan or defined process that provides for identifying, analyzing, and mitigating risks has not been established. In the absence of a plan and process, program officials stated that risks are informally addressed during bi-monthly program management reviews that involve key stakeholders, including the program office, FMS, and the financial agent. However, our analysis of minutes of these reviews indicates that they are more focused on reacting to the consequences of actual problems, rather than proactively attempting to avoid the occurrence of potential problems. 
While program officials stated that responsibility for managing risks rests with the program manager, roles and responsibilities for managing and identifying risks have not been documented for any key stakeholders, including individuals in the program office, and with FMS and the financial agent. Without clearly documenting their roles and responsibilities, proactive identification, disclosure, and mitigation of all key risks is unlikely to occur, and program approval and decision making authorities will not be adequately informed. While program officials stated that attending and participating in program management reviews is encouraged, we have yet to receive any verifiable evidence that risks are addressed in these reviews or that involvement in risk management is encouraged. Program officials have yet to provide any verifiable evidence that program decision making and oversight authorities have been apprised of the status of identified risks. Program officials acknowledged the above weaknesses and attributed them to staff turnover in key positions and their focus on deploying the system rather than establishing management processes and procedures. Nevertheless, program officials stated that they intend to develop a risk plan and process, but said that this would not occur until December 2008. Given that a significant amount of development and modernization investment remains, it is important that mitigating existing risks, including those discussed in this report, as well as future risks be treated as a program priority. A number of Navy Cash security management weaknesses exist. Specifically, the program office has not (1) fully implemented a comprehensive patch management process; (2) followed an adequate process for planning, implementing, evaluating, and documenting remedial actions for known information security weaknesses; (3) obtained adequate assurance that FMS has effective security controls in place to protect Navy Cash applications and data; and (4) developed an adequate contingency plan and conducted effective contingency plan testing. Program officials acknowledged these weaknesses but have yet to provide us with plans for addressing them. As a result, the confidentiality, integrity, and availability of deployed and operating Navy Cash shipboard devices, applications, and financial data are at increased risk of being compromised. Patch Management Has Not Been Fully Implemented DOD guidance states that component organizations should develop a process for patching system vulnerabilities. Further, National Institute of Standards and Technology (NIST) guidance recognizes the importance of implementing comprehensive patch management that includes, among other things, (1) having a complete inventory of system hardware and software assets, (2) automatically deploying vulnerability patches, and (3) measuring patch management performance. Although the program office performs patch management for Navy Cash, key practices have not been fully implemented. Specifically, A complete inventory of system assets does not exist. According to NIST, a system inventory enables organizations to monitor system hardware and software assets for the presence of all threats, vulnerabilities, and patches. While the financial agent maintains a Navy Cash asset database for the 128 ships on which the system is operating, this database is missing 3 hardware inventories and 19 software inventories. 
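A complete asset inventory of the kind called for above lends itself to a simple completeness check; the ship names and record layout in the following sketch are hypothetical.

```python
# Ships on which the system is reported to be operating (hypothetical names).
deployed_ships = {"SHIP-A", "SHIP-B", "SHIP-C"}

# Asset database entries keyed by ship (hypothetical records).
asset_db = {
    "SHIP-A": {"hardware": ["server", "ATM-1"], "software": ["navycash 1.4"]},
    "SHIP-B": {"hardware": ["server"], "software": []},  # software inventory missing
}

def inventory_gaps(ships, db):
    """Report ships whose hardware or software inventories are absent, which is
    the precondition for knowing where every threat, vulnerability, and patch applies."""
    gaps = {"missing_hardware": [], "missing_software": []}
    for ship in sorted(ships):
        record = db.get(ship, {})
        if not record.get("hardware"):
            gaps["missing_hardware"].append(ship)
        if not record.get("software"):
            gaps["missing_software"].append(ship)
    return gaps

print(inventory_gaps(deployed_ships, asset_db))
# {'missing_hardware': ['SHIP-C'], 'missing_software': ['SHIP-B', 'SHIP-C']}
```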
According to program officials, the financial agent’s database is incomplete because it was created from purchase orders after the system was in operation. Furthermore, although the program office maintains hardware inventories for each ship in a DON configuration management database, the office does not maintain inventories of Navy Cash software. Until the program office develops a complete inventory of Navy Cash system assets, it will not be able to identify and patch all system threats and vulnerabilities. Vulnerability patches are not deployed in an automated or timely manner. According to NIST guidance, deploying patches automatically minimizes the level of effort and time required to identify, distribute, and install patches. However, patches are currently deployed manually for Navy Cash when ships are in port for maintenance. As a result, the risk of vulnerabilities being exploited before ships return to port is increased. Although the program office plans to introduce the capability to automatically deploy patches as part of the next software release in the first quarter of fiscal year 2009, program officials said that it will take between 18 to 24 months to rollout this capability to the entire fleet. Program officials also stated that they do not know why this capability was not part of the original system requirements and design. Until the program office begins automatically deploying patches, Navy Cash assets and data will be exposed to increased risk. The performance of patch management is not being measured. NIST guidance recommends consistent measurement of the effectiveness of patch management through the use of metrics, such as susceptibility to attack and mitigation response time. Although program officials stated that they maintain patch management metrics, they have yet to provide us with a description of the metrics or an explanation of how they are used. Until the program office develops and uses performance metrics, it will not be able to assess and improve the effectiveness of its patch management effort. To strengthen its patch management efforts, the program office has developed a vulnerability management guide. However, this guide has not been finalized and approved, and according to program officials, it does not follow NIST patch management guidance. Without comprehensive patch management, increased risk exists that system vulnerabilities could be exploited. Remedial Action Plans Have Not Been Documented The Federal Information Security Management Act (FISMA) requires that agencies’ information security programs must include a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the agency. OMB has outlined steps for documenting remedial actions—referred to by OMB as a plan of action and milestones—for systems where IT security weaknesses have been identified. Additionally, NIST guidance states that a plan of action and milestones should be included in a system’s accreditation package and describe how the information system owner intends to address those vulnerabilities by reducing, eliminating, or accepting the identified vulnerabilities. Since the system was accredited in November 2006, the program office has not developed any plans of action and milestones, even though medium and low information security risks were identified during security test and evaluation efforts supporting the certification and accreditation. 
According to program officials, the risks were accepted by the designated approving authority, rather than corrected, because they involve features that are necessary for the system to operate, such as having certain hardware interfaces and access permissions. While accepting rather than correcting such weaknesses is consistent with DON guidance for developing plans of action and milestones, it is not consistent with NIST guidance. Specifically, DON guidance states that these plans are only required for accreditation decisions that are conditional upon corrective actions being taken. However, NIST guidance specifies that the development of a plan of action and milestones should include instances where risk is being accepted. The lack of plans of action and milestones means that the program office has not adequately addressed information security risks. Moreover, the limitations in DON guidance mean that other Navy programs may not have done so as well. Until the program office fully implements a remedial action process that meets the FISMA requirements and OMB and NIST guidance, program management and oversight officials will not have sufficient assurance that all security weaknesses are being reported and tracked, and that options for addressing them are fully considered. Information Security Requirements Have Not Been Fully Defined FISMA requires each federal agency to develop, document, and implement an agencywide information security program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Among other things, this includes testing system management, operational, and technical security controls. Although the program office has partnered with FMS to develop and support the operation of Navy Cash, it is ultimately responsible for ensuring the security of Navy Cash systems and data. The program office has not taken adequate steps to ensure that security controls are tested. Specifically, the memorandum of agreement between the program office and FMS does not establish requirements for FMS and the financial agent relative to periodic information security control reviews, including reviews of applicable management, operational, and technical controls, and to provide DON with copies of information security control reviews that are performed on the Navy Cash system and its supporting infrastructure. This is important because FMS—through its financial agent—provides services that support Navy Cash that must be secure, such as holding and accounting for funds distributed throughout the system and processing transactions. Although FMS has performed some management and operational control tests, such as periodic personnel and physical security assessments of selected commercial facilities that provide services and support to Navy Cash, these assessments were not designed to evaluate the technical controls of the system’s computing environment because the memorandum of agreement does not include such requirements. Until the program office and FMS establish information security requirements for overseeing the financial agent’s technical information security controls, an increased risk exists that the confidentiality, integrity, and availability of information stored, transmitted, and processed by the financial agent can be compromised. OMB guidance requires agencies to develop contingency plans and to test those plans at least annually. 
NIST guidance states that contingency plans should include a sequence of recovery activities, which describe system priorities based on business impact and notification procedures, which describe the methods used to notify personnel with recovery responsibilities. In addition, according to NIST, contingency plan tests should include explicit test objectives and success criteria for each planned activity and related procedure and documentation of lessons learned. Although the program office has developed contingency plans for Navy Cash, it did not identify the sequence of recovery activities and notification procedures for recovery personnel in them. The sequence of activities should prioritize the recovery of system components by criticality and the notification procedures should describe the methods used to notify recovery personnel during business and non-business hours. Until the program office includes these areas in the contingency plans, it cannot ensure that system components will restore in a logical manner and that ship recovery personnel will be notified promptly when a system disruption is detected. In addition, while the program office has largely included explicit test objectives and success criteria in all the test procedures, they did not document the lessons learned. According to NIST, lessons learned can improve contingency plan effectiveness and this should be incorporated into the plan. According to program officials, NIST was not used for developing and conducting tests of the contingency plan. Without lessons learned, the program office will not be able to properly maintain and improve the contingency planning guide. Until DON develops sufficient contingency plans and testing procedures, increased risk exists that Navy Cash systems, data, and operations will not be able to fully recover from a disruption or disaster. Effective management of programs like Navy Cash depends in part on the ability to measure the quality of the system being acquired and operated. One measure of system quality is the trend in the number of unaddressed, high-priority system change requests. Sufficient data to measure trends in open (i.e., unresolved) system change requests, which is a recognized indicator of a system’s stability and quality are not being collected. To the program’s credit, it has formed a group consisting of program office, FMS, and financial agent representatives to review and decide whether to approve requests for changes to the system. However, this group is not consistently collecting data as to when a change request is opened or closed and what the priority level of each change request is. Thus, it does not know at any given time, for example, how many change requests are pending, the significance of pending change requests, and the age of these change requests. Program officials acknowledged these weaknesses but stated that their focus has been on deploying the system. This means that the program office cannot know and disclose to DOD decision makers whether the system’s stability and maturity are moving in the right direction. In addition, the program office has not consistently collected data on user and operator satisfaction with the system. Specifically, the program office conducted two surveys in the last 6 years—a user satisfaction survey and a shipboard merchant satisfaction survey—but neither of these surveys is meaningful. 
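Consistently collected change request data of the kind described above would support simple trend measures; the identifiers, priorities, and dates in the following sketch are hypothetical.

```python
from datetime import date

# Hypothetical change request log; in practice this would be kept by the change control board.
change_requests = [
    {"id": "CR-101", "priority": "high", "opened": date(2008, 1, 10), "closed": date(2008, 2, 1)},
    {"id": "CR-102", "priority": "high", "opened": date(2008, 2, 15), "closed": None},
    {"id": "CR-103", "priority": "low",  "opened": date(2008, 3, 3),  "closed": None},
]

def open_request_trend(requests, as_of):
    """Count unresolved requests by priority and report their ages in days,
    the kind of indicator used to judge a system's stability and maturity."""
    summary = {}
    for cr in requests:
        if cr["closed"] is None or cr["closed"] > as_of:
            age = (as_of - cr["opened"]).days
            summary.setdefault(cr["priority"], []).append((cr["id"], age))
    return summary

print(open_request_trend(change_requests, date(2008, 4, 30)))
# {'high': [('CR-102', 75)], 'low': [('CR-103', 58)]}
```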
More specifically, the user satisfaction survey was done in 2002 and thus is dated; and it covered only two ships and a prototype version of Navy Cash and thus its scope is limited. In addition, neither survey produced a response rate that can be generalized and projected (about 50 percent and 20 percent for the two ships in the user survey, and about 30 percent for the merchant survey). Program officials stated that they have relied on informal user feedback from disbursing officers, who have indicated overall satisfaction with the system. Nevertheless, they said that a survey of users and operators is being planned and expected to be completed by the fall of 2008. Without meaningful data about Navy Cash’s stability and the satisfaction of those who use it, it is not clear Navy Cash is a quality system. Navy Cash’s potential duplication of other DOD programs that perform similar functions, combined with its lack of meaningful economic justification, together mean that the department does not have an adequate basis for knowing whether Navy Cash, as defined, is the most cost-effective solution to meeting its strategic business and technological needs. Because such a basis is absolutely fundamental to informed investment decision making, a compelling case exists for the department to reevaluate current plans for investing almost $60 million of additional modernization funding to further develop the system. Even if reevaluation supports current or modified investment plans, the manner in which the program is being executed remains a source of considerable cost, schedule, and performance risk. In particular, without employing fundamental requirements development and management practices, the department cannot reliably estimate program costs and develop schedules needed to accomplish the work associated with delivering predetermined and economically justified system capabilities. In addition, without effective risk management, the department is not positioned to proactively avoid the occurrence of cost, schedule, and performance problems. Furthermore, the lack of adequate security management puts the confidentiality, integrity, and availability of deployed and operating Navy Cash shipboard devices, applications, and financial data at increased risk of being compromised. Moreover, without meaningful data about the Navy Cash’s stability and the satisfaction of those who use it, it is not clear that Navy Cash is a quality system. To overcome each of these weaknesses, it is important to not only acknowledge them, which the program office has done, but to also treat them as program priorities, including developing and implementing plans for addressing them, which the program office has largely not done. Because of the uncertainty surrounding whether Navy Cash, as defined, represents a cost-effective solution, we recommend that the Secretary of Defense direct the Secretary of the Navy to limit further investment of modernization funding in the program to only (1) deployment to remaining ships of already developed and tested capabilities; (2) correction of information security vulnerabilities and weaknesses on ships where it is deployed and operating; and (3) development of the basis for an informed decision as to whether further development and modernization is economically justified and in the department’s collective best interests. 
To develop the basis for an informed decision about further Navy Cash development, we further recommend that the Secretary of Defense, direct the appropriate DOD organizations to (1) examine the relationships among DOD’s programs for delivering military personnel with smart card technology for electronic retail and banking transactions; (2) identify, in coordination with the respective program offices, alternatives for optimizing the relationships of these programs in a way that minimizes areas of duplication, maximizes reuse of shared services across the programs, and considers opportunities for a consolidated stored value card program across the military services; and (3) share the results with the appropriate organizations for use in making an informed decision about planned investment in Navy Cash. To further develop this basis for an informed decision about Navy Cash development, we also recommend that the Secretary of Defense direct the Secretary of the Navy to ensure that the appropriate Navy organizational entities prepare a reliable economic analysis that encompasses the program’s total life cycle costs, including those of FMS, and that (1) addresses cost-estimating best practices and complies with relevant OMB cost-benefit guidance and (2) incorporates data on whether deployed Navy Cash capabilities are actually producing benefits. To address Navy Cash information security management weaknesses and improve the operational security of the system, we recommend that the Secretary of Defense direct the Secretary of the Navy to ensure that the Navy Cash program manager, in collaboration with the appropriate organizations, take the following five actions: Develop and implement a patch management approach based on NIST guidance, which includes a complete Navy Cash systems inventory; an automated patch deployment capability; and a patch management performance vulnerability measurement capability, including metrics for susceptibility to attack and mitigation response time. Institute a process to plan, implement, evaluate, and document remedial actions for deficiencies in Navy Cash information security policies, procedures, and practices, and ensure that this process meets FISMA requirements, as well as applicable OMB and NIST guidance. Update the NAVSUP/FMS memorandum of agreement, in collaboration with FMS, to establish specific security requirements for FMS and the financial agent to periodically perform information security control reviews, including applicable management, operational, and technical controls, of the Navy Cash system, and to provide NAVSUP with copies of the results of these reviews that pertain to the Navy Cash system and its supporting infrastructure. Develop a complete contingency plan to include a (1) sequence of recovery activities and (2) procedures for notifying ship personnel with contingency plan responsibilities to begin recovery activities; and to test the contingency plan in accordance with NIST guidance, including documenting lessons learned from testing. To address DON information security guidance limitations, we also recommend that the Secretary of Defense direct the Secretary of the Navy to ensure that the Navy Operational Designated Approving Authority, as part of the Naval Network Warfare Command, updates its certification and accreditation guidance to require the development of plans of action and milestones for all above identified security weaknesses. 
If further investment in development of Navy Cash can be justified, we then recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that the Navy Cash program manager takes the following actions. With respect to requirements development and management, (1) develop detailed system requirements; (2) establish policies and plans for managing changes to requirements, including defining roles and responsibilities, and identifying how the integrity of a baseline set of requirements will be maintained; and (3) maintain bi-directional requirements traceability. With respect to risk management, (1) establish and implement a written plan and defined process for risk identification, analysis, and mitigation; (2) assign responsibility for managing risk to key stakeholders; (3) encourage program-wide participation in risk management; (4) include and track the risks discussed in this report as part of a risk inventory; and (5) apprise decision making and oversight authorities of the status of risks identified during program reviews. With respect to system quality measurement, collect and use sufficient data for (1) determining trends in unresolved change requests and (2) understanding users’ satisfaction with the system. Both DOD and FMS provided written comments on a draft of this report. In DOD’s comments, signed by the Deputy Under Secretary of Defense (Business Transformation) and reprinted in appendix II, the department stated that it concurred with 9 of our 11 recommendations, partially concurred with 1, and non-concurred with the remaining 1. In non- concurring with our recommendation for limiting further investment in the program, the department actually concurred with two out of three aspects of the recommendation. Nevertheless, for the aspect of our recommendation aimed at limiting further investment in the program to certain types of spending, it stated that it did not concur with limiting investment to the exclusion of needed maintenance (e.g., technology refresh) of operational systems. We agree with this comment, as it is consistent with statements in our report, including the recommendation summary on the report’s highlights page and the report’s conclusions, both of which focus on limiting investment of modernization funding only, and not operations and maintenance funding. To avoid any misunderstanding as to our intent, we clarified our report. With respect to our recommendation for optimizing the relationships among DOD’s programs that provide smart card technology for electronic retail and banking transactions, the department stated that, while it concurs with the overall intent of the recommendation, it believes that the Office of the Under Secretary of Defense (Comptroller) is the appropriate organization to implement it. Since our intent was not to prescribe the only DOD organization that should be responsible for implementing the recommendation, we have slightly modified the recommendation to provide the department flexibility in this regard. Notwithstanding DOD’s considerable agreement with our recommendations, the department provided additional comments on the findings that underlie several of the recommendations, which it described as needed to clarify and avoid confusion about the program. For various reasons discussed below, we either do not agree with most of these additional comments or do not find them germane to our findings and recommendations. 
First, the department stated that the report’s overall findings understate the program’s discipline and conformance with applicable guidance and best practices. We do not agree. Our review extended to six key acquisition control areas, all of which are reflected in DOD’s own acquisition policies as well as other federal guidance. Effective implementation of these controls can minimize program risks and better ensure that system investments are defined in a way to optimally support mission operations and performance, as well as deliver promised system capabilities and benefits on time and within budget. However, we found that none of these key IT management controls were being effectively implemented on Navy Cash, and the department agreed with our recommendations aimed at correcting this. Second, the department stated that the report’s findings do not accurately capture the program’s maturity since the system has been deployed to over 80 percent of its user base. While we do not question the extent to which the system has been deployed to date, and in fact state in our report that the system has been deployed to about 80 percent of the fleet, we do not agree that the program is mature, as evidenced by the numerous IT management control weaknesses that we found and the fact that about $60 million in modernization funding remains to be spent on the system. Third, the department stated that it recognizes that some security management limitations exist, but added that these limitations do not pose a serious risk to the confidentiality, integrity, or availability of the deployed system, and that our report may cause cardholders to become unnecessarily concerned. We do not agree that these limitations do not pose a serious risk. Our report details a number of serious security management weaknesses relative to both DOD and NIST guidance, such as not following an adequate process for planning, implementing, evaluating, and documenting remedial actions for known information security vulnerabilities, as well as not obtaining adequate assurance that FMS has effective security controls in place to protect Navy Cash applications and data. As a result, we appropriately conclude in our report that such failures to effectively manage Navy Cash security place the confidentiality, integrity, and availability of deployed and operating shipboard devices, applications, and financial data at increased risk of being compromised. Swift implementation of our recommendations is the best solution to alleviating any cardholder concerns that may arise from these weaknesses. In FMS’s comments, signed by the Commissioner of FMS and reprinted in appendix III, the service stated that our recommendations will help strengthen Navy Cash and that it has begun addressing our findings and recommendations. In addition, it stated that it will support DOD in implementing the recommendations, and consistent with DOD, commented that it did not agree with one part of one of our recommendations, adding that limiting investment in Navy Cash beyond fielding and maintaining already tested system capabilities would place future operations at risk. As stated above, this recommendation is focused on limiting further investment in modernization funding, not operations and maintenance funding. To avoid any confusion about this, we have added language to other parts of the report to emphasize this focus.
In addition to the above, and notwithstanding its overall agreement with our recommendations, FMS provided other comments relative to several of the findings that underlie our recommendations. As discussed below, we either do not agree with these additional comments or do not find them to be germane to our findings and recommendations. First, FMS stated that our report does not identify a security breach, loss of cardholder or government funds, unauthorized release of personal or other sensitive information, or any other compromise of system integrity. We agree that our report does not identify these things, as the scope of work was not intended to identify them. Rather, our scope focused on the program’s implementation of key security management controls outlined in DOD and NIST guidance. In this regard, we found serious information security management control weaknesses and concluded that these weaknesses increased the risk to the confidentiality, integrity, and availability of information stored, transmitted, and processed by the financial agent. Second, FMS stated that the issue of whether Navy Cash is duplicative of other similar DOD smart card programs was addressed before Navy Cash was initiated in 2001, when DON and FMS determined that for technical and cost reasons they could not alter the other DOD programs to meet Navy Cash requirements. We do not find this comment relevant to our recommendation because our point is not that one of the other DOD programs should be altered and used in place of Navy Cash. Rather, our point is that these smart card programs need to be looked at collectively to decide whether it is in the department’s best interest to continue investing in separate smart card programs or to invest in a single department-wide solution. This point is consistent with FMS’s stated goal of having a single smart card for DOD. Third, FMS stated that it disagreed with our finding that the Navy Cash benefits projection erroneously counted $40 million as cost savings rather than cost transfers, adding that this value represents not merely a transfer between agencies but actual savings to the United States. While we do not disagree that this interest savings represents a benefit to the United States government, it also represents a cost—interest foregone—to holders of Treasury debt. Therefore, the interest savings represents a transfer from one member or sector of the economy to another rather than a savings. We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Congressional Budget Office; the Secretary of Defense; the Secretary of the Treasury; and the Department of Defense Office of the Inspector General. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact Randolph C. Hite at (202) 512-3439 or hiter@gao.gov, or Gregory C. Wilshusen at (202) 512-3789 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objective was to determine whether the Department of the Navy (DON) is effectively implementing information technology management controls on Navy Cash.
We selected Navy Cash primarily because the Department of Defense’s (DOD) inventory of DON systems identified the program as one of DON’s five largest development and modernization investments. To address the objective, we focused on the following management areas: (1) architectural alignment; (2) economic justification; (3) requirements development and management; (4) risk management; (5) security management; and (6) system quality measurement. In doing so, we analyzed a range of program documentation, such as the acquisition strategy, business case, economic analysis, and agreements between the partnering organizations, and we interviewed cognizant officials, such as the Milestone Decision Authority, the program manager, and Financial Management Service (FMS) and financial agent officials responsible for Navy Cash. To address architectural alignment, we reviewed the program’s business enterprise architecture (BEA) compliance assessments and system architecture products, as well as versions 4.0, 4.1, and 5.0 of the BEA, and compared them to the BEA compliance requirements described in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 and DOD’s BEA compliance guidance, and we evaluated the extent to which the compliance assessments addressed all relevant BEA products. We also reviewed DOD guidance for program architecture development, such as DOD’s Business Transformation Guidance, and compared Navy Cash’s program architecture development activities to this guidance. In addition, we interviewed Navy Cash and FMS officials, as well as Navy Cash’s Milestone Decision Authority, and requested related documentation on the potential duplication between Navy Cash and other DOD programs that involve the use of smart card functionality, such as the Air Force’s and Army’s Eagle Cash and EZpay programs. To address the program’s economic justification, we reviewed the latest economic analysis to determine the basis for the cost and benefit estimates. This included evaluating the analysis against Office of Management and Budget guidance and GAO’s Cost Assessment Guide. In addition, we interviewed cognizant program officials, including the Navy Cash program manager and FMS officials, regarding their respective roles, responsibilities, and actual efforts in developing and/or reviewing the economic analysis and the extent to which measures and metrics showed that projected benefits in the economic analysis were actually being realized. We also interviewed cognizant officials, such as the Milestone Decision Authority, about the purpose and use of the program’s economic analysis for managing the investment in the Navy Cash program. To address requirements development and management, we reviewed relevant program documentation, such as the concept of operations document; interviewed relevant program officials; and evaluated this information against relevant best practices. We also reviewed interface requirements documents, minutes of program management meetings, and traceability of security requirements. In addition, we interviewed program officials involved in the requirements management process to discuss the change control process they use and their roles and responsibilities for managing requirements. To address risk management, we reviewed relevant risk management documentation, such as program management review meeting minutes, and compared the program office’s activities with DOD’s risk management guidance and related best practices.
We analyzed the effectiveness of the program’s management reviews in terms of managing risks. In doing so, we interviewed responsible program officials, such as the program manager, the Milestone Decision Authority, and FMS officials, to discuss their roles and responsibilities and obtain clarification on the program’s approach to managing risks associated with acquiring and implementing Navy Cash. To address security management, we reviewed relevant security documentation, such as DOD and National Institute of Standards and Technology information security guidance, and the Navy Cash afloat and ashore system security authorization agreements. In addition, we observed the system in operation aboard the USS Theodore Roosevelt and discussed security issues with ship personnel, program office, FMS, and financial agent officials. We also reviewed USS Harry S. Truman contingency plan test results. Additionally, we reviewed a database used to maintain the inventory of Navy Cash hardware and software assets as a part of our analysis of the Navy Cash vulnerability management program. Furthermore, we interviewed cognizant DON, FMS, and financial agent officials to discuss their roles and responsibilities and obtain clarification on the program’s approach to protecting the confidentiality, integrity, and availability of Navy Cash systems and information. To address system quality measurement, we reviewed program documentation, such as change request logs, and a plan of action and milestones for change requests. We also compared the program’s data collection and analysis practices relative to these areas to program guidance and best practices. We reviewed the plans for and results of surveys that were performed on user and shipboard merchant satisfaction with Navy Cash, and we interviewed program management and technical officials. We conducted our work at DOD offices and program office and ship facilities in the Washington, D.C. metropolitan area, Norfolk, Virginia, and Mechanicsburg, Pennsylvania, between June 2007 and September 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the contact persons named above, key contributors to this report were Neelaxi Lakhmani (Assistant Director), Jenniffer Wilson (Assistant Director), Ed Glagola (Assistant Director), Monica Anatalio, Carolyn Boyce, Harold Brumm, West Coile, Neil Doherty, Cheryl Dottermusch, Joshua Hammerstein, Mustafa Hassan, Michael Holland, James Houtz, Ethan Iczkovitz, Rebecca LaPaze, Anh Le, Josh Leiling, Mary Marshall, Karen Richey, Melissa Schermerhorn, Karl Seifert, Jonathan Ticehurst, and Adam Vodraska.
GAO has designated the Department of Defense's (DOD) multi-billion dollar business systems modernization efforts as high risk, in part because key information technology (IT) management controls have not been implemented on key investments, such as the Navy Cash program. Initiated in 2001, Navy Cash is a joint Department of the Navy (DON) and Department of the Treasury Financial Management Service (FMS) program to create a cashless environment on ships using smart card technology, and is estimated to cost about $320 million to fully deploy. As requested, GAO analyzed whether DON is effectively implementing IT management controls on the program, including architectural alignment, economic justification, requirements development and management, risk management, security management, and system quality measurement against relevant guidance. Key IT management controls have not been effectively implemented on Navy Cash, to the point that further investment in this program, as it is currently defined, has not been shown to be a prudent and judicious use of scarce modernization resources. In particular, Navy Cash has not been (1) assessed and defined in a way to ensure that it is not duplicative of programs in the Air Force and the Army that use smart card technology for electronic retail transactions and (2) economically justified on the basis of reliable analyses of estimated costs and expected benefits over the program's life. As a result, DON cannot demonstrate that the investment alternative that it is pursuing is the most cost-effective solution to satisfying its mission needs. Moreover, other management controls, which are intended to maximize the chances of delivering defined and justified system capabilities and benefits on time and within budget, have not been effectively implemented. System requirements have not been effectively managed. For example, neither policies nor plans that define how system requirements are to be managed, nor an approved baseline set of requirements that are justified and needed to cost-effectively meet mission needs, exist. Instead, requirements are addressed reactively through requests for changes to the system based primarily on the availability of funding. Program risks have not been effectively managed. In particular, plans, processes, and procedures that provide for identifying, mitigating, and disclosing risks have not been defined, nor have risk-related roles and responsibilities for key stakeholders. System security has not been effectively managed, thus putting the confidentiality, integrity, and availability of deployed and operating shipboard devices, applications, and data at increased risk of being compromised. For example, the mitigation of system vulnerabilities by applying software patches has not been effectively implemented. Key aspects of system quality are not being effectively measured. For example, data for determining trends in unresolved system change requests, which is an indicator of system stability, as well as user feedback on system satisfaction, are not being collected and used. Program oversight and management officials acknowledged these weaknesses and cited turnover of staff in key positions and their primary focus on deploying Navy Cash as reasons for the state of some of these IT management controls. 
Collectively, this means that, after investing about 6 years and $132 million on Navy Cash and planning to invest an additional $60 million to further develop the program, the department has yet to demonstrate through verifiable analysis and evidence that the program, as currently defined, is justified. Moreover, even if a basis for further investment were to be demonstrated, the manner in which the delivery of program capabilities is being managed is not adequate. As a result, the program is at risk of delivering a system solution that falls short of cost, schedule, and performance expectations.
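The system quality measurement weakness summarized above, the lack of data for determining trends in unresolved change requests (an indicator of system stability), can be illustrated with a simple backlog trend calculation. The Python sketch below uses hypothetical monthly counts; it does not represent the program's actual change request data or tooling.

```python
# Hypothetical monthly counts of change requests that remain unresolved at month end.
open_change_requests = {
    "2008-04": 42, "2008-05": 47, "2008-06": 53, "2008-07": 61,
}

def backlog_trend(counts):
    """Average month-over-month change in the unresolved backlog.

    A persistently positive value suggests the program is absorbing change
    requests more slowly than they arrive, one signal of declining stability.
    """
    months = sorted(counts)
    deltas = [counts[later] - counts[earlier]
              for earlier, later in zip(months, months[1:])]
    return sum(deltas) / len(deltas) if deltas else 0.0

print("Average monthly backlog growth:", backlog_trend(open_change_requests))
```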
Biomonitoring—one technique for assessing people’s exposure to chemicals—involves measuring the concentration of chemicals or their by-products in human specimens, such as blood or urine. While biomonitoring has been used to monitor chemical exposures for decades, advances in analytic methods have more recently allowed scientists to measure more chemicals, in smaller concentrations, using smaller samples of blood or urine. As a result, biomonitoring has become more widely used for a variety of applications, including public health research and measuring the impact of certain environmental regulations, such as the decline in blood lead levels that followed reductions in the lead content of gasoline. CDC conducts the most comprehensive biomonitoring program in the country under its National Biomonitoring Program and published the first, second, third, and fourth National Reports on Human Exposure to Environmental Chemicals—in 2001, 2003, 2005, and 2009, respectively—which reported the concentrations of certain chemicals or their by-products in the blood or urine of a representative sample of the U.S. population. For each of these reports, the CDC has increased the number of chemicals studied—from 27 in the first report, to 116 in the second, to 148 in the third, and to 212 in the fourth. Each report is cumulative (containing all the results from previous reports). These reports provide the most comprehensive assessment to date of the exposure of the U.S. population to chemicals in our environment, including such chemicals as acrylamide, arsenic, BPA, triclosan, and perchlorate. These reports have provided a window into the U.S. population’s exposure to chemicals, and the CDC continues to develop new methods for collecting data on additional chemical exposures with each report. For decades, government regulators have used risk assessment to understand the health implications of commercial chemicals. Researchers use this process to estimate how much harm, if any, can be expected from exposure to a given contaminant or mixture of contaminants and to help regulators determine whether the risk is significant enough to require banning or regulating the chemical or other corrective action. Biomonitoring research is difficult to integrate into this risk assessment process, since estimates of human exposure to chemicals have historically been based on the concentration of these chemicals in environmental media and on information about how people are exposed. Biomonitoring data, however, provide a measure of internal dose that is the result of exposure to all environmental media and that depends on how the human body processes and excretes the chemical. EPA has made limited use of biomonitoring data in its assessments of risks posed by chemicals. As we previously reported, one major reason for the agency’s limited use of such data is that, to date, there are no biomonitoring data for most commercial chemicals. The most comprehensive biomonitoring effort providing data relevant to the entire U.S. population includes only 212 chemicals, whereas EPA is currently focusing its chemical assessment and management efforts on the more than 6,000 chemicals that companies produce in quantities of more than 25,000 pounds per year at one site. Current biomonitoring efforts also provide little information on children. Large-scale biomonitoring studies generally omit children because it is difficult to collect biomonitoring data from them.
For example, some parents are concerned about the invasiveness of taking blood samples from their children, and certain other fluids, such as umbilical cord blood or breast milk, are available only in small quantities and only at certain times. Thus, when samples are available from children, they may not be large enough to analyze. A second reason we reported for the agency’s limited use of biomonitoring data is that EPA often lacks the additional information needed to make biomonitoring studies useful in its risk assessment process. In this regard, biomonitoring provides information only on the level of a chemical in a person’s body but not the health impact. The detectable presence of a chemical in a person’s blood or urine does not necessarily mean that the chemical causes harm. While exposure to larger amounts of a chemical may cause an adverse health impact, a smaller amount may be of no health consequence. In addition, biomonitoring data alone do not indicate the source, route, or timing of the exposure, making it difficult to identify the appropriate risk management strategies. For most of the chemicals studied under current biomonitoring programs, more data on chemical effects are needed to understand whether the levels measured in people pose a health concern, but EPA’s ability to require chemical companies to develop such data is limited. As a result, EPA has made few changes to its chemical risk assessments or safeguards in response to the recent proliferation of biomonitoring data. For most chemicals, EPA would need additional data on the following to incorporate biomonitoring into risk assessment: health effects; the sources, routes, and timing of exposure; and the fate of a chemical in the human body. However, as we have discussed in prior reports, EPA will face difficulty in using its authorities under TSCA to require chemical companies to develop health and safety information on the chemicals. In January 2009, we added transforming EPA’s process for assessing and controlling toxic chemicals to our list of high-risk areas warranting attention by Congress and the executive branch. Subsequently, the EPA Administrator set forth goals for updated legislation that would give EPA the mechanisms and authorities to promptly assess and regulate chemicals. EPA has used some biomonitoring data in chemical risk assessment and management, but only when additional studies have provided insight on the health implications of the biomonitoring data. For example, EPA was able to use biomonitoring data on methylmercury—a neurotoxin that accumulates in fish—because studies have drawn a link between the level of this toxin in human blood and adverse neurological effects in children. EPA also used both biomonitoring and traditional risk assessment information to take action on certain perfluorinated chemicals. These chemicals are used in the manufacture of consumer and industrial products, including nonstick cookware coatings; waterproof clothing; and oil-, stain-, and grease-resistant surface treatments. EPA has several biomonitoring research projects under way, but the agency has no system in place to track progress or assess the resources needed specifically for biomonitoring research. For example, EPA awarded grants that are intended to advance the knowledge of children’s exposure to pesticides through the use of biomonitoring and of the potential adverse effects of these exposures. 
The grants issued went to projects that, among other things, investigated the development of less invasive biomarkers than blood samples—such as analyses of saliva or hair samples—as measures of early brain development. Furthermore, EPA has studied the presence of an herbicide in 135 homes with preschool-age children by analyzing soil, air, carpet, dust, food, and urine as well as samples taken from subjects’ hands. The study shed important light on how best to collect urine samples that reflect the external dose of the herbicide and how to develop models that simulate how the body processes specific chemicals. Nonetheless, EPA does not separately track spending or staff time devoted to biomonitoring research. Instead, it places individual biomonitoring research projects within its larger Human Health Research Strategy. While this strategy includes some goals relevant to biomonitoring, EPA has not systematically identified and prioritized the data gaps that prevent it from using biomonitoring data. Nor has it systematically identified the resources needed to reach biomonitoring research goals or the chemicals that need the most additional biomonitoring-related research. Also, EPA has not coordinated its biomonitoring research with that of the many agencies and other groups involved in biomonitoring research, which could impair its ability to address the significant data gaps in this field of research. In addition to the CDC and EPA, several other federal agencies have been involved in biomonitoring research, including the U.S. Department of Health and Human Services’ Agency for Toxic Substances and Disease Registry, entities within the U.S. Department of Health and Human Services’ NIH, and the U.S. Department of Labor’s Occupational Safety and Health Administration. Several states have also initiated biomonitoring programs to examine state and local health concerns, such as arsenic in local water supplies or populations with high fish consumption that may increase mercury exposure. Furthermore, some chemical companies have for decades monitored their workforce for chemical exposure, and chemical industry associations have funded biomonitoring research. Finally, some environmental organizations have conducted biomonitoring studies of small groups of adults and children, including one study on infants. As we previously reported, a national biomonitoring research plan could help better coordinate research and link data needs with collection efforts. EPA has suggested chemicals for future inclusion in the CDC’s National Biomonitoring Program but has not gone any further toward formulating an overall strategy to address data gaps and ensure the progress of biomonitoring research. We have previously noted that to begin addressing the need for biomonitoring research, federal agencies will need to strategically coordinate their efforts and leverage their limited resources. Similarly, the National Academy of Sciences found that the lack of a coordinated research strategy allowed widespread exposures to go undetected, including exposure to flame retardants known as polybrominated diphenyl ethers—chemicals which may cause liver damage, among other things, according to some toxicological studies. The academy noted that a coordinated research strategy would require input from various agencies involved in biomonitoring and supporting disciplines. In addition to EPA, these agencies include the CDC, NIH, the Food and Drug Administration, and the U.S. Department of Agriculture.
Such coordination could strengthen efforts to identify and possibly regulate the sources of the exposure detected by biomonitoring, since the most common sources—that is, food, environmental contamination, and consumer products—are under the jurisdiction of different agencies. We have recommended that EPA develop a comprehensive research strategy to improve its ability to use biomonitoring in its risk assessments. However, though EPA agreed with our recommendation, the agency still lacks such a comprehensive strategy to guide its own research efforts. In addition, we recommended that EPA establish an interagency task force that would coordinate federal biomonitoring research efforts across agencies and leverage available resources. If EPA determines that further authority is necessary, we stated that it should request that the Executive Office of the President establish an interagency task force to coordinate such efforts. Nonetheless, EPA has not established such an interagency task force to coordinate federal biomonitoring research, nor has it informed us that it has requested that the Executive Office of the President do so. EPA has not determined the extent of its authority to obtain biomonitoring data under TSCA, and this authority is generally untested and may be limited. Several provisions of TSCA are potentially relevant. For example, under section 4 of TSCA, EPA can require chemical companies to test chemicals for their effects on health or the environment. However, biomonitoring data indicate only the presence of a chemical in a person’s body and not its impact on the person’s health. EPA told us that biomonitoring data may demonstrate chemical characteristics that would be relevant to a chemical’s effects on health or the environment and that the agency could theoretically require that biomonitoring be used as a methodology for developing such data. EPA’s specific authority to obtain biomonitoring data in this way is untested, however, and EPA is only generally authorized to require the development of such data after meeting certain threshold risk requirements that are difficult, expensive, and time-consuming to meet. EPA may also be able to indirectly require the development of biomonitoring data using the leverage it has under section 5(e) of TSCA, though it has not yet attempted to do so. Under certain circumstances, EPA can use this section to seek an injunction to limit or prohibit the manufacture of a chemical. As an alternative, EPA sometimes issues a consent order that subjects manufacture to certain conditions, including testing, which could include biomonitoring. While EPA may not be explicitly authorized to require the development of such test data under this section, chemical companies have an incentive to provide the requested test data to avoid a more sweeping ban on a chemical’s manufacture. EPA has not indicated whether it will use section 5(e) consent orders to require companies to submit biomonitoring data. Other TSCA provisions allow EPA to collect existing information on chemicals that a company already has, knows about, or could reasonably ascertain. For example, section 8(e) requires chemical companies to report to EPA any information they have obtained that reasonably supports the conclusion that a chemical presents a substantial risk of injury to health or the environment.
EPA asserts that biomonitoring data are reportable as demonstrating a substantial risk if the chemical in question is known to have serious toxic effects and the biomonitoring data indicate a level of exposure previously unknown to EPA. Industry has asked for more guidance on this point, but EPA has not yet revised its guidance. Confusion over the scope of EPA’s authority to collect biomonitoring data under section 8(e) is highlighted by the history leading up to an EPA action against the chemical company E. I. du Pont de Nemours and Company (DuPont). Until 2000, DuPont used the chemical PFOA to make Teflon®. In 1981, DuPont took blood from several female workers and two of their babies. The levels of PFOA in the babies’ blood showed that PFOA had crossed the placental barrier. DuPont also tested the blood of 12 community members, 11 of whom had elevated levels of PFOA in their blood. DuPont did not report either set of results to EPA. After EPA received the results from a third party, DuPont argued that the information was not reportable under TSCA because the mere presence of PFOA in blood did not itself support the conclusion that exposure to PFOA posed any health risks. EPA subsequently filed two actions against DuPont for violating section 8(e) of TSCA by failing to report the biomonitoring data, among other claims. DuPont settled the claims but did not admit that it should have reported the data. However, based on the data it had received, EPA conducted a subsequent risk assessment, which contributed to a finding that PFOA was “likely to be carcinogenic to humans.” In turn, this finding contributed to an agreement by DuPont and others to phase out the use of PFOA by 2015. However, EPA’s authority to obtain biomonitoring data under section 8(e) of TSCA remains untested in court. Given the uncertainties regarding TSCA authorities, we have recommended that EPA determine the extent of its legal authority to require companies to develop and submit biomonitoring data under TSCA. We also recommended that EPA request additional authority from Congress if it determines that such authority is necessary. If EPA determines that no further authority is necessary, we recommended that it develop formal written policies explaining the circumstances under which companies are required to submit biomonitoring data. However, EPA has not yet attempted a comprehensive review of its authority to require companies to develop and submit biomonitoring data. The agency did not disagree with our recommendation, but commented that a case-by-case explanation of its authority might be more useful than a global assessment. However, we continue to believe that an analysis of EPA’s legal authority to obtain biomonitoring data is critical. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of this Subcommittee may have. For further information about this testimony, please contact John B. Stephenson at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include David Bennett, Antoinette Capaccio, Ed Kratzer, and Ben Shouse. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Biomonitoring, which measures chemicals in people's tissues or body fluids, has shown that the U.S. population is widely exposed to chemicals used in everyday products. Some of these have the potential to cause cancer or birth defects. Moreover, children may be more vulnerable to harm from these chemicals than adults. The Environmental Protection Agency (EPA) is authorized under the Toxic Substances Control Act (TSCA) to control chemicals that pose unreasonable health risks. One crucial tool in this process is chemical risk assessment, which involves determining the extent to which populations will be exposed to a chemical and assessing how this exposure affects human health. This testimony, based on GAO's prior work, reviews the (1) extent to which EPA incorporates information from biomonitoring studies into its assessments of chemicals, (2) steps that EPA has taken to improve the usefulness of biomonitoring data, and (3) extent to which EPA has the authority under TSCA to require chemical companies to develop and submit biomonitoring data to EPA. EPA has made limited use of biomonitoring data in its assessments of risks posed by commercial chemicals. One reason is that biomonitoring data relevant to the entire U.S. population exist for only 212 chemicals. In addition, biomonitoring data alone indicate only that a person was somehow exposed to a chemical, not the source of the exposure or its effect on the person's health. For most of the chemicals studied under current biomonitoring programs, more data on chemical effects are needed to understand if the levels measured in people pose a health concern, but EPA's authorities to require chemical companies to develop such data are limited. However, in September 2009, the EPA Administrator set forth goals for updated legislation to give EPA additional authorities to obtain data on chemicals. While EPA has initiated several research programs to make biomonitoring more useful to its risk assessment process, it has not developed a comprehensive strategy for this research that takes into account its own research efforts and those of the multiple federal agencies and other organizations involved in biomonitoring research. EPA does have several important biomonitoring research efforts, including research into the relationships between exposure to harmful chemicals, the resulting concentration of those chemicals in human tissue, and the corresponding health effects. However, without a plan to coordinate its research efforts, EPA has no means to track progress or assess the resources needed specifically for biomonitoring research. Furthermore, according to the National Academy of Sciences, the lack of a coordinated national research strategy has allowed widespread chemical exposures to go undetected, such as exposures to flame retardants. While EPA agreed with GAO's recommendation that EPA develop a comprehensive research strategy, the agency has not yet done so. EPA has not determined the extent of its authority to obtain biomonitoring data under TSCA, and this authority is untested and may be limited. The TSCA section that authorizes EPA to require companies to develop data focuses on health and environmental effects of chemicals. However, biomonitoring data indicate only the presence of a chemical in the body, not its impact on health. It may be easier for EPA to obtain biomonitoring data under other TSCA sections, which allow EPA to collect existing information on chemicals.
For example, TSCA obligates chemical companies to report information that reasonably supports the conclusion that a chemical presents a substantial risk of injury to health or the environment. EPA asserts that biomonitoring data are reportable if a chemical is known to have serious toxic effects and biomonitoring data indicate a level of exposure previously unknown to EPA. EPA took action against a chemical company under this authority in 2004. However, the action was settled without an admission of liability by the company, so EPA's authority to obtain biomonitoring data remains untested. GAO's 2009 report recommended that EPA clarify this authority, but it has not yet done so. The agency did not disagree, but commented that a case-by-case explanation of its authority might be more useful than a global assessment.
As part of our audit of the fiscal years 2011 and 2010 CFS, we considered the federal government’s financial reporting procedures and related internal control. Also, we determined the status of corrective actions taken by Treasury and OMB to address open recommendations relating to the processes used to prepare the CFS detailed in our previous reports. Based on the scope of our work and the effects of the other limitations on the scope of our audit noted throughout our audit report on the fiscal year 2011 CFS, our internal control work would not necessarily identify all deficiencies in internal control, including those that might be material weaknesses or significant deficiencies. We have communicated each of the new control deficiencies to your staff. We performed our audit of the fiscal years 2011 and 2010 CFS in accordance with U.S. generally accepted government auditing standards. We believe that our audit provided a reasonable basis for our conclusions in this report. We requested comments on a draft of this report from the Acting Director of OMB and the Secretary of the Treasury or their designees. OMB provided oral comments, which are summarized in the Agency Comments section of this report. Treasury’s Fiscal Assistant Secretary provided written comments on June 7, 2012, which are reprinted in their entirety in appendix II and are also summarized in the Agency Comments section. Over the past several years, Treasury and OMB have improved the process for preparing the Financial Report of the United States Government (Financial Report) and have addressed several of the issues underlying prior years’ recommendations, including our recommendation regarding the preparation and review of the Management’s Discussion and Analysis and Citizen’s Guide sections of the Financial Report. However, we continued to identify numerous incorrect amounts and inconsistent and incomplete disclosures in the draft 2011 Financial Report, including the consolidated financial statements, and the Notes and Supplemental Information sections of the Financial Report. These errors, inconsistencies, and omissions occurred more frequently in the relatively new and in more complex areas of the Financial Report. For example, several note disclosures in the draft Financial Report related to social insurance, the Troubled Asset Relief Program, Government-Sponsored Enterprises, and federal employee and veteran benefits were inconsistent with related disclosures in federal agencies’ Performance and Accountability Reports or Agency Financial Reports, and in some cases were not accurate or complete. The errors, inconsistencies, and omissions were not identified through Treasury’s and OMB’s processes for preparing and reviewing the draft Financial Report. We communicated these matters to Treasury and OMB officials who revised the Financial Report, as appropriate. While Treasury maintains standard operating procedures for preparing, reviewing, and approving the Financial Report, the extent of errors, inconsistencies, and omissions we identified is evidence of deficiencies in Treasury’s process for preparing and reviewing the draft Financial Report, particularly with respect to relatively new areas and in more complex areas. Treasury procedures did not provide for key federal entity personnel with technical expertise in the relatively new and the more complex areas to be actively involved in the preparation and review process of the Financial Report. 
More active involvement of such key federal entity personnel can help prevent or detect and correct incorrect amounts and inconsistent and incomplete disclosures. Further, Treasury’s procedures for preparing and reviewing the Financial Report did not require review and approval of drafts of the Financial Report by appropriate higher-level Treasury officials of the Office of the Fiscal Assistant Secretary before they were provided to GAO for audit. Also, although OMB had certain informal procedures for its Office of Federal Financial Management’s review and approval of drafts of the Financial Report before they were provided to GAO, it had not documented these procedures. Documented procedures that clearly delineate the roles and responsibilities of the appropriate Treasury and OMB officials can help to provide an effective review process. According to Standards for Internal Control in the Federal Government, one of the key objectives of an organization’s internal control over financial reporting is to provide reasonable assurance as to the reliability of its financial reporting, including its financial statements. (GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: November 1999). These standards define the minimum level of quality acceptable for internal control in the government and provide the basis against which internal control is to be evaluated.) Without effectively implemented preparation, review, and approval processes for drafts of the Financial Report, Treasury and OMB are at risk of presenting information that is incorrect, inconsistent, or incomplete. We recommend that the Acting Director of OMB direct the Controller of OMB to develop and implement written procedures specifying the steps required for effectively reviewing and approving the drafts of the Financial Report before they are provided to GAO, to include clear delineation of the review and approval roles and responsibilities of designated appropriate higher-level officials in OMB’s Office of Federal Financial Management, including the Controller of OMB. We also recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement written procedures related to preparing and reviewing the drafts of the Financial Report before they are provided to GAO, to include clear delineation of the review and approval roles and responsibilities of designated appropriate higher-level officials in the Office of the Fiscal Assistant Secretary, including the Fiscal Assistant Secretary. We further recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to develop and implement procedures to provide for the active involvement of key federal entity personnel with technical expertise in relatively new areas and more complex areas in the preparation and review process of the Financial Report. For many years, we have reported that Treasury had not established a formal process to reasonably assure that the CFS, including the related notes, were presented in conformity with generally accepted accounting principles (GAAP). Over the past several years, Treasury has developed procedures utilizing a financial reporting disclosure checklist (CFS disclosure checklist) that is intended to significantly improve Treasury’s ability to timely identify GAAP requirements, assess the effect of any omitted disclosures, and document decisions reached with regard to the omission of any disclosures and the rationale for such decisions.
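A disclosure checklist of this kind can be thought of as a list of GAAP requirements, each with documented preparer and approver sign-offs and a completion deadline. The following Python sketch is purely illustrative: its item names and dates are hypothetical, and it does not represent Treasury's actual CFS disclosure checklist or the systems used to prepare the Financial Report.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ChecklistItem:
    """One GAAP disclosure requirement tracked on the checklist."""
    requirement: str
    preparer_signoff: Optional[date] = None
    approver_signoff: Optional[date] = None

def overdue_items(items: List[ChecklistItem], deadline: date) -> List[ChecklistItem]:
    """Return items whose final review and approval was not documented by the deadline."""
    return [item for item in items
            if item.approver_signoff is None or item.approver_signoff > deadline]

# Hypothetical entries; a real checklist would cover all GAAP-required disclosures.
checklist = [
    ChecklistItem("Social insurance note disclosures", date(2011, 11, 20), date(2011, 12, 2)),
    ChecklistItem("Federal employee and veteran benefits payable", date(2011, 11, 28), None),
]
for item in overdue_items(checklist, deadline=date(2011, 12, 5)):
    print("Final review and approval not documented on time:", item.requirement)
```

Tracking each item against an explicit deadline is what allows late or missing approvals of the kind discussed below to be detected when they occur rather than after the fact.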
However, during our audit of the fiscal year 2011 CFS, we determined that Treasury’s assessment and documentation regarding the reporting of certain financial information required by GAAP continued to be impaired. Specifically, we found that Treasury officials did not complete and document their required review and approval of the CFS disclosure checklist within the time frames established by Treasury’s policies and procedures. In response to our prior recommendation, Treasury developed procedures, including use of its CFS disclosure checklist, to help determine that all disclosures required by GAAP are included in the CFS. Several years ago, Treasury established a standard operating procedure (SOP), entitled “The FR Disclosure List,” to update the CFS disclosure checklist to reflect new disclosures that are required to be included in the CFS. In fiscal year 2011, Treasury further enhanced the SOP to include procedures for periodically updating the CFS disclosure checklist and documenting preparer sign-offs and managerial approvals. Specifically, the fiscal year 2011 enhancement to the SOP requires the CFS disclosure checklist to be revised annually, as necessary, to incorporate any (1) new and amended disclosure requirements effective for the current year’s reporting and (2) additional information necessary to address GAO audit recommendations related to financial disclosure. The SOP was also modified to require a final review and sign-off on the CFS disclosure checklist by the Financial Reports Division (FRD) Director in Treasury’s Financial Management Service (FMS) by December 5, 2011, to help provide reasonable assurance that the disclosures are in conformity with GAAP. However, we found that the final review and approval by the FRD Director was not completed and documented by December 5, 2011, as required by the SOP. We also noted that the SOP did not require Treasury to use the CFS disclosure checklist to assist in preparing the format draft that Treasury prepares in advance of the year-end consolidation. Using the CFS disclosure checklist to assist in preparing the format draft could assist Treasury in completing the final checklist on a timely basis. As a result, Treasury was limited in its ability to rely on the CFS disclosure checklist to reasonably assure that the CFS was prepared in conformity with GAAP as intended by the SOP. Specifically, the lack of timely review and approval of the CFS disclosure checklist limited Treasury’s ability to reasonably assure that all GAAP-required disclosures are included in the CFS. To help to provide reasonable assurance that the information reported in the CFS is complete, accurate, and in conformity with GAAP, the Secretary of the Treasury should direct the Fiscal Assistant Secretary to (1) establish a mechanism to ensure that Treasury’s CFS disclosure checklist is reviewed and approved by the date in Treasury’s policies and procedures and (2) revise the SOP to include requirements for using the CFS disclosure checklist to prepare the format draft of the CFS and to update the CFS disclosure checklist as necessary when subsequent drafts of the CFS are prepared. Over the past several years, Treasury has made progress in developing, documenting, and implementing numerous improvements to its SOPs intended to enhance internal control over the process for preparing the CFS. However, in fiscal year 2011, we identified a control deficiency involving Treasury’s review of audited closing packages.
In connection with Treasury’s role as preparer of the CFS, Treasury management is responsible for developing and documenting detailed policies and procedures for preparing the CFS and ensuring that appropriate internal control is built into and is an integral part of the CFS compilation process. Standards for Internal Control in the Federal Government calls for clear documentation of policies and procedures. Treasury’s SOP entitled “Data Analysis” includes procedures for Treasury staff to compare financial information submitted by federal entities through their audited closing packages to the entities’ audited financial statements for consistency and to work with the entities to correct any material inconsistencies identified by Treasury. However, Treasury’s SOP did not include steps to pursue instances where the information provided to Treasury contains indications that federal entities’ financial information submitted through the closing package, even if consistent with the entities’ audited financial statements, may not be in conformity with GAAP. Steps to pursue these instances would be particularly relevant when a new federal accounting standard is implemented, to reasonably assure appropriate and consistent application across government. For example, as part of its fiscal year 2011 CFS compilation process, Treasury did not identify federal entities’ potential GAAP exceptions related to Statement of Federal Financial Accounting Standards No. 33, Pensions, Other Retirement Benefits, and Other Postemployment Benefits: Reporting the Gains and Losses from Changes in Assumptions and Selecting Discount Rates and Valuation Dates, which was first implemented in fiscal year 2010. As part of our fiscal year 2011 audit, we raised concerns that the financial information presented at the governmentwide level, which was provided by federal entities, may not be in conformity with GAAP. However, there was not sufficient time for Treasury to pursue and resolve our concerns. As a result, Treasury was unable to reasonably assure that such Federal Employee and Veteran Benefits Payable information in the fiscal year 2011 CFS was presented in conformity with GAAP. Inadequate policies and procedures increase the risk that errors in the compilation process could go undetected and result in misstatements in the financial statements or incomplete and inaccurate disclosure of information within the Financial Report. To help to provide reasonable assurance that financial information is properly reported in the CFS, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to enhance the SOP entitled “Data Analysis” to include required steps for pursuing any instances where the information provided to Treasury contains indications that financial information provided by federal entities for inclusion in the CFS may not be in conformity with GAAP, particularly with respect to any recent changes in GAAP. The Treasury Financial Manual section entitled “Agency Reporting Requirements for the Financial Report of the United States Government” requires federal entities to report intragovernmental balances quarterly to Treasury and work with their trading partners to reconcile and resolve intragovernmental differences. Treasury developed the Intragovernmental Reporting and Analysis System (IRAS) to begin to address the long-standing weakness we reported regarding the federal government’s inability to adequately account for and reconcile intragovernmental activity and balances.
Using IRAS, Treasury generates reports on a quarterly basis to assist federal entities in identifying, reconciling, and resolving intragovernmental differences with their trading partners prior to year-end reporting. Further, Treasury personnel use IRAS reports to monitor entities’ progress in reconciling their intragovernmental differences both quarterly and at year-end. Treasury’s SOP entitled “Intragovernmental Quarterly Reporting Process and Analysis” includes IRAS validation procedures for Treasury personnel to validate the IRAS reports for accuracy prior to providing them to federal entities. However, during our fiscal year 2011 audit, we found control deficiencies over the design and implementation of the IRAS data validation process. Specifically, during our fiscal year 2011 audit, we found that one individual at Treasury, who was the developer of IRAS, also had several other incompatible roles and responsibilities, including serving as the IRAS administrator as well as a review accountant for one of the federal entities included in the IRAS process. In these various roles, his responsibilities included uploading the federal entities’ reported intragovernmental data into IRAS, using IRAS to process the data and generate the IRAS reports, and monitoring his assigned entity’s progress in reconciling and resolving the intragovernmental differences with its trading partners. As such, his roles included responsibilities for much of the process for identifying, reconciling, and resolving intragovernmental differences. Standards for Internal Control in the Federal Government calls for segregation of duties among different people in order to reduce the risk of error or fraud, thus preventing a single individual from having full control of a transaction or event. Treasury noted that it reduced the risk of error through the required quarterly IRAS validation process, which provides for the IRAS administrator to verify randomly selected federal entities’ submitted data for consistency and completeness with IRAS reports. The process also calls for the IRAS team leader to review and approve the IRAS administrator’s testing documentation prior to the distribution of IRAS reports to the federal entities. In addition, the process includes completing a checklist to document that these procedures have been performed. If properly designed and effectively implemented, this process could reduce the risk of error caused by inadequate segregation of duties. However, we found that although the IRAS validation process calls for testing of randomly selected federal entities’ data with IRAS reports for consistency and completeness, this procedure did not require testing of the federal entity that the administrator was responsible for under his review accountant’s role. Further, we found that since the IRAS validation checklist has been in place—the third and fourth quarters of fiscal year 2011—the completed checklists did not always include the IRAS team leader’s signature to document that the required review took place. We also noted that the data from IRAS were provided to federal entities for use prior to the validations, increasing the risk of entities receiving inaccurate reports. These deficiencies in the design and implementation of the IRAS validation process impair Treasury’s assurance that it has reduced the risk of error in the IRAS reports that Treasury and federal entities depend on to help identify and reconcile intragovernmental differences between federal entities and their trading partners. 
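One way to avoid the sampling gap described above is to guarantee that the administrator's assigned entity is always part of the quarterly validation sample, with the remaining slots filled at random. The Python sketch below illustrates this selection rule with hypothetical entity codes; it is not Treasury's actual IRAS validation procedure. The recommendation that follows formalizes the same idea.

```python
import random

def select_validation_sample(entities, administrators_entity, sample_size, seed=None):
    """Pick entities whose submitted data will be compared with IRAS reports.

    The administrator's assigned entity is always included so that the person
    who uploads and processes the data never escapes independent testing; the
    remaining slots are filled at random from the other entities.
    """
    rng = random.Random(seed)
    others = [e for e in entities if e != administrators_entity]
    extra = rng.sample(others, max(0, min(sample_size - 1, len(others))))
    return [administrators_entity] + extra

# Hypothetical trading-partner codes, not actual Treasury entity identifiers.
entities = ["ENTITY-A", "ENTITY-B", "ENTITY-C", "ENTITY-D", "ENTITY-E"]
print(select_validation_sample(entities, administrators_entity="ENTITY-C",
                               sample_size=3, seed=2011))
```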
To help to provide reasonable assurance that appropriate controls are in place to reduce the risk of errors in IRAS reports, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to (1) enhance the IRAS validation procedures, at a minimum, to include specific steps for testing intragovernmental data of the administrator’s assigned entity and (2) establish a mechanism for ensuring that all steps in the required validation process are completed, documented, and reviewed prior to the distribution of IRAS reports. Treasury’s SOP entitled “Significant Federal Entities Identification” includes procedures for Treasury to annually assess whether federal entities that were previously determined to be nonsignificant have become significant to the Financial Report. The SOP also includes procedures for Treasury, in coordination with OMB, to help provide reasonable assurance that any newly identified significant entities comply with the reporting requirements for significant entities. Treasury’s assessments are based on prior year financial information. In addition, the SOP requires federal entities identified as significant to the Financial Report to submit a closing package to Treasury that includes audited special purpose financial statements that have been appropriately reclassified in accordance with CFS reporting requirements for inclusion in the Financial Report. In fiscal year 2011, we found that Treasury’s and OMB’s processes were not effective in ensuring timely submission of audited closing packages by entities newly identified as significant to the Financial Report because of deficiencies in the design of Treasury’s related policies and procedures. Specifically, we found that Treasury’s SOP did not include procedures to (1) identify any federal entities that became significant to the Financial Report during the fiscal year but were not identified as significant in the prior fiscal year and (2) obtain audited closing packages from newly identified entities in the year they are determined to be significant, including timely written notification to newly identified significant entities. Without these procedures, Treasury is unable to reasonably assure that it has appropriate audit assurance over financial information for all federal entities that are significant to the Financial Report. To help to provide reasonable assurance that Treasury timely receives the audited closing package from those federal entities that are newly identified as being significant to the Financial Report, we recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled “Significant Federal Entities Identification” to include procedures for (1) identifying any entities that become significant to the Financial Report during the fiscal year but were not identified as significant in the prior fiscal year and (2) obtaining audited closing packages from newly identified significant entities in the year they become significant, including timely written notification to newly identified significant entities. Of our 50 recommendations from our prior reports regarding control deficiencies in the CFS preparation process that were open at the end of the fiscal year 2010 audit, we were able to close 12 during our fiscal year 2011 audit, generally as a result of corrective actions taken by Treasury.
The other 38 recommendations remained open as of December 12, 2011, the date of our report on the audit of the fiscal year 2011 CFS. Appendix I summarizes the status as of December 12, 2011, for the 50 open recommendations from our prior years' reports. Specifically, appendix I includes the status according to Treasury and OMB, as well as our own assessments where appropriate. The status of recommendations per GAO includes explanatory comments on Treasury's and OMB's information. We will continue to monitor Treasury's and OMB's progress in addressing our recommendations as part of our fiscal year 2012 CFS audit. In oral comments on a draft of this report, OMB generally concurred with our findings. In written comments on a draft of this report, Treasury concurred with our findings and noted that the agency has made significant progress in enhancing its policies and procedures for the CFS preparation since the issuance of the fiscal year 2011 audit report. Also, Treasury stated that it expects to implement additional recommendations by the end of fiscal year 2012, and that it will use our findings to continue to improve the central accounting and compilation activities associated with the CFS. In addition, Treasury stated that it has given great management attention and staff resources to resolving material intragovernmental differences along with developing or improving process controls within FMS and the federal agencies. Treasury also stated that its strategy includes the use of multiple focus groups that have identified both short- and long-term solutions through analysis of material differences and working closely with agencies to identify root causes of differences. Further, Treasury stated that it has developed General Fund accounts that will provide Treasury with the capability to reconcile to federal agency financial reporting data. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the Senate and House Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report. We are sending copies of this report to interested congressional committees, the Fiscal Assistant Secretary of the Treasury, and the Controller of OMB's Office of Federal Financial Management. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact me at (202) 512-3406 or engelg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Recommendation As the Department of the Treasury (Treasury) is designing its new financial statement compilation process to begin with the fiscal year 2004 consolidated financial statements of the U.S.
government (CFS), the Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of the Office of Management and Budget (OMB), to develop reconciliation procedures that will aid in understanding and controlling the net position balance as well as eliminate the plugs previously associated with compiling the CFS. To eliminate or explain adjustments to net position, Treasury eliminates, at the consolidated level, intragovernmental activity and balances using formal balanced accounting entries (via Reciprocal Categories) and analyzes transactions that contribute to the unmatched transactions and balances adjustment, i.e., the plug. Major contributors to the plug are transactions with the General Fund (within Reciprocal Category 29). A Treasury task group is currently developing the Schedule of General Fund Authority, with the goal of entering audited data for fiscal year 2013 into the Governmentwide Financial Report System (GFRS) and removing General Fund transactions from the plug. In the interim, Treasury will separately identify certain General Fund transactions by providing agencies with monthly STAR/CARS (Central Accounting Reporting System) data to facilitate reconciliation on a quarterly basis. In addition, Treasury has revised the guidance related to the appropriate use of "Trading Partner 99" (General Fund). This guidance will be issued in fiscal year 2012, to be effective in fiscal year 2013. Also, throughout fiscal year 2012, Treasury will continue to identify and resolve, via communications to, and assistance from, agencies, material differences related to fiduciary, employee benefits, buy/sell, and transfer activity through the continued efforts of Treasury focus groups to identify and mitigate root causes and implement short- and long-term solutions. Open. Recommendation As OMB continues to make strides to address issues related to intragovernmental transactions, the Director of OMB should direct the Controller of OMB to develop policies and procedures that document how OMB will enforce the business rules provided in OMB Memorandum M-07-03, Business Rules for Intragovernmental Transactions. Per Treasury and OMB Treasury has taken the lead role for resolving intragovernmental disputes and major differences between trading partners. In fiscal year 2011, Treasury published the updated Intragovernmental Business Rules, which include dispute resolution procedures for trading partner agencies to follow. Treasury's dispute resolution process includes a new Intragovernmental Dispute Resolution Request Form to be certified by federal entity chief financial officers (CFOs). Treasury, as the enforcer of the updated business rules and working with OMB as necessary, will work with agencies to help ensure the effectiveness of the dispute resolution process during each fiscal year and will document the resolutions. Per GAO Closed. As OMB continues to make strides to address issues related to intragovernmental transactions, the Director of OMB should direct the Controller of OMB to require that significant differences noted between business partners be resolved and the resolution be documented. See the status for recommendation No. 02-6. Closed.
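As context for the eliminations and the unmatched transactions and balances adjustment (the plug) described above, the following short Python sketch pairs hypothetical trading partner balances by reciprocal category and accumulates whatever fails to offset. The agency names, category labels, and amounts are invented for illustration and do not represent GFRS data.

# Hypothetical reported balances: (entity, trading partner, reciprocal category) -> amount.
# A balance and its partner's mirror-image record should offset to zero.
reported = {
    ("Agency X", "Agency Y", "RC 22"): 100.0,      # X reports a receivable from Y
    ("Agency Y", "Agency X", "RC 22"): -98.0,      # Y reports the matching payable
    ("Agency Z", "General Fund", "RC 29"): 250.0,  # no matching General Fund record
}

def unmatched_adjustment(reported):
    """Sum the portion of each trading partner pair that does not offset;
    the residual is analogous to the plug described above."""
    plug = 0.0
    seen = set()
    for (entity, partner, category), amount in reported.items():
        pair = (frozenset([entity, partner]), category)
        if pair in seen:
            continue
        seen.add(pair)
        mirror = reported.get((partner, entity, category), 0.0)
        plug += amount + mirror  # fully offsetting records contribute zero
    return plug

print(f"Unmatched transactions and balances adjustment: {unmatched_adjustment(reported):,.1f}")
# Prints 252.0: 2.0 from the RC 22 mismatch plus 250.0 with no General Fund mirror.

In Treasury's actual process the offsetting is accomplished through formal balanced accounting entries rather than a simple sum, but the sketch illustrates why unmatched General Fund activity within Reciprocal Category 29 flows directly into the adjustment.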
Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to design procedures that will account for the difference in intragovernmental assets and liabilities throughout the compilation process by means of formal consolidating and elimination accounting entries. Per Treasury and OMB Treasury has designed and implemented formal consolidating and eliminating procedures with regard to intragovernmental assets and liabilities, but some issues remain. During fiscal year 2012, Treasury is revising existing guidance and policies, and developing new guidance and policies as needed, for these remaining issues. Upon implementation of the revised guidance and policies by the agencies, Treasury’s consolidation and elimination accounting entries should effectively account for the difference in intragovernmental assets and liabilities. Final resolution is contingent on fully resolving material intragovernmental differences. See the status for recommendation No. 02-4. Per GAO Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop solutions for intragovernmental activity and balance issues relating to federal agencies’ accounting, reconciling, and reporting in areas other than those OMB now requires be reconciled, primarily areas relating to appropriations. See the status for recommendation No. 02-4. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to reconcile the change in intragovernmental assets and liabilities for the fiscal year, including the amount and nature of all changes in intragovernmental assets or liabilities not attributable to cost and revenue activity recognized during the fiscal year. Examples of these differences would include capitalized purchases, such as inventory or equipment, and deferred revenue. Treasury’s consolidating procedures request information from the agencies related to asset capitalization and agency advances or deferred revenue to assist in ensuring the proper reporting of this activity. During fiscal year 2012, Treasury will finalize guidance related to intragovernmental capitalized purchases to be effective in fiscal year 2013. See the status of recommendation No. 02-4 and No. 02-9. Open. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop and implement a process that adequately identifies and reports items needed to reconcile net operating cost and unified budget surplus (or deficit). Treasury should report “net unreconciled differences” included in the net operating results line item as a separate reconciling activity in the reconciliation statement. Per Treasury and OMB These unmatched transactions and balances will be reflected only in the Statements of Operations and Changes in Net Position until intragovernmental differences are materially resolved. At that point, unresolved reconciling items, if any, needed to reconcile net operating cost to the unified budget deficit can be separately identified in the reconciliation statements. See the status of recommendation No. 02- 13. Per GAO Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop and implement a process that adequately identifies and reports items needed to reconcile net operating cost and unified budget surplus (or deficit). 
Treasury should develop policies and procedures to ensure completeness of reporting and document how all the applicable components reported in the other consolidated financial statements (and related note disclosures included in the CFS) were properly reflected in the reconciliation statement. During fiscal year 2012, Treasury is refining its methodology for reconciling operating revenue to budgetary receipts (a component of the unified budget deficit), in collaboration with agencies, and is in the process of readying the methodology for review and comment from all agencies and GAO. In addition, during fiscal year 2012, Treasury will work with OMB to further identify and define all sources of budgetary receipts reported to STAR/CARS. Treasury will take into account GAO review comments, if any, during finalization of the methodology for the CFS and the resolution of unresolved differences. Lastly, a group formed by the Association of Government Accountants (AGA group) is performing an independent review of the compilation of the reconciliation statement and cash statement and will provide short- and long-term solutions for improving the completeness of these statements and consistency with underlying agency financial statement data. Treasury will consider AGA group’s recommendations in revising its reconciliation methodology. Open. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop and implement a process that adequately identifies and reports items needed to reconcile net operating cost and unified budget surplus (or deficit). Treasury should establish reporting materiality thresholds for determining which agency financial statement activities to collect and report at the governmentwide level to assist in ensuring that the reconciliation statement is useful and conveys meaningful information. Per Treasury and OMB During fiscal year 2012, as Treasury works on its reconciliation methodology, it will request more information from agencies related to certain items in the reconciliation statement due to their reporting of certain activity on a net basis instead of on a disaggregated basis. Based on the results of this work, Treasury will determine what additional information is needed from the agencies. Once all disaggregated information is obtained, Treasury can implement its reporting materiality policy to provide more meaningful and useful information in the CFS. See also the status of recommendation No. 02-13. Per GAO Open. If Treasury chooses to continue using information from both federal agencies’ financial statements and Treasury’s central accounting and reporting system (STAR), Treasury should demonstrate how the amounts from STAR reconcile to federal agencies’ financial statements. Treasury has chosen to use information from STAR/CARS and has identified material areas where STAR/CARS data does not reconcile to federal agencies’ financial statements. Treasury will continue to work on its reconciliation methodology during fiscal year 2012 to further resolve these reconciliation issues. In addition, the reconciliation methodology will be revised pending implementation of the AGA group’s recommendations. See also the status of recommendation No. 02-13. Open. If Treasury chooses to continue using information from both federal agencies’ financial statements and from STAR, Treasury should identify and document the cause of any significant differences, if any are noted. See the status of recommendation No. 02-13. Open. 
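The reconciliation and materiality recommendations above amount to a difference-and-threshold screen: compare centrally reported amounts (for example, from STAR) with the amounts in agency financial statements and document the cause of any difference at or above a reporting materiality threshold. A minimal Python sketch follows; the line items, amounts, and threshold are invented for illustration and are not Treasury or agency data.

# Illustrative line-item amounts in billions of dollars; figures and threshold are hypothetical.
central_system = {"budgetary receipts": 2450.0, "net outlays": 3600.0}
agency_statements = {"budgetary receipts": 2443.5, "net outlays": 3599.2}
MATERIALITY_THRESHOLD = 5.0  # flag differences of $5 billion or more for follow-up

def screen_differences(central, agency, threshold):
    """Return line items whose central-system and agency-reported amounts
    differ by at least the threshold, so the cause can be documented."""
    flagged = {}
    for item, central_amount in central.items():
        difference = central_amount - agency.get(item, 0.0)
        if abs(difference) >= threshold:
            flagged[item] = difference
    return flagged

for item, diff in screen_differences(central_system, agency_statements, MATERIALITY_THRESHOLD).items():
    print(f"{item}: difference of {diff:+.1f} billion requires a documented cause")

Only the receipts line is flagged in this example, which is the intended effect of a materiality threshold: attention goes to the differences large enough to affect the reconciliation statement.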
Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement a process to ensure that the Statement of Changes in Cash Balance from Unified Budget and Other Activities properly reflects the activities reported in federal agencies' audited financial statements. Treasury should document the consistency of the significant line items on this statement with federal agencies' audited financial statements. Per Treasury and OMB Treasury has chosen to use information from STAR/CARS. During fiscal year 2012, Treasury will continue to work on its reconciliation statement methodology, which also affects related line items on the cash statement. Once fully developed, the reconciliation methodology will also provide consistency of significant line items on the cash statement with the underlying federal agencies' audited financial statements. In addition, the reconciliation methodology will be revised pending implementation of the AGA group's recommendations. See also the status of recommendation No. 02-13 and No. 02-15. Per GAO Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement a process to ensure that the Statement of Changes in Cash Balance from Unified Budget and Other Activities properly reflects the activities reported in federal agencies' audited financial statements. Treasury should explain and document the differences between the operating revenue amount reported on the Statement of Operations and Changes in Net Position and unified budget receipts reported on the Statement of Changes in Cash Balance from Unified Budget and Other Activities. Treasury will refine its reconciliation methodology for reconciling budgetary receipts to net operating revenue during fiscal year 2012. Treasury will again work with the agencies that contribute to the largest unreconciled differences to identify the causes of the differences and to resolve them. Treasury will take into account GAO comments, if any, as it finalizes the reconciliation methodology for resolving these differences, as well as consider the AGA group's recommendations related to the compilation of the reconciliation statement and cash statement. Open. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to perform an assessment to define the reporting entity, including its specific components, in conformity with the criteria issued by the Federal Accounting Standards Advisory Board (FASAB). Key decisions made in this assessment should be documented, including the reason for including or excluding components and the basis for concluding on any issue. Particular emphasis should be placed on demonstrating that any financial information that should be included but is not included is immaterial. Per Treasury and OMB During fiscal year 2012, Treasury will address some of the issues raised by FASAB's Reporting Entity Task Force and is supporting and participating in the task force's efforts to help gain clarity and finality on this issue. Per GAO Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to provide in the financial statements all the financial information relevant to the defined reporting entity, in all material respects.
Such information would include, for example, the reporting entity's assets, liabilities, and revenues. See the status of recommendation No. 02-22. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to disclose in the financial statements all information that is necessary to inform users adequately about the reporting entity. Such disclosures should clearly describe the reporting entity and explain the reason for excluding any components that are not included in the defined reporting entity. See the status of recommendation No. 02-22. Open. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to help ensure that federal agencies provide adequate information in their legal representation letters regarding the expected outcomes of the cases. Per Treasury and OMB During fiscal year 2011, with few exceptions, the agencies provided Treasury and OMB adequate information in their legal representation letters regarding the expected outcomes of the cases. Treasury and OMB will work with those few agencies so that they provide all required information in fiscal year 2012. Treasury has already held a "lessons learned" meeting related to the fiscal year 2011 process and will work during the year with the Department of Justice (Justice), OMB, GAO, and the agencies to determine whether further changes in policy or guidance (e.g., OMB Memorandum 01-02) are needed for all agencies to provide the required information regarding the expected outcomes of legal cases in their legal representation letters. Per GAO Open. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies develop a detailed schedule of all major treaties and other international agreements that obligate the U.S. government to provide cash, goods, or services, or that create other financial arrangements that are contingent on the occurrence or nonoccurrence of future events (a starting point for compiling these data could be the State Department's Treaties in Force). Per Treasury and OMB Agencies are currently required to report contingencies in their financial statements and notes pursuant to generally accepted accounting principles (GAAP) guidance. In addition, OMB Circular A-136 specifically references the inclusion of treaties and international agreements within "Commitments and Contingencies." Further, agencies include specific representations with respect to material liabilities or contingencies in their management representations. In addition, the financial statements of most significant entities and many other federal entities received unqualified audit opinions. However, no additional analysis of treaties has been performed to reasonably ensure that all of the federal government's treaties are considered in agency analyses or that agencies are consistently analyzing treaties for recognition or disclosure. Treasury will annually review agency financial statements, audit reports, and management representation letters for any references to treaties and international agreements and, if deemed material, will disclose them in the CFS. Per GAO Open.
Until a comprehensive analysis of major treaty and other international agreement information has been performed, Treasury and OMB are precluded from determining if additional disclosure is required by GAAP in the CFS, and we are precluded from determining whether the omitted information is material. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies classify all such scheduled major treaties and other international agreements as commitments or contingencies. See the status of recommendation No. 02-37. Open. See the status of recommendation No. 02-37. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that have a reasonably possible chance of resulting in a loss or claim as a contingency. Per Treasury and OMB See the status of recommendation No. 02-37. Per GAO Open. See the status of recommendation No. 02-37. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that are classified as commitments and that may require measurable future financial obligations. See the status of recommendation No. 02-37. Open. See the status of recommendation No. 02-37. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies take steps to prevent major treaties and other international agreements that are classified as remote from being recorded or disclosed as probable or reasonably possible in the CFS. See the status of recommendation No. 02-37. Open. See the status of recommendation No. 02-37. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary to ensure that the note disclosure for stewardship responsibilities related to the risk assumed for federal insurance and guarantee programs meets the requirements of Statement of Federal Financial Accounting Standards (SFFAS) No. 
5, Accounting for Liabilities of the Federal Government, paragraph 106, which requires that when financial information pursuant to Financial Accounting Standards Board standards on federal insurance and guarantee programs conducted by government corporations is incorporated in general purpose financial reports of a larger federal reporting entity, the entity should report as required supplementary information what amounts and periodic change in those amounts would be reported under the “risk assumed” approach. Per Treasury and OMB This information was requested from federal agencies for disclosure in the fiscal year 2011 CFS. Treasury will work with agencies to improve the consistency of this disclosure in the fiscal year 2012 CFS and will also monitor the work of the FASAB task force that is reviewing the reporting for risk assumed. Per GAO Open. Treasury’s reporting in this area is not complete. The CFS should include all major federal insurance programs in the risk assumed reporting and analysis. Also, since future events are uncertain, risk assumed information should include indicators of the range of uncertainty around expected estimates, including indicators of the sensitivity of the estimate to changes in major assumptions. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop a process that will allow full reporting of the changes in cash balance of the U.S. government. Specifically, the process should provide for reporting on the change in cash reported on the consolidated balance sheet, which should be linked to cash balances reported in federal agencies’ audited financial statements. Treasury will analyze cash transactions, and work with its Cash Policy area to achieve complete and consistent reporting of cash transactions, to provide full reporting of the changes in the cash balance of the U.S. government. Open. Treasury has not established and implemented effective processes and procedures for identifying and reporting all items needed to prepare the Statement of Changes in Cash Balance from Unified Budget and Other Activities. The Director of OMB should direct the Controller of OMB, in coordination with Treasury’s Fiscal Assistant Secretary, to work with the Department of Justice (Justice) and certain other executive branch federal agencies to ensure that these federal agencies report or disclose relevant criminal debt information in conformity with generally accepted accounting principles (GAAP) in their financial statements and have such information subjected to audit. OMB, working with Treasury, Justice, and certain other agencies, will continue working to address this recommendation. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to include relevant criminal debt information in the CFS or document the specific rationale for excluding such information. Treasury will disclose criminal debt information in the CFS if material as reflected in the agencies’ financial statements. See the status of recommendation No. 03-8. Open. 
Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to modify Treasury's plans for the new closing package to (1) require federal agencies to directly link their audited financial statement notes to the CFS notes and (2) provide the necessary information to demonstrate that all of the five principal consolidated financial statements are consistent with the underlying information in federal agencies' audited financial statements and other financial data. Per Treasury and OMB Treasury's current CFS compilation process provides for direct linkage from the 35 significant federal agencies' audited financial statements to most of the CFS principal statements and to the related note disclosures. However, additional work is needed related to the two budgetary principal financial statements. Treasury will take into account GAO comments, if any, as it develops its reconciliation methodology during fiscal year 2012. Treasury will also consider the AGA group's comments and recommendations to improve the compilation of the reconciliation statement and cash statement. See the status of recommendation No. 02-13 and No. 02-15. Per GAO Open. Treasury's process for compiling the CFS generally demonstrated that amounts in the Statement of Social Insurance and the Statement of Changes in Social Insurance Amounts were consistent with the underlying federal entities' financial statements and that the Balance Sheet and the Statement of Net Cost were also consistent with the 35 significant federal entities' financial statements prior to eliminating intragovernmental activity and balances. However, Treasury's process did not ensure that the information in the remaining three principal financial statements was fully consistent with the underlying information in the 35 significant federal entities' audited financial statements and other financial data. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to require that Treasury employees contact and document communications with federal agencies before recording journal vouchers to change agency audited closing package data. During fiscal year 2012, Treasury will fully implement and enforce its procedures to document communications to the agencies in the supporting documentation for journal vouchers. Open. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary to assess the infrastructure associated with the compilation process and modify it as necessary to achieve a sound internal control environment. Per Treasury and OMB During fiscal year 2011, with the assistance of its contractor, Treasury continued to make improvements to its internal control infrastructure. Treasury updated, and will revise and improve, its standard operating procedures (SOP) to document that key controls are in place at all critical areas of the CFS preparation process. Treasury will monitor and assess its efforts to determine its progress in achieving a sound internal control environment. Also, during fiscal year 2011, Treasury restructured the management of the organization responsible for the CFS compilation process to provide additional oversight and accountability over the year-end CFS process. In addition, during fiscal year 2012, Treasury plans to obtain additional personnel with financial reporting expertise, via details from other agencies, to assist with enhancements and improvements to internal controls. Per GAO Open.
The Director of OMB should direct the Controller of the Office of Federal Financial Management to consider, in order to provide audit assurance over federal agencies' closing packages, not waiving the closing package audit requirements in future years for any verifying agency, such as the Tennessee Valley Authority (TVA). For fiscal year 2011, OMB did not waive the closing package audit requirements for any verifying agency. In addition, over the last 3 fiscal years, TVA progressively moved closer to submitting its closing package by the required financial reporting deadline and is now submitting the closing package timely. Closed. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB's Office of Federal Financial Management, to establish effective processes and procedures to ensure that appropriate information regarding litigation and claims is included in the governmentwide legal representation letter. Treasury, in coordination with OMB and Justice, will work during fiscal year 2012 to establish effective processes and procedures to require that appropriate information regarding litigation and claims is included in the governmentwide legal representation letter. Open. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB's Office of Federal Financial Management, to develop a process for obtaining sufficient information from federal agencies to enable Treasury and OMB to adequately monitor federal agencies' efforts to reconcile intragovernmental activity and balances with their trading partners. This information should include (1) the nature and a detailed description of the significant differences that exist between trading partners' records of intragovernmental activity and balances, (2) detailed reasons why such differences exist, (3) details of steps taken or being taken to work with federal agencies' trading partners to resolve the differences, and (4) the potential outcome of such steps. Per Treasury and OMB During fiscal year 2012, Treasury will continue its intragovernmental collection, analysis, and reporting process, as enhanced in fiscal year 2011, for obtaining sufficient information from federal agencies to enable Treasury and OMB to adequately monitor federal agencies' efforts to reconcile intragovernmental activity and balances with their trading partners. The information obtained includes (1) the nature and a detailed description of the significant differences that exist between trading partners' records of intragovernmental activity and balances, (2) detailed reasons why such differences exist, (3) details of steps taken or being taken to work with federal agencies' trading partners to resolve the differences, (4) the potential outcome of such steps, and (5) additional information related to their intragovernmental differences that would allow Treasury to correct these differences within GFRS. This effort was successful during fiscal year 2011 in reducing the total amount of intragovernmental differences, on both a net and an absolute value basis. Per GAO Open. Treasury furthered its commitment to resolve differences in intragovernmental activity and balances, for example, by expanding focus groups' monitoring and outreach efforts that involved quarterly analysis and ongoing collaboration with entities to resolve intragovernmental differences.
However, we found that a significant number of CFOs continue to cite differing accounting methodologies, accounting errors, and timing differences for material differences with their trading partners. Some CFOs indicated that they did not know the reason for the differences. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to enhance and fully document all practices referred to in the standard operating procedure (SOP) entitled “Preparing the Financial Report of the U.S. Government” to better ensure that practices are proper, complete, and can be consistently applied by staff members. Treasury will document all of its current practices for preparing the CFS in the SOP. Open. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary to enhance Treasury’s checklist or design an alternative and use it to adequately and timely document Treasury’s assessment of the relevance, usefulness, or materiality of information reported by the federal agencies for use at the governmentwide level. Per Treasury and OMB Treasury addressed many of GAO’s concerns related to the checklist during fiscal year 2011, specifically the inclusion of disclosure items related to the principal financial statements and management’s discussion and analysis. Per GAO Closed. Over the past few years, Treasury has taken several actions to address this recommendation. To provide recommendations that are better aligned with the current status of remaining deficiencies related to this area, we have (1) closed this recommendation based on Treasury’s significant progress and (2) included in this report under “Timely Review and Approval of the Financial Reporting Disclosure Checklist” new recommendations for corrective actions for the remaining deficiencies. The Director of OMB should direct the Controller of OMB’s Office of Federal Financial Management, in coordination with Treasury’s Fiscal Assistant Secretary, to develop formal processes and procedures for identifying and resolving any material differences in distributed offsetting receipt amounts included in the net outlay calculation of federal agencies’ Statement of Budgetary Resources and the amounts included in the computation of the budget deficit in the CFS. Treasury will work with OMB to perform this analysis and to develop policies and procedures, as part of developing its reconciliation methodology, to resolve these differences. OMB and Treasury, as applicable, will continue their efforts to implement the completed methodology. Treasury will also be working with OMB to further identify and define all sources of distributed offsetting receipts reported to STAR/CARS. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop and implement effective processes for monitoring and assessing the effectiveness of internal control over the processes used to prepare the CFS. Treasury is in the process of reviewing its documentation of internal control procedures, as completed during fiscal year 2011, to determine what internal control design gaps remain and what further controls are needed related to new or revised procedures. See also the status of recommendation No. 04-6. Open. 
Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB’s Office of Federal Financial Management, to develop and implement alternative solutions to performing almost all of the compilation effort at the end of the year, including obtaining and utilizing interim financial information from federal agencies. Per Treasury and OMB Treasury is leading a subgroup with governmentwide participation on the OMB Circular No. A-136 subcommittee to determine what information can be obtained during the third and fourth quarters of fiscal year 2012 to facilitate the year-end CFS preparation process. Depending on the results of this effort, Treasury will consider what additional requirements related to third quarter information in fiscal year 2013 are needed to facilitate the year-end compilation effort. Per GAO Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to design, document, and implement policies and procedures to identify and eliminate intragovernmental payroll tax amounts at the governmentwide level when compiling the CFS. During fiscal year 2012, Treasury will evaluate the adequacy of the payroll tax amounts reported in the Monthly Treasury Statement by discussing the methodology for preparing this amount with the providing agency. If deemed adequate, policies and procedures will be modified to reflect the results of the analysis and document how this amount is identified and used when compiling the CFS. Open. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop, document, and implement processes and procedures for preparing and reviewing the Management’s Discussion and Analysis (MD&A) and “The Federal Government’s Financial Health: A Citizen’s Guide to the Financial Report of the United States Government” sections of the Financial Report of the U.S. Government (Financial Report) to help assure that information reported in these sections is complete, accurate, and consistent with related information reported elsewhere in the Financial Report. Per Treasury and OMB The Office of the Fiscal Assistant Secretary (OFAS) first implemented SOPs concerning the preparation of the MD&A and Citizen’s Guide for the fiscal year 2009 Financial Report. Since that time, instances of inaccuracies, inconsistency, and incompleteness have been consistently lower than in prior years. OFAS will update its SOPs as needed to reflect new reporting requirements and/or process improvements. Per GAO Closed. Over the past few years, Treasury, in coordination with OMB, has taken several actions to address this recommendation. To provide recommendations that are better aligned with the current status of remaining deficiencies related to this area, we have (1) closed this recommendation based on Treasury’s and OMB’s significant progress, and (2) included in this report under “Review and Approval of the Financial Report” new recommendations for corrective actions for the remaining deficiencies. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish and document criteria to be used in identifying federal entities as significant to the CFS for purposes of obtaining assurance over the information being submitted by those entities for the CFS. 
During fiscal year 2011, Treasury and OMB established and documented the criteria to identify all federal entities that are significant to the CFS. Closed. The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to develop and implement policies and procedures for assessing and documenting, on an annual basis, which entities meet the criteria established for identifying federal entities as significant to the CFS. During fiscal year 2011, Treasury used the significant entity criteria to assess which entities are considered significant to the CFS, and will perform this assessment on an annual basis. Open. During fiscal year 2011, Treasury did not properly implement its procedures for assessing which entities meet its criteria for identifying federal entities as significant to the CFS. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to enhance the SOP entitled "Analyzing Agency Restatements" to include procedures for analyzing the overall impact of entities' restatements on the CFS and documenting the analysis and related conclusion. During fiscal year 2011, Treasury complied with its enhanced procedures for analyzing the impact of entities' restatements on the CFS, documenting its analysis and related conclusions, and correctly incorporating the impact of the entities' restatements into the CFS. Closed. Recommendation The Secretary of the Treasury should direct the Fiscal Assistant Secretary to develop, implement, and document procedures for identifying, analyzing, compiling, and reporting all significant accounting policies and related party transactions at the governmentwide level. Per Treasury and OMB Treasury will fully implement its procedures for identifying, analyzing, compiling, and reporting all related party transactions at the governmentwide level in fiscal year 2012 by verifying the related party information received in the closing packages to the agencies' underlying audited financial statements, and will document this verification procedure. In addition, Treasury will monitor the work of FASAB's Reporting Entity Task Force as it relates to related party disclosures. Per GAO Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to enhance the SOP entitled "Statement of Social Insurance, Social Insurance Note, and Required Supplementary Information" to implement and document procedures for assuring the accuracy of staff's work related to preparing the social insurance information for the CFS. Per Treasury and OMB In fiscal year 2012, Treasury will comply with its enhanced Statement of Social Insurance SOP to document that all the social insurance information in the CFS is consistent with the agencies' social insurance information. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to implement and document procedures for assuring the accuracy of staff's work related to preparing the Schedule of Differences. In fiscal year 2011, Treasury improved the implementation and documentation of its procedures to help assure completeness and accuracy of documenting the inconsistencies between the amounts provided by the agencies. Closed. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to implement and document procedures for assuring the accuracy of staff's work related to performing analytical procedures.
During fiscal year 2012, Treasury will improve the implementation and documentation of its procedures to address remaining GAO issues regarding the accuracy of staff's work related to performing analytical procedures, specifically strengthening the explanations for all significant variances found during the year-end analyses. Open. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to enhance the applicable SOPs to include required steps for assuring conformity with SFFAS No. 33 and the consistency of amounts between the Statement of Net Cost and the Federal Employee and Veteran Benefits Payable (FEVBP) note disclosure. During fiscal year 2011, Treasury enhanced and implemented the applicable SOPs to verify the conformity with SFFAS No. 33 and the consistency of amounts between the Statement of Net Cost and FEVBP note disclosure. Closed. Treasury has taken several actions to address this recommendation. To provide recommendations that are better aligned with the current status of remaining deficiencies related to this area, we have (1) closed this recommendation based on Treasury's significant progress and (2) included in this report under "Review of Federal Entities' Financial Information for Inclusion in the CFS" a new recommendation for corrective actions for the remaining deficiencies. The Secretary of the Treasury should direct the Fiscal Assistant Secretary to implement and document procedures for assuring conformity with SFFAS No. 33 and the consistency of amounts between the Statement of Net Cost and the FEVBP note disclosure. See the status of recommendation No. 10-01. Closed. See the status of recommendation No. 10-01. The Director of OMB should direct the Controller of OMB to enhance its procedures for preparing the appendixes to include required steps to assure the accuracy and consistency of the accompanying information presented in the appendixes to the Financial Report related to the federal government's financial management. During fiscal year 2011, OMB reported this information on the newly released Performance.gov website and no longer included it in the Other Accompanying Information section of the Financial Report. Closed. The Director of OMB should direct the Controller of OMB to enhance its procedures for preparing the appendixes to include required steps to maintain documentation supporting the accompanying information presented in the appendixes to the Financial Report related to the federal government's financial management. See the status of recommendation No. 10-03. Closed.
Treasury, in coordination with OMB, is primarily responsible for preparing the Financial Report, which contains the CFS. Since GAO's first audit of the fiscal year 1997 CFS, certain material weaknesses and other limitations on the scope of GAO's work have prevented GAO from expressing an opinion on the CFS, exclusive of the Statement of Social Insurance (SOSI). Also, GAO was unable to express opinions on the 2011 and 2010 SOSI and the 2011 Statement of Changes in Social Insurance Amounts because of significant uncertainties, primarily related to the achievement of projected reductions in Medicare cost growth, reflected in these statements. As part of the fiscal year 2011 CFS audit, GAO identified material weaknesses and other control deficiencies in the processes used to prepare the CFS. The purpose of this report is to (1) provide details on new control deficiencies GAO identified related to the preparation of the CFS, (2) recommend improvements, and (3) provide the status of corrective actions taken to address GAO's prior recommendations in this area that remained open at the end of the fiscal year 2010 audit. During its audit of the fiscal year 2011 consolidated financial statements of the U.S. government (CFS), GAO identified new and continuing control deficiencies in the Department of the Treasury's (Treasury) and the Office of Management and Budget's (OMB) processes used to prepare the CFS. These control deficiencies contributed to material weaknesses in internal control over the federal government's ability to adequately account for and reconcile intragovernmental activity and balances between federal entities; ensure that the federal government's accrual-based consolidated financial statements were consistent with the underlying audited entities' financial statements, properly balanced, and in conformity with U.S. generally accepted accounting principles; and identify and either resolve or explain material differences between (1) components of the budget deficit that are used to prepare certain information in the CFS and (2) related amounts reported in federal entities' financial statements and underlying financial information and records. GAO identified new control deficiencies involving the need to develop or revise and implement written procedures for appropriate Treasury and OMB officials to (1) review and approve the drafts of the Financial Report of the United States Government (Financial Report) before they are provided to GAO and (2) better ensure that key federal entity personnel are actively involved in the process for preparing and reviewing the Financial Report; enhance procedures for timely review, approval, and use of the CFS disclosure checklist; develop procedures for pursuing indications that financial information provided by federal entities for inclusion in the CFS may not be in conformity with applicable accounting standards; enhance Treasury's intragovernmental data validation process; and enhance procedures for timely identifying, notifying, and obtaining closing packages from federal entities as they first become significant to the Financial Report. In addition, GAO found that various other control deficiencies identified in previous years' audits with respect to the CFS preparation continued to exist.
Specifically, of the 50 open recommendations from GAO's prior reports regarding control deficiencies in the CFS preparation process, 12 were closed and 38 remained open as of December 12, 2011, the date of GAO's report on its audit of the fiscal year 2011 CFS. GAO will continue to monitor the status of corrective actions taken to address the 10 new recommendations as well as the 38 open recommendations from prior years as part of its fiscal year 2012 CFS audit. GAO is making 10 recommendations (9 to Treasury and 1 to OMB) to address new control deficiencies. In commenting on GAO's draft report, Treasury and OMB generally concurred with GAO's findings.
FAA faces significant demands that will challenge its ability to operate both in the current environment and in what it expects to encounter in the coming decade. With the industry still attempting to recover from the most tumultuous period in its history, FAA's funding is constrained by lowered Airports and Airways Trust Fund receipts and increased pressure on the contribution from the General Fund. To meet its current and future operational challenges, FAA is facing demands for greater efficiency and accountability. FAA must also continue to meet demands for maintaining safety standards. Since 2001, the U.S. airline industry has confronted financial losses of previously unseen proportions. Between 2001 and 2003, the airline industry reported losses in excess of $20 billion. A number of factors, including the economic slowdown, a shift in business travel buying behavior, and the aftermath of the September 11, 2001, terrorist attacks, contributed to these losses by reducing passenger and cargo volumes and depressing fares. The industry has reported smaller losses since 2001, but still may not generate net profits for 2004. To improve their financial position, many airlines cut costs by various means, notably by reducing labor expenditures and by decreasing capacity through cutting flight frequencies, using smaller aircraft, or eliminating service to some communities. According to data from the Bureau of Transportation Statistics, large U.S. air carriers cut their operating expenses by $7.8 billion from 2000 through 2002. The drop in total large air carrier operating expenses stands in sharp contrast to increases in FAA's budget. (See Figure 1.) FAA's budget, which has increased from $9 billion in 1998 to $14 billion in 2004, will be under pressure for the foreseeable future. Over the past 10 years, FAA has received on average approximately 80 percent of its annual funding from the Airports and Airways Trust Fund (Trust Fund), which derives its receipts from taxes and fees levied on airlines and passengers. The downturn in passenger travel, accompanied by decreases in average yields, has resulted in lowered receipts into the Trust Fund. On average, domestic yields have fallen since 2000 and are at their lowest levels since 1987. As a result, the total amount of transportation taxes that were remitted to the Trust Fund declined by $2.0 billion (19.6 percent) between fiscal years 1999 and 2003 (in 2002 dollars). Contributions from the General Fund have averaged about 20 percent of FAA's budget since 1994, but total Federal spending is under increasing stress because of growing budget deficits. According to the March 2004 analysis from the Congressional Budget Office, the Federal deficit under the President's fiscal 2005 budget will be $358 billion. Clearly, a major challenge for FAA both now and into the future will be cost-cutting and cost control. Operating costs represent over half of FAA's budget. For 2005, the Administration has requested $7.8 billion for Operations. Because salaries and benefits make up 73 percent of that total, restraining the growth in operations spending will be extremely difficult, even with improvements in workforce productivity. Capital expenses (i.e., the Facilities and Equipment account) represent less than 20 percent of FAA's budget, but virtually none of the projects requested for funding for 2005 is expected to generate any savings in the Operations account.
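As a back-of-the-envelope check on the budget pressure described above, the short Python sketch below works only with the figures already cited (the $7.8 billion Operations request, the 73 percent salaries and benefits share, the roughly 80 percent Trust Fund share, and the $2.0 billion decline in Trust Fund tax receipts); it is illustrative arithmetic, not FAA budget data beyond those figures.

# Figures cited above, in billions of dollars unless noted.
operations_request_2005 = 7.8   # requested Operations funding for 2005
salaries_share = 0.73           # salaries and benefits share of Operations
trust_fund_share = 0.80         # approximate Trust Fund share of FAA funding
receipts_decline = 2.0          # decline in Trust Fund tax receipts, fiscal years 1999-2003

salaries_and_benefits = operations_request_2005 * salaries_share
print(f"Salaries and benefits alone: about ${salaries_and_benefits:.1f} billion "
      f"of the ${operations_request_2005:.1f} billion Operations request")
print(f"Trust Fund share of funding: {trust_fund_share:.0%}; "
      f"receipts decline, fiscal years 1999-2003: ${receipts_decline:.1f} billion")

The roughly $5.7 billion tied up in salaries and benefits, set against a funding source whose tax receipts fell by about $2.0 billion between fiscal years 1999 and 2003, is why restraining operations spending is characterized above as extremely difficult.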
Funds for airports’ capital development have more than doubled since 1998, rising from $1.6 billion (18.3 percent of the total) to a requested $3.5 billion (25.1 percent of the total) in 2005. Current funding levels are sufficient to cover much of the estimated cost of planned capital development. However, building new runways is not always a practicable way to increase capacity. FAA must decide how to increase capacity and service, as well as improve system efficiency and safety. FAA’s ability to operate efficiently and effectively – particularly regarding its air traffic control modernization projects — have been hampered over time by inadequate management of information technology and financial management controls. FAA’s ATC modernization projects have consistently experienced cost, schedule, and performance problems that we and others have attributed to systemic management issues. The effect has been extraordinary cost growth and a persistent failure to deploy systems. FAA initially estimated that its ATC modernization efforts could be completed over 10 years at a cost of $12 billion. Two decades and $35 billion later, FAA still has not completed key projects, and expects to need another $16 billion thru 2007, for a total cost of $51 billion. GAO has kept major FAA modernization systems on the watch list of high-risk federal programs since 1995. We believe that, in the current budget environment, cost growth and schedule problems with ongoing modernization efforts can have serious negative consequences: postponed benefits, costly interim systems, other systems not being funded, or a reduction in the number of units purchased. FAA recognizes that future U.S. air transport activity will likely place significant demands on its ability to keep the system operating. FAA’s most recent forecasts project significant increases in overall system activity by 2015. Along with increased movements of aircraft and passengers comes an increased workload for FAA, as well as demands for more efficient operations and/or an expansion of capacity. (See Table 1). Evidence of FAA’s inability to meet system capacity demands already exists from the experience at Chicago O’Hare earlier this year. To reduce flight delays, FAA asked American Airlines and United Airlines to reduce their peak scheduled operations by 7.5 percent by June 10. As Secretary Mineta has already recognized, unless system capacity expands, the nation will face “…more and more O’Hares as economy continues to grow, and as new technology and competition bring even greater demand.” It seems clear, however, that FAA’s Operational Evolution Plan, a few additional runways, and updating more controller workstations with the Standard Terminal Automation Replacement System (STARS) are not the answer to the system’s need for capacity. We cannot pave our way to the year 2025. Over the years, systematic management issues, including inadequate management controls and human capital issues have contributed to the cost overruns, schedule delays, and performance shortfalls that FAA has consistently experienced in acquiring its major ATC modernization systems. Historically, some of the major factors impeding ATC acquisitions included an ineffective budget process and an inability to provide good cost and schedule estimates. A number of cultural problems including widely diffused responsibility and accountability, inadequate coordination, and poor contract management/oversight also slowed the progress of individual projects. 
Problems within FAA's acquisition and procurement processes included an inability to obligate and spend appropriated funds in a timely manner, a complicated procurement and acquisition cycle, failure to field systems in a timely fashion, and an inability to field current technology systems. FAA lacked a means to strategically analyze and control requirements, and good cost and schedule estimates were often not effectively developed and integrated into acquisition plans. To address many of these issues, Congress passed legislation in 1995 exempting FAA from many of the existing federal personnel and procurement laws and regulations and directed the agency to develop and implement new acquisition and personnel systems. More recently, in 2000, the Congress and the administration together provided for a new oversight and management structure and a new air traffic organization to bring the benefits of performance management to ATC modernization. According to FAA, burdensome government-wide human capital rules impeded its ability to hire, train, and deploy personnel and thereby hampered its capacity to manage ATC modernization projects efficiently. In response to these concerns, Congress granted FAA broad exemptions from federal personnel laws and directed the agency to develop and implement a new personnel management system. Human capital reforms: Following the human capital exemptions granted by Congress in 1995, FAA initiated reforms in three primary areas: compensation and performance management, workforce management, and labor and employee relations. In the area of compensation and performance management, FAA introduced two initiatives—a new, more flexible pay system in which compensation levels are set within broad ranges, called pay bands, and a new performance management system intended to improve employees' performance through more frequent feedback with no summary rating. Both new systems required an exemption from laws governing federal civilian personnel management found in title 5 of the United States Code. In the area of workforce management, FAA implemented a number of initiatives in 1996 through the establishment of agency-wide flexibilities for hiring and training employees. In the area of labor and employee relations, FAA established partnership forums for union and nonunion employees and a new model work environment program. Other human capital initiatives have included restructuring FAA's organizational culture and implementing means to provide sustained leadership. Organizational culture: FAA issued an organizational culture framework in 1997 that attempted to address some of the vertical "stovepipes" that conflicted with the horizontal structure of ATC acquisition team operations. A key piece of this framework included the establishment of integrated product teams in an attempt to improve collaboration among technical experts and users. However, integrated product teams have not worked as intended. For example, competing priorities between two key organizations that were part of the Wide Area Augmentation System's integrated team ultimately negated its effectiveness and undermined its ability to meet the agency's goals for the system. Sustained leadership: Until former Administrator Garvey completed her 5-year term in 2002, FAA had been hampered by a lack of sustained leadership. During the first 10 years of the ATC modernization effort, the agency had seven different Administrators and Acting Administrators, whose average tenure was less than 2 years.
Such frequent turnover at the top contributed to an agency culture that focused on short-term initiatives, avoided accountability, and resisted fundamental improvements to the acquisition process. Nine years have passed since the agency received broad exemptions from laws governing federal civilian personnel management. While FAA has taken a number of steps since personnel reforms were implemented, it is not clear whether and to what extent these flexibilities have helped FAA to more effectively manage its workforce and achieve its mission. The agency did not initially define clear links between reform goals and program goals, making it difficult to fully assess the impacts of personnel reform. FAA has not yet fully implemented all of its human capital initiatives and continues to face a number of key challenges with regard to personnel issues. In our February 2003 report, we found that the agency had not fully incorporated elements that are important to effective human capital management into its overall reform effort, including data collection and analysis and the establishment of concrete performance goals and measures. Currently, the agency is still working to implement tools to keep accurate cost and workforce data. The new Air Traffic Organization has announced plans for establishing cost accounting and labor distribution systems, but they are not yet in place. More comprehensive cost accounting systems and improved labor distribution systems are necessary to maximize workforce productivity and to plan for anticipated controller retirements. More broadly, taking a more strategic approach to reform will allow the agency to better evaluate the effects of human capital initiatives, which it sees as essential to its ATC modernization effort. FAA established its current acquisition management system (AMS) in 1996 following acquisition reform. The agency has reported taking steps to oversee investment risk and to capture key information from the investment selection process in a management information system. It has also implemented guidance for validating costs, benefits, and risks. FAA has also taken steps to improve the management of its ATC modernization efforts. For example, it implemented an incremental, "build a little, test a little" approach that improved its management by providing for mid-course corrections and thus helping FAA to avoid costly late-stage changes. In the area of management controls, FAA has (1) developed a blueprint for modernization (systems architecture) to manage the development of ATC systems, (2) established processes for selecting and controlling information technology investments, (3) introduced an integrated framework for improving software and system acquisition processes, and (4) improved its cost-estimating and cost-accounting practices. Nonetheless, ATC modernization efforts continue to experience cost, schedule, and performance problems. FAA is not yet incorporating actual costs from related system development efforts in its processes for estimating the costs of new projects. Further, the agency has not yet fully implemented processes for evaluating projects after implementation in order to identify lessons learned and improve the investment management process. Reliable cost and schedule estimates are essential to addressing some of the ongoing problems with ATC acquisitions. In addition to controlling cost and schedule overruns, FAA needs to take concrete steps to identify and eliminate redundancies in the National Airspace System (NAS).
FAA must review its long-term ATC modernization priorities to assess their relative importance and feasibility in light of current economic constraints, security requirements, and other issues. The ongoing challenges facing air traffic control modernization efforts led Congress and the administration to create a new oversight and management structure through the new Air Traffic Organization (ATO) in order to bring the benefits of performance management to ATC modernization. The ATO was created by an executive order in 2000 to operate the air traffic control system. In the same year, Congress enacted legislation establishing the Air Traffic Services Subcommittee, a five-member board to oversee the ATO, and a chief operating officer to manage the organization. The ATO was designed to bring a performance management approach to ATC modernization efforts. The Air Traffic Services Subcommittee has made some initial efforts with regard to the establishment of the ATO. It has taken steps to focus on the structure of the ATC system, including reviewing and approving performance metrics for the ATO, establishing a budget, and approving three large procurements that FAA initiated. However, progress in establishing the organization has been slow, given that FAA received the mandate to establish the ATO nearly four years ago. FAA encountered difficulties finding a qualified candidate to take the position of chief operating officer, and did not fill the vacancy until June 2003. The final executive positions for the organization, including the Vice Presidents of Safety and Communications, were filled just last month. Key tasks for the ATO will include organizational restructuring, implementing effective financial management and cost-accounting systems, evaluating day-to-day business practices, and fostering growth with efficiency. Rapidly changing technology, limited financial resources, and the critical importance of meeting client needs will present significant challenges as the ATO works to evolve into a high-performing organization. To successfully meet the challenges of the 21st century, FAA must fundamentally transform its people, processes, technology, and environment to build a high-performing organization. Our work has shown that high-performing organizations have adopted management controls, processes, practices, and systems that are consistent with prevailing best practices and contribute to concrete organizational results. Specifically, the key characteristics and capabilities of high-performing organizations fall into four themes as follows: A clear, well-articulated, and compelling mission. High-performing organizations have a clear, well-articulated, and compelling mission, strategic goals to achieve it, and a performance management system that aligns with these goals to show employees how their performance can contribute to overall organizational results. FAA has taken its first steps toward creating a performance management system by aligning its goals and budgetary resources through its Flight Plan—a blueprint for action for fiscal years 2004 through 2008—and its fiscal year 2005 budget submission. In addition, the new ATO has published both its vision and its mission statement. Our past work has found that FAA's ability to acquire new ATC modernization systems has been hampered by its organizational culture, including employee behaviors that did not reflect a strong commitment to mission focus.
Given the central role that FAA's employees will play in achieving these performance goals and overall agency results, it is critical for them to both embrace and implement the agency's mission in the course of their daily work. In addition, our work has found that regularly communicating a clear and consistent message about the importance of fulfilling the organization's mission helps engage employees, clients, customers, partners, and other stakeholders in achieving higher performance. Strategic use of partnerships. Since the federal government is increasingly reliant on partners to achieve its outcomes, becoming a high-performing organization requires that federal agencies effectively manage relationships with other organizations outside of their direct control. FAA is currently working to forge strategic partnerships with its external customers in a number of ways. For example, the agency recently announced a program to create "express lanes in the sky" to reduce air traffic delays this spring and summer. It is also in the early stages of working with selected federal partners to develop a long-term plan for the national aerospace system (2025) and to leverage federal research funds to conduct mutually beneficial research. In addition, FAA has ongoing partnerships with the aviation community to assess and address flight safety issues (e.g., development of technology to prevent fuel tank explosions and to reduce the potential for aircraft wiring problems through development of a "smart circuit breaker"). However, our past work has shown that forging strategic partnerships with organizations outside of FAA can be difficult and time-consuming. For example, FAA's efforts to establish voluntary data sharing agreements with airlines—the Flight Operational Quality Assurance Program (FOQA)—spanned more than a decade, due in part to tremendous resistance from aviation community stakeholders who formed a rare alliance to oppose several of FAA's proposals. In addition, when attempting to increase airport capacity (e.g., new runways), FAA and airport operators have frequently faced opposition from the residents of surrounding communities and environmental groups. Residents are often concerned about the potential for increases in airport noise, air pollutant emissions, and traffic congestion. Focus on needs of clients and customers. Serving the needs of clients and customers involves identifying their needs, striving to meet them, measuring performance, and publicly reporting on progress to help assure appropriate transparency and accountability. To better serve the needs of its clients and customers, FAA published its Flight Plan, which provides a vehicle for identifying needs, measuring performance, and publicly reporting progress. Flight Plan includes performance goals in the areas of safety, greater capacity, international leadership, and organizational excellence, which are linked to the agency's budget, with progress monitored through a Web-based tracking system. However, over the years, FAA's efforts to meet client and customer needs have not always been successful, and some have had a long-lasting negative impact. FAA has had particular difficulty fielding new ATC modernization systems within cost, schedule, and performance goals to meet the needs of the aviation community.
Agency promises to deliver new capabilities to airlines via improvements to the ATC system led some airlines to install expensive equipment in their aircraft to position themselves to benefit from expected FAA services; however, when the agency failed to deliver on those promises, participating air carriers were left with equipment that they could not use and no return on their investment. In addition, shifting agency priorities have made it difficult for the aviation industry to anticipate future requirements and plan for them in a cost-effective manner (e.g., providing air carriers with adequate lead time to purchase new equipment and airframe manufacturers with lead time to incorporate changes into new commercial airplane designs). Furthermore, the absence of a fully functioning cost-accounting system makes it difficult for FAA to assess the actual cost of providing services to users of the National Airspace System. Strategic management of people. Most high-performing organizations have strong, charismatic, visionary, and sustained leadership, the capability to identify what skills and competencies the employees and the organization need, and other key characteristics, including effective recruiting, comprehensive training and development, retention of high-performing employees, and a streamlined hiring process. Toward this end, FAA has hired a Chief Operating Officer (COO) to stand up its new ATO. Our work on high-performing organizations has recommended use of the COO concept to facilitate transformational change in federal agencies and to provide long-term attention and focus on management issues. Furthermore, FAA has placed 78 percent of its workforce under a pay-for-performance system and implemented a training approach for its acquisition workforce that reflects four of the six elements used by leading organizations to deliver training effectively. However, it is too soon to know the extent to which these elements of effective training will be incorporated into the new ATO. Finally, FAA is currently conducting an Activity Value Analysis, a bottom-up effort to establish a baseline of ATO headquarters activities and their value to stakeholders. The results of this analysis are intended to help FAA's leadership target cost-cutting and cost-saving efforts. Despite FAA's efforts to date, our past work has found the agency's strategic management of human capital lacking. For example, organizational culture issues at FAA (e.g., its vertical, stovepiped structure) have discouraged collaboration among technical experts and users of the ATC system and contributed to the agency's inability to deliver new ATC systems within cost, schedule, and performance goals. One of the most significant early challenges facing the ATO will be negotiating a new contract with air traffic controllers; the current contract expires in September 2005. The DOT IG has repeatedly noted that, despite the importance of controllers' jobs, FAA simply cannot sustain the continued salary cost growth for this workforce, whose average salary rose from $72,000 in 1998 to $106,000 in 2003. Given the inextricable link between FAA's operating costs and its controller workforce, striking an acceptable balance between controllers' contract demands and controlling spiraling operating costs will be a strong determinant of the ATO's credibility both within FAA and across the aviation industry.
While FAA has taken some promising steps through its new ATO to restructure itself in a manner consistent with high-performing organizations, the agency still faces significant and longstanding systemic management challenges. These challenges must be overcome if FAA is to keep pace with ongoing changes in the aviation industry and transform itself into a world-class organization. Our work for more than two decades has shown that even modest organizational, operational, and technological changes at FAA can be difficult and time-consuming, all of which underscores the difficult road ahead for FAA and its new ATO. This concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact JayEtta Hecker at (202) 512-2834 or by e-mail at heckerj@gao.gov. Individuals making key contributions to this testimony include Samantha Goodman, Steven Martin, Beverly Norwood, and Alwynne Wilbur. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the last two decades, FAA has experienced difficulties meeting the demands of the aviation industry while also attempting to operate efficiently and effectively. Now, as air traffic returns to pre-9/11 levels, concerns have again arisen as to how prepared FAA may be to meet increasing demands for capacity, safety, and efficiency. FAA's air traffic control (ATC) modernization efforts are designed to enhance the national airspace system through the acquisition of a vast network of radar, navigation, and communication systems. Nine years have passed since Congress provided FAA with personnel and acquisition reforms. However, projects continue to experience cost, schedule, and performance problems. FAA's Air Traffic Organization (ATO) is its most current reform effort. Expectations are that the ATO will bring a performance management approach to ATC modernization. This statement focuses on three main questions: (1) What are some of the major challenges and demands that confront FAA? (2) What is the status of FAA's implementation of reforms and/or procedural relief that Congress provided? and (3) What are some of the critical success factors that will enable FAA to become a high-performing organization? A forecasted increase in air traffic coupled with budgetary constraints will challenge FAA's ability to meet current and evolving operational needs. The commercial aviation industry is still recovering from financial losses exceeding $20 billion over the past 3 years. Many airlines cut their operating expenses, but FAA's budget continued to rise. However, transportation tax receipts into the Airport and Airways Trust Fund, from which FAA draws the majority of its budget, have fallen by $2.0 billion (nearly 20 percent) since 1999 (in constant 2002 dollars). Cost-cutting and cost control will need to be watchwords for FAA from this point forward. FAA has implemented many of the reforms authorized by Congress 9 years ago, but achieved mixed results. Despite personnel and acquisition reforms that the agency contended were critical to modernizing the nation's air traffic control (ATC) system, systemic management issues continue to contribute to the cost overruns, schedule delays, and performance shortfalls. FAA's most current reform effort, the Air Traffic Organization (ATO) -- a new performance-based organization mandated by AIR-21 to operate the ATC system -- is just now being put in place. To meet its new challenges, FAA must fundamentally transform itself into a high-performing organization. The key characteristics and capabilities of high-performing organizations fall into four themes: (1) a clear, well-articulated, and compelling mission; (2) strategic use of partnerships; (3) focus on the needs of clients and customers; and (4) strategic management of people. FAA has taken some promising steps through its new ATO to restructure itself in a manner consistent with high-performing organizations, but still faces significant and longstanding systemic management challenges. Even modest organizational and operational changes at FAA can be difficult and time-consuming.
The U.S. homeland continues to face an uncertain, complex security environment with the potential for terrorist incidents and natural disasters that can produce devastating consequences. Ensuring an effective response will require that federal departments and agencies, states, and local governments conduct integrated disaster response planning and test these plans by exercising together. Exercises play an instrumental role in preparing the nation to respond to an incident by providing opportunities to test emergency response plans, evaluate response capabilities, assess the clarity of established roles and responsibilities, and improve proficiency in a simulated, risk-free environment. Short of performance in actual operations, exercises provide the best means to assess the effectiveness of organizations in achieving mission preparedness. Exercises provide an ideal opportunity to collect, develop, implement, and disseminate lessons learned and to verify corrective action taken to resolve previously identified issues. Sharing positive experiences reinforces positive behaviors, doctrine, and tactics, techniques, and procedures, while disseminating negative experiences highlights potential challenges in unique situations or environments or identifies issues that need to be resolved. According to the National Response Framework, well-designed exercises improve interagency coordination and communications, highlight capability gaps, and identify opportunities for improvement. There are various types of exercises, ranging from tabletop exercises, in which key personnel discuss simulated scenarios in informal settings, to full-scale exercises that include many agencies, jurisdictions, and disciplines and a "boots on the ground" response, such as firefighters decontaminating mock victims. DOD established the Office of the Assistant Secretary of Defense for Homeland Defense and Americas' Security Affairs to oversee homeland defense activities for DOD, under the authority of the Under Secretary of Defense for Policy, and, as appropriate, in coordination with the Chairman of the Joint Chiefs of Staff. This office develops policies, conducts analysis, provides advice, and makes recommendations on homeland defense, defense support of civil authorities, emergency preparedness, and domestic crisis management matters within the department. The assistant secretary assists the Secretary of Defense in providing policy direction to NORTHCOM and other applicable combatant commands to guide the development and execution of homeland defense plans and activities. This direction is provided through the Chairman of the Joint Chiefs of Staff. The Chairman of the Joint Chiefs of Staff, as principal military advisor to the President and Secretary of Defense, has numerous responsibilities relating to homeland defense and civil support, including providing advice on operational policies, responsibilities, and programs. Furthermore, the Chairman of the Joint Chiefs of Staff and the Joint Staff are responsible for formulating joint training policy and doctrine. The Joint Staff assists the Chairman by facilitating implementation of the Chairman's joint training programs, including the Joint Training System, the Chairman's sponsored exercise program, and the joint exercise program. NORTHCOM is the military command responsible for planning, organizing, and executing DOD's homeland defense and civil support missions within its area of responsibility—the continental United States (including Alaska) and territorial waters (see fig.
1). Homeland defense is the protection of U.S. territory, sovereignty, domestic population, and critical defense infrastructure against external threats and aggression. DOD is the primary federal agency responsible for homeland defense operations, such as air defense, and NORTHCOM is the combatant command responsible for commanding and coordinating a response to a homeland defense incident. To carry out its homeland defense mission, NORTHCOM is to conduct operations to deter, prevent, and defeat threats and aggression aimed at the United States. NORTHCOM's second mission is civil support, or defense support of civil authorities. Civil support is DOD support to U.S. civilian authorities, such as DHS, for domestic emergencies, both natural and man-made, and includes the use of DOD personnel—federal military forces and DOD's career civilian and contractor personnel—and DOD agency and component resources. Because these missions are complex and interrelated, they require significant interagency coordination. Civil support missions include domestic disaster relief operations for incidents such as fires, hurricanes, floods, and earthquakes. Such support also includes counterdrug operations and management of the consequences of a terrorist incident employing a weapon of mass destruction. DOD is not the primary federal agency for such missions (unless so designated by the President) and thus provides defense support of civil authorities only when (1) state, local, and other federal resources are overwhelmed or unique military capabilities are required; (2) assistance is requested by the primary federal agency; and (3) NORTHCOM is directed to do so by the President or the Secretary of Defense. See fig. 2 for the pathway for requesting DOD and NORTHCOM assistance during an incident. NORTHCOM conducts or participates in exercises to improve readiness to perform its assigned missions. The command annually conducts 2 large-scale exercises—Ardent Sentry and Vigilant Shield—and participates in over 30 smaller command, regional, state, and local exercises. Each Ardent Sentry and Vigilant Shield training event emphasizes one of the key missions while at the same time including elements of the other. Ardent Sentry emphasizes the civil support missions; Vigilant Shield emphasizes the homeland defense missions. The basis for NORTHCOM's exercises is DOD's Joint Training System. NORTHCOM's Training and Exercise Directorate is responsible for planning and executing joint training, exercises, and education programs to ensure NORTHCOM is prepared to accomplish its assigned missions. Due to the need to prepare for and conduct military operations to defend the United States and fight the nation's wars, DOD has developed an established, authoritative, time-tested process for planning, conducting, and evaluating exercises in order to test and improve preparedness to meet its wide range of critical missions. NORTHCOM uses DOD's Joint Training System as the basis to design, develop, and conduct exercises. The Joint Training System provides an integrated, requirements-based method for aligning training programs with assigned missions consistent with command priorities, capabilities, and available resources. The joint system consists of four phases, beginning with the identification of the critical capabilities required for assigned missions, proceeding through the planning and scheduling of training events and the execution and evaluation of required training, and ending with an assessment of training proficiency against required capabilities (see fig. 3).
This process is designed to ensure that an organization's training program is linked to the Joint Mission Essential Task List, the personnel executing the tasks are properly trained, and shortfalls in training are identified and corrected in order to improve readiness. The Joint Training Information Management System is an automated system specifically designed to assist users in managing elements of each of the four phases of the Joint Training System. During the execution phase, commanders and directors focus on executing and evaluating planned training events, which can be accomplished through academic training, exercises, or a combination of these activities. During the execution stage of the Joint Training System, the Joint Event Life Cycle provides a five-stage methodology for joint event development: design, planning, preparation, execution, and evaluation. For example, DOD components prepare for the execution of an exercise by conducting five conferences, such as the Concept Development Conference, where exercise and training objectives are discussed and scenarios are developed. Activities for the Joint Event Life Cycle are managed through the Joint Training Information Management System. Evaluating lessons learned and identifying issues for corrective actions are fundamental components of DOD's training and exercise process. The Chairman of the Joint Chiefs of Staff provides policy, direction, and guidance for DOD's Joint Lessons Learned Program. The objectives of this program are to collect and analyze observations from exercises and real-world events; disseminate validated observations and findings to appropriate officials; identify and implement corrective actions; and track corrective actions until reobserved in a subsequent exercise or event to ensure that the issue has been successfully resolved. Combatant commands, including NORTHCOM, execute lessons discovery, knowledge development, and implementation activities scaled to meet the command's requirements while supporting and feeding into the Chairman's Joint Lessons Learned Program by identifying lessons applicable across combatant commands and the services. The NEP was established in April 2007 under the leadership of the Secretary of Homeland Security to prioritize and coordinate federal, state, and local exercise activities and serves as the principal mechanism for examining the federal government's preparation to respond to an incident and for adopting policy changes to improve such preparation. The day-to-day staff-level coordination of the NEP is managed by the NEP Executive Steering Committee—a working group of the White House's Domestic Readiness Group Exercise and Evaluation Sub-Policy Coordination Committee—and is chaired and facilitated by FEMA's National Exercise Division. The steering committee is also responsible for framing issues and recommendations for the full coordination committee on exercise themes, goals, objectives, scheduling, and corrective actions. Figure 4 illustrates the major events and milestones of the NEP and NORTHCOM's exercise program, and table 1 provides information on related major documents. The NEP includes a series of national exercises projected on a 5-year exercise schedule. These exercises are organized into four tiers, with each tier reflecting different requirements for interagency participation (see fig. 5).
FEMA administers the NEP and maintains the Homeland Security Exercise and Evaluation Program—a capabilities- and performance-based exercise program—to provide standardized policy, methodology, and terminology for exercise design, development, conduct, evaluation, and improvement planning. DHS maintains policy and guidance for this program. Similar to DOD's Joint Training System, the Homeland Security Exercise and Evaluation Program uses an exercise life cycle with five phases: foundation, design and development, conduct, evaluation, and improvement planning. This program also provides document templates for exercise planning and evaluation and a collection of interactive, on-line systems for exercise scheduling, design, development, conduct, evaluation, and improvement planning, referred to as the Homeland Security Exercise and Evaluation Program Tool Kit (see fig. 6). The Tool Kit's components include a national online comprehensive tool that facilitates scheduling and synchronization of national-level, federal, state, and local exercises; a project management tool and comprehensive tutorial for the design, development, conduct, and evaluation of exercises; the Exercise Evaluation Guide Builder (Beta), an online application that enables users to customize exercise evaluation guides and templates; the Master Scenario Events List Builder (Beta), a tool that enables users to create customized master scenario events list formats by selecting from a list of data fields; and an online application that enables users to prioritize, track, and analyze improvement plans developed from exercises and real-world events. FEMA also has additional resources to support exercises. For example, exercise stakeholders can access FEMA's Lessons Learned Information Sharing system, an interagency Web site for posting lessons learned and sharing best practices, to learn about promising practices that could facilitate exercise activities. NORTHCOM's Commander's Training Guidance requires that NORTHCOM establish a training and exercise program consistent with the Joint Training System and establishes that training efforts and resources will be focused on two large-scale exercises annually. The Joint Training System requires, among other things, that an organization's training objectives be linked to its Joint Mission Essential Task List and include the use of the Joint Event Life Cycle for planning, conducting, and assessing exercises. We found that NORTHCOM has developed a comprehensive exercise program consistent with DOD's Joint Training System. For example, NORTHCOM uses the Joint Training Information Management System to link training objectives with its Joint Mission Essential Task List. NORTHCOM officials enter information on task performance of exercise participants into the Joint Training Information Management System to evaluate the extent to which the command is trained based on performance requirements in the Joint Mission Essential Task List. NORTHCOM also uses the Joint Training Information Management System to manage the Joint Event Life Cycle for its large-scale exercises, including planning exercise milestones and developing a time line that allows exercise planners to see where they are in the event life-cycle process. For example, NORTHCOM holds five planning conferences for each exercise, including a concept development conference, where exercise and training objectives are discussed and scenarios are developed.
We also found that NORTHCOM has conducted 13 large-scale exercises since it was created in 2002, generally including 2 exercises each year (see table 2). Vigilant Shield is held in the fall and focuses primarily on NORTHCOM’s homeland defense mission, and Ardent Sentry is generally conducted in the spring and focuses on defense support of civil authorities. NORTHCOM guidance outlines the postexercise documentation required to be completed for each exercise, including quick look, after-action, and exercise summary reports; provides a time line for the completion of these documents; and includes general direction that these documents follow the same focus areas as the collection management plan—the source document from which exercise analysts identify, examine, and recommend emerging issues and trends. We found that NORTHCOM has generally completed exercise summary reports for its exercises; however, neither NORTHCOM nor Joint Forces Command officials could locate an exercise summary report for Unified Defense 03. In addition, postexercise documentation is not consistently included on NORTHCOM’s portal or the Joint Training Information Management System. NORTHCOM guidance issued in June 2008 provides a time line for the completion of postexercise documents and has been applicable to 2 subsequent exercises–Ardent Sentry 08 and Vigilant Shield 09. According to the 2008 guidance, the exercise summary report is to be submitted to the NORTHCOM Commander within 90 days of completing an exercise. The Ardent Sentry 08 and Vigilant Shield 09 exercise summary reports were issued 99 days and 92 days, respectively, after the completion of each exercise. Overall, we reviewed exercise summary reports for 11 of NORTHCOM’s large-scale exercises that have taken place since 2003. Seven of the 11 exercise summary reports were issued within 100 days. Four of the reports were issued later than 100 days, and 1 of NORTHCOM’s earlier reports was issued in less than 30 days. NORTHCOM guidance states that exercise summary reports should provide the official description of the exercise, identify significant lessons learned, and be targeted toward a national audience. Guidance also requires that exercise summary reports follow the same focus areas as the collection management plan—the source document from which exercise analysts identify, examine, and recommend emerging issues and trends. We found that NORTHCOM’s exercise summary reports generally included an executive summary, training objectives, and the exercise’s major scenarios and events, but did not consistently include lessons learned, exercise strengths and weaknesses, or clear recommendations. The exercise summary reports that included a section on lessons learned lacked details. For example, 6 of the 11 exercise summary reports we reviewed included an identified lessons learned section, and just 1 of these 6 reports—Unified Defense 04—provided additional information on lessons learned beyond identifying the title of each observation and the status of the observation in the lessons learned management system. As discussed later in this report, access to this system is required in order to obtain any additional information on the lesson learned. We also found that NORTHCOM exercise summary reports have not followed the same focus areas as collection management plans. 
For example, none of the seven exercise summary reports for NORTHCOM exercises conducted since Hurricane Katrina in 2005 reported on the information identified in the collection management plans' focus areas. Inconsistencies in exercise documentation may be occurring because DOD and NORTHCOM guidance do not require a standard format or specific content for postexercise documentation. Although NORTHCOM uses other methods to document exercises, such as the Joint Training Information Management System, this system does not include a complete record of each exercise. For example, the Joint Training Information Management System does not include the lessons learned from an exercise. In addition, access to this system is generally limited to DOD officials. Recognizing the need for a complete and consistent record of each exercise, DHS's Homeland Security Exercise and Evaluation Program provides a template for exercise documentation, including format and content. NORTHCOM used this template for National Level Exercise 2-08, but does not use the template for its own exercises. Despite differences in the requirements and complexities of NORTHCOM's and DHS's exercise programs, the lack of a complete and consistent record of each exercise lessens the extent to which NORTHCOM can ensure it has trained to key focus areas. Further, it deprives the command of a key source of historical information upon which to base current and future assessments of exercises and a consistent venue for sharing lessons learned with interagency partners and states. NORTHCOM recognizes the importance of exercising with key partners in all its missions and that, in order to achieve its goal of being trained and ready to execute joint operations and ensure a seamless operating environment, NORTHCOM should maximize exercise participation with federal, state, and local agencies and National Guard units. NORTHCOM has included interagency partners, such as DHS, FEMA, and the U.S. Coast Guard, and several states in its large-scale exercises (see table 3). We found that 17 civilian federal agencies and organizations have participated to varying degrees in one or more of the seven large-scale NORTHCOM exercises that have occurred since Hurricane Katrina made landfall in August 2005. Seventeen states have participated in NORTHCOM exercises since that time, and 8 of these states—Arizona, California, Connecticut, Indiana, Michigan, Oregon, Rhode Island, and Washington—played a major role by having a portion of the exercise conducted in the state and having various state agencies and officials participate. For example, Indiana and Rhode Island played major roles in Ardent Sentry 07, which included scenarios involving the detonation of a 10-kiloton improvised nuclear device and a category III hurricane impacting the New England region, respectively. Both states established emergency operating centers and exercised large numbers of state emergency management personnel. State emergency management and National Guard officials told us that they participated in NORTHCOM exercises because they wanted to better understand (1) the capabilities that NORTHCOM could bring to the response to an incident and (2) the command and control issues that arise for troops in a state when NORTHCOM is involved. We previously reported that states' participation in NORTHCOM exercises helps to build relationships and improve coordination. Officials from all of the states we met with told us that they derived benefits from their participation in these exercises.
For example, state emergency management officials from three states told us that first-hand interaction with federal military forces and the opportunity to observe the federal response to an incident were beneficial. In addition, two state emergency management and National Guard officials told us that NORTHCOM officials were professional, well-trained, and helpful. Further, officials from five states told us that NORTHCOM provided beneficial resources, such as funds for travel to attend exercise planning conferences and contractor staff to help state officials prepare exercise scripts and injects. Finally, officials from two states told us that the benefits of working with NORTHCOM included gaining an understanding of the resources and capabilities that NORTHCOM can provide, as well as understanding how NORTHCOM coordinates its response through FEMA. NORTHCOM is also attempting to include states in exercises through the Vigilant Guard Program. The goal of the Vigilant Guard Program is to enhance National Guard and state emergency management agency preparedness to perform their homeland defense and defense support of civil authorities roles and responsibilities. It focuses on State Guard Joint Force Headquarters coordination with the state emergency management agency and Joint Task Force-State operations and involves multiple states and agencies. The program began in September 2004 and included one exercise in fiscal year 2005. The plan now is to conduct four exercises annually. NORTHCOM was given management responsibility for the Vigilant Guard exercises in 2007, although the National Guard Bureau retains responsibility for budgeting for these events. Two of the four annual Vigilant Guard exercises are to be linked to major combatant command exercises, usually NORTHCOM's Ardent Sentry and Vigilant Shield. States hosting a Vigilant Guard exercise determine the objectives for these events, and NORTHCOM provides support. Separate planning begins for these Vigilant Guard exercises prior to the related planning meetings for any linked NORTHCOM exercise. NORTHCOM's Ardent Sentry 09 is linked with a Vigilant Guard exercise in Iowa with scenarios including a train derailment and a chemical spill, an epidemic outbreak, and a terrorism incident. A key element in developing effective working relationships with all states is a well-thought-out and consistent process for including the states in planning, conducting, and assessing exercises. Without such a process, states may be unwilling to participate in future NORTHCOM exercises, impacting the seamless exercise of all levels of government and potentially affecting NORTHCOM's ability to provide support to civil authorities. We found that challenges remain that have resulted in inconsistencies in the way that NORTHCOM involves the states in its exercises. One of DOD's challenges is adapting its exercise system and practices to accommodate the coordination and involvement of other federal, state, local, and tribal agencies that do not have the same kinds of practices or level of planning effort. Differences in exercise culture stem from differences in missions, experience, authority, scope, and resources available to DOD, interagency partners, and states. DOD has an established, authoritative, time-tested process for planning, conducting, and evaluating exercises in order to test and improve preparedness to meet its wide range of critical missions.
Within DOD, training and exercises are considered a vital component of its overall mission of defending the nation's interests, and significant resources are devoted to these activities. In contrast, DHS, as the lead for interagency homeland security efforts, is a new agency and has faced challenges since it was created due to frequent reorganization and not being fully staffed. DHS and other civilian agencies and state and local governments have day-to-day missions and responsibilities that may take priority over exercises and often do not have the resources or experience to participate in or conduct exercises. For example, DOD exercises often are conducted 24 hours a day, 7 days a week and may last a week or more to enhance the realism of the exercise, while civilian agencies generally participate 8 hours per day, usually—according to NORTHCOM officials—during normal business hours, and do not exercise longer than a few days. Therefore, DOD exercises are generally longer in duration, more resource intensive, and involve more participants than other federal and state exercises. Furthermore, DOD views itself as the last line of defense and often exercises until resources are exhausted to fully assess capabilities and identify areas needing improvement. Civilian agencies and states may prefer not to exhaust resources during an exercise in order to avoid appearing unprepared for an incident and the associated political controversy. Another challenge that NORTHCOM faces is exercising with the various states and territories within its area of responsibility, given the legal and historical limits of the constitutional federal-state structure. The states have a wide range of civilian state agencies responsible for emergency management, some of which are headed by the Adjutant General of the state, who also heads the military department or National Guard, while others are completely separate entities (see table 4). Working with states has been the responsibility of the National Guard and is relatively new for a federal military command like NORTHCOM. NORTHCOM officials face challenges in dealing with the states' various civilian agencies and their differing emergency management structures, capabilities, and needs. For example, for Ardent Sentry 08 (linked with National Level Exercise 2-08), NORTHCOM planned a scenario involving a chemical bomb attack in Seattle, Washington, without consulting the state health department or civil support team—the agencies responsible for responding to a chemical or biological attack. State officials told us that NORTHCOM invited the health department to participate once state officials informed them that they should be involved, but that the scenario was already locked in without the input of this key participant. DOD officials told us that they rely on FEMA regional offices to provide information on state agencies. However, we believe that NORTHCOM officials should have determined whether all relevant agencies were included in the exercise when directly interacting with state officials during the scenario development and other planning conferences, before the scenarios were locked in. Washington emergency management officials told us that this affected the realism of the exercise. NORTHCOM also faces challenges in balancing its training objectives with those of state agencies and organizations. State and local governments seek to exercise their first responder capabilities before having their resources overwhelmed and needing to seek federal assistance.
On the other hand, NORTHCOM seeks to exercise its capability to provide support to civil authorities when local, state, and other federal resources are exhausted. This necessarily requires scenarios that exceed the states' capabilities and that stress DOD capabilities. Officials from four of the seven states we interviewed told us that NORTHCOM's exercise scenarios appeared unrealistic, overwhelmed their states too soon during the exercise, or did not allow states to fully exercise their own training objectives. For example, the scenario for Ardent Sentry 06 included multiple improvised explosive devices detonating over a 4-day period at various sites, such as the City of Detroit, St. Clair and Wayne Counties, Michigan, and Windsor, Ontario, Canada, with over 14,000 fatalities and a simultaneous pandemic flu outbreak in Michigan. State emergency management officials told us that such a large number of casualties would overwhelm state resources almost immediately, thereby precluding state and local responders from fully exercising their training objectives. Officials from these states told us that because they did not have the opportunity to exercise their own training objectives, they believed NORTHCOM was using them as a training tool. A NORTHCOM official told us that NORTHCOM needs the states to participate in exercises and, therefore, will be flexible to accommodate other organizations' training objectives; however, NORTHCOM ultimately has its own objectives to exercise. Officials from five of the seven states noted, for example, that they face budget and staffing limitations and that playing a major role in a NORTHCOM exercise often requires establishing a state emergency operations center with numerous staff and agencies involved. Given the expansive scenarios NORTHCOM uses to guide its exercises and the perception of half of the states we visited that this limits the benefits to them, we believe that the states may be less likely to expend scarce resources to participate in future NORTHCOM exercises. Inconsistencies with how NORTHCOM involves states in planning, conducting, and assessing exercises are occurring in part because NORTHCOM officials lack experience dealing with the various state agencies and emergency management structures. Inconsistencies are also occurring because NORTHCOM has not established an informed, consistent process for including states in its exercises. One aspect of this process is the way that NORTHCOM requests state participation in its exercises. Currently, NORTHCOM has various processes for requesting that other federal departments and agencies participate in its exercises, such as making the request through the Joint Staff. FEMA officials told us that requests for state participation in NORTHCOM exercises should be made through FEMA's regional offices. However, because NORTHCOM does not have an established process for requesting state participation, officials from the states we visited told us that NORTHCOM officials made requests informally and in a variety of ways, including through the National Guard Bureau, the state's National Guard, or FEMA's regional offices. In some cases, such as when the state emergency management agency and state National Guard have a close working relationship, this method has been effective for NORTHCOM. However, in other cases, this method has led to more limited exercises.
For example, emergency management officials from one state told us that NORTHCOM does not have full state representation if it only exercises with the state National Guard. In that case, NORTHCOM misses out on interaction with other key state emergency management officials and responders, which affects the realism of the exercise. Another aspect of the lack of a consistent process for requesting state participation is the potential to miss the opportunity to leverage the existing expertise of the National Guard Bureau and the defense coordinating officers located in each of the 10 FEMA regional offices. As we previously reported, the National Guard Bureau and defense coordinating officers have knowledge and experience in dealing with states in their region and may be a valuable resource for NORTHCOM officials during the planning and conduct of exercises. The three defense coordinating officers with whom we met told us that they participate in NORTHCOM exercises, but currently their role does not involve requesting state participation on behalf of NORTHCOM or providing state-specific information to NORTHCOM exercise officials. Without an informed and consistent process for including the states in planning, conducting, and assessing its exercises, NORTHCOM increases the risk that its exercises will not provide benefits for all participants, impacting the seamless exercise of all levels of government and potentially affecting NORTHCOM's ability to provide support to civil authorities. DOD and NORTHCOM guidance requires that NORTHCOM identify observations during the course of normal operations, exercises, and real-world events; capture the detail required to fully understand the problem; and share valid lessons learned and issues as widely as possible. NORTHCOM has been identifying observations, lessons learned, and needed corrective actions from its exercises and operations since the command was created in 2002. NORTHCOM collects and tracks observations through the Joint Lessons Learned Information System (JLLIS)—the automated official DOD system for managing and tracking exercise observations and recording lessons learned. As of April 2009, DOD exercise participants had entered 94 observations into JLLIS during NORTHCOM's most recent large-scale exercise, Vigilant Shield 09. Table 5 shows the observations entered into JLLIS or its predecessor for NORTHCOM's major exercises since 2006. The philosophy and approach of NORTHCOM's Lessons Learned Program have been largely the same since NORTHCOM published its first instruction for the program in 2003, although the requirement to reobserve corrective actions in a subsequent exercise or operation before closing them was not established until 2005. We found that NORTHCOM generally has a systematic lessons learned and corrective action program, based on clear procedures and a regular process. Observations are assigned to an office of primary responsibility within NORTHCOM and categorized as either a lesson learned—a positive finding—or an issue that requires corrective action. NORTHCOM's intent is to manage and resolve issues requiring corrective action at the lowest organizational level possible. This responsibility is generally within NORTHCOM's various directorates, component commands, or a Joint Task Force. Issues may be closed at the directorate level without external approval or oversight.
Broader scope or more sensitive issues requiring the involvement of more than one directorate or subcommand go into the formal corrective action board process for review, tracking, and approval as necessary. This formal process includes two boards—the Corrective Action Board and the Executive Corrective Action Board—to review and resolve issues. Figure 7 illustrates the flow of NORTHCOM's lessons learned and corrective action process. Joint Staff officials told us that about 10 Department of State officials have access to JLLIS, and DHS and Department of Energy officials have requested access for some of their staff. Other departments and agencies, such as the Federal Bureau of Investigation and the Department of Health and Human Services, have received briefings about JLLIS, but have not requested access. A Joint Staff official also told us that his office does not have enough staff to support a large number of non-common access card users requesting JLLIS access, and granting access would be a lengthy process due to the software and security requirements that must be addressed. NORTHCOM has posted a template on its portal that interagency partners and states can use to submit observations, which directorate officials can then enter into JLLIS. However, NORTHCOM's portal requires the same DOD-issued card to gain entry. In addition, NORTHCOM's lessons learned manager told us that no one has submitted observations using the template since it was put on the portal. This may be because the command has not actively publicized how to access the template or underscored the value to the command of obtaining observations from interagency partners and states. In response to our inquiries, in May 2009 NORTHCOM's lessons learned manager told us that the command is in the process of adding a link to DHS's Homeland Security Information Network so that interagency partners and states will be able to submit lessons learned, which can subsequently be transferred to JLLIS by NORTHCOM officials. In addition to collecting observations using JLLIS, NORTHCOM can obtain lessons learned from interagency partners and states during postexercise meetings. NORTHCOM conducts a review called a Hotwash within hours of completing the exercise so that exercise participants can discuss observations that significantly impacted their mission and recommend emergent themes for discussion during a subsequent review known as the facilitated after-action review. This review, generally held 7 days after the exercise is completed, provides an opportunity to present major issues to senior leaders and obtain the Commander's guidance for resolution. However, the extent to which interagency and state officials are attending and participating in NORTHCOM's postexercise meetings is unclear. Based on NORTHCOM's documentation, only two states (out of the last six major exercises) participated in a facilitated after-action review—California in Vigilant Shield 09 and Alaska in Ardent Sentry 07. Officials from three states in addition to California told us that they participated in the after-action meeting for the exercises in which they participated, but they may have participated in the regional or national-level meeting rather than NORTHCOM's. Officials from two of the seven states we met with told us that they did not attend NORTHCOM's postexercise reviews for the exercises in which they participated, at least partly due to staffing and budget limitations. NORTHCOM has also attempted to share lessons learned with other federal agencies and states by using FEMA's lessons learned sharing system.
For example, NORTHCOM has posted six reports onto FEMA's lessons learned system, including four recent exercise reports and two reports from operations in 2008. However, with one exception, the documents that NORTHCOM has made available on this system (1) include only lists of observations and, in some cases, record-tracking numbers from JLLIS and previous lessons learned systems, and (2) lack detailed information on individual lessons learned and corrective actions. Joint Staff and NORTHCOM officials told us that they do not post detailed information on the unclassified Lessons Learned Information Sharing system Web site because it is not adequately protected from the potential for unauthorized access to records. As a result, the security of the information cannot be assured. According to these officials, if an adversary nation or terrorist group gained access to this information, it may be possible for them to identify weaknesses in NORTHCOM's operations that could be exploited. In a recent exercise summary report, NORTHCOM stated that it will post lessons learned, best practices, and reports that may benefit its non-DOD mission partners in FEMA's Lessons Learned Information Sharing system, which the report describes as a secure, restricted-access information system. Because security concerns are preventing NORTHCOM from openly sharing all its unclassified lessons learned with its interagency partners and the states, the information NORTHCOM does provide may be of limited value for helping its partners improve the nation's disaster responsiveness. Because NORTHCOM is not fully involving other federal agencies and states in its lessons learned process, it is missing opportunities to learn lessons from an exercise. For example, officials from two states did not provide NORTHCOM with lessons learned from exercises because they did not attend the command's postexercise reviews. As a result, NORTHCOM risks the reoccurrence of potential problems that were not identified in its process. DOD and NORTHCOM guidance requires that issues requiring corrective actions be tracked and remain open until the solutions are completed and verified as effective—through training, operations, or exercises. We found that NORTHCOM directorates and subcommands are closing some issues prematurely, without confirming that corrective actions were made or verifying in a subsequent exercise or operation that the corrective action is effective. We reviewed unclassified records in JLLIS from NORTHCOM's previous six large-scale exercises and found that at least 77 of the 375 records, or about 20 percent, required corrective actions but were either closed prior to completing the corrective action or closed without verifying the effectiveness of the corrective action. For example, an observation was made during Ardent Sentry 07 that NORTHCOM did not have a process for addressing a foreign nation's offer of military-to-military assistance in a major disaster. The issue was validated and the corrective action developed, but the issue was closed by the originating organization before the corrective action could be verified or reobserved in a subsequent exercise. The record was closed even though the Executive Corrective Action Board directed that it remain open until an exercise of suitable scope to require significant military support was developed.
Another example of a record being closed without verification or re-observation is an observation made during Ardent Sentry 07 raising concerns that NORTHCOM personnel could arrive to assess a disaster site without alerting state officials that they would be coming. As a result, NORTHCOM developed a new Command Assessment Element Concept of Execution in July 2007 to promote better command and control and situational awareness; however, the issue was closed before the procedure could be observed in a subsequent exercise or operation to verify effective resolution. These issues are likely being closed without verification or re-observation because NORTHCOM Training and Exercise Directorate officials do not have oversight over the disposition of open issues that are resolved within directorates or are unable to give long-standing issues the sustained management attention needed to ensure resolution. NORTHCOM's lessons learned manager told us that the command does not have the staff necessary to oversee the actions on records handled within the other directorates. In addition, while the checkbox format in JLLIS makes it easy to see whether an issue is open, awaiting verification, or closed, entries made in JLLIS regarding corrective actions required, implementation date, and plan for verification are primarily in a narrative format, which may make the review and oversight process more time consuming. Without sufficient oversight, NORTHCOM cannot ensure that corrective actions are verified and reobserved in a subsequent exercise or operation before the issue is closed, so that the command knows the solution is effective. We recognize that such oversight should be addressed without significantly stressing NORTHCOM's staff. However, if NORTHCOM does not ensure that corrective actions are fully resolved, it increases the risk that these issues may occur again, possibly during crucial, real-world situations. This lack of oversight, coupled with the lack of a well-thought-out and consistent process for including the states in assessing exercises as discussed earlier in this report, further limits the knowledge gained and value of the exercise for all participants. Since the NEP Charter was approved in January 2007, NORTHCOM has participated in the major national exercise held under the NEP and taken steps to integrate its exercises into the national program. NEP guidance requires that heads of departments and agencies actively participate in tier I exercises and recommends participation in tier II exercises either through the National Exercise Simulation Center or as determined by agency leadership. Departments or agencies can participate in the NEP by combining an existing exercise with a NEP exercise, taking part in a tier II exercise sponsored by a different department or agency, or requesting to lead a tier II exercise to obtain greater interagency participation and support. DOD guidance requires that components participate in or lead planning efforts of NEP exercises as appropriate given the scenario or as tasked by the Assistant Secretary of Defense for Homeland Defense and America's Security Affairs or the Chairman of the Joint Chiefs of Staff. NORTHCOM's training guidance specifies the NEP exercises in which the command plans to participate during the following 2 fiscal years. NORTHCOM combined two of its large-scale exercises—Vigilant Shield 08 and Ardent Sentry 08—with major national exercises and has taken part in two additional exercises sponsored by other departments (see table 6).
For example, National Level Exercise 1-08, a tier I exercise, and NORTHCOM's Vigilant Shield 08 were conducted October 15-20, 2007, in parallel with Top Officials 4 and several other exercises. These exercises were linked together by the use of common scenarios and objectives intended to test existing plans, policies, and procedures to identify planning and resource gaps and develop corrective actions to improve preparedness against a weapons of mass destruction attack. NORTHCOM officials told us that they generally would like to participate in NEP exercises to achieve the benefits of exercising with interagency partners, but in some cases it is not beneficial to do so. For example, the officials told us NORTHCOM decided not to combine Ardent Sentry 09 with National Level Exercise 09—a tier I exercise scheduled for July 2009—because the objectives and scenarios for the exercises did not meet their training needs. Although NORTHCOM officials will conduct Ardent Sentry 09 separately, they are using the National Exercise Simulation Center—FEMA's newly established training and exercise facility—to provide a test run for the center's use in National Level Exercise 09. DOD and NORTHCOM have taken steps to integrate exercises with the National Exercise Program, including posting the command's exercises on DHS's National Exercise Schedule, successfully applying to lead a tier II exercise, and publishing guidance on integration with the NEP. The NEP Implementation Plan recommends that federal departments and agencies post exercises on the NEP's National Exercise Schedule so that exercises and planning meetings can be synchronized across the federal government. NORTHCOM has posted its annual Ardent Sentry and Vigilant Shield exercises for the first 4 of 5 fiscal years on the national schedule, while FEMA's National Exercise Division has posted exercises for the first 3 fiscal years. As of June 2009, neither the Joint Staff nor any other combatant command had posted exercises on the national schedule. In addition, NORTHCOM recently requested and was granted approval to lead Vigilant Shield 10 as a tier II exercise scheduled for November 2009. Vigilant Shield 10 should have greater interagency participation than it would have received as a tier III exercise, since federal departments and agencies will be required to participate, at a minimum, through the National Exercise Simulation Center. As of May 2009, the participants of Vigilant Shield 10 include DHS and the Departments of Justice, Energy, Transportation, Health and Human Services, and Veterans Affairs; the Joint Chiefs of Staff and U.S. Joint Forces Command; and other government and nongovernment organizations. This exercise will be the first time that NORTHCOM will share planning responsibilities with FEMA's National Exercise Division. This exercise will also be linked to a Canadian government exercise to demonstrate readiness for the 2010 Olympics in Vancouver. NEP guidance includes policies and tools for the design, planning, conduct, and evaluation of exercises—known as the Homeland Security Exercise and Evaluation Program—which creates a common exercise policy and consistent terminology for exercise planners and serves as the foundation of NEP exercises.
FEMA requires that entities receiving homeland security grant funding for their exercises, such as state and local governments, adhere to specific Homeland Security Exercise and Evaluation Program guidance for exercise program management, design, conduct, evaluation, and improvement planning. We reviewed key program documents, such as the Implementation Plan, and found that this guidance is unclear about the extent to which federal agencies should use the Homeland Security Exercise and Evaluation Program. For example, the Implementation Plan states that the NEP does not displace a preexisting exercise program, and none of the NEP guidance requires that federal agencies use the Homeland Security Exercise and Evaluation Program. However, the Implementation Plan states that the Homeland Security Exercise and Evaluation Program will serve as the doctrinal foundation for NEP exercises. FEMA officials told us that federal agencies should use this program when participating in tier I and tier II exercises so that the various exercise participants have consistency when planning, conducting, and assessing exercises. We found that NORTHCOM generally has used DOD's Joint Training System guidance for planning NEP exercises, defining capabilities, and reporting exercise results. NORTHCOM officials told us that the Joint Training System is consistent with the NEP and served, in part, as the basis for the Homeland Security Exercise and Evaluation Program. We found that these sets of guidance have similar processes but use different methods for defining the tasks and capabilities that are performed and validated in an exercise. The primary differences between these sets of guidance are that (1) DOD's task list, which serves as the basis for its exercises, includes tasks that are specific to military missions, such as troop movements and sealifts; (2) DHS guidance provides more detailed criteria for postexercise documentation, such as content and format; and (3) DHS's planning cycle is generally shorter—9 to 15 months versus 12 to 18 months for DOD. (See table 7.) See app. II for a more detailed comparison. According to both the Homeland Security Exercise and Evaluation Program and Joint Training System guidance, it is important to link tasks and capabilities with exercise objectives to ensure that participants exercise or train as they would perform in a real-world event. The Homeland Security Exercise and Evaluation Program recommends using DHS's Target Capabilities List or the Universal Task List to formulate the tasks and capabilities that underlie the objectives for an exercise. These lists describe the capabilities government entities need and the tasks they are expected to perform to prevent, protect against, respond to, and recover from incidents of national significance. In contrast, NORTHCOM derives its tasks and capabilities from the Universal Joint Task List to formulate Joint Mission Essential Tasks. According to NORTHCOM guidance, the command is required to include in its exercises the Joint Mission Essential Tasks associated with its Joint Training Plan, which is updated annually. These tasks are identified by joint force commanders as most essential to their assigned or anticipated missions, with priority given to their wartime requirements.
We found that DOD’s operating instruction for participation in the NEP does not provide guidance on how DOD components should incorporate tasks and capabilities derived from sources recommended by the Homeland Security Exercise and Evaluation Program when participating in NEP exercises. The primary differences between DHS’ and DOD’s lists are that DOD’s task lists generally incorporate more descriptive metrics and criteria to assess performance and include tasks that are specific to military missions, such as troop movements and sealifts. In some cases, state National Guards officials have had to translate DOD task lists into DHS tasks lists when working with their civilian partners and vice versa. We also found that neither DOD’s nor NORTHCOM’s guidance for developing postexercise reports includes the same degree of specificity recommended in the Homeland Security Exercise Evaluation Program. For example, both sets of exercise guidance require postexercise reports; however, the Homeland Security Exercise Evaluation Program provides templates and guidance for these documents, including requiring an improvement plan to clearly outline the corrective actions needed, which are not included in DOD’s or NORTHCOM’s guidance. In addition, NORTHCOM’s exercise summary reports for National Level Exercise 1-08 and 2-08 did not contain all information recommended by the Homeland Security Exercise Evaluation Program. For example, NORTHCOM did not include the recommended analyses regarding the capabilities and tasks tied to the exercises’ objectives. As stated above, we reviewed NEP guidance such as the Implementation Plan and found it does not clearly state the extent to which federal agencies are required to follow the Homeland Security Exercise and Evaluation Program. As a result of this unclear guidance, we found that agency officials have varying interpretations of the requirements. For example, a DOD and a Joint Staff official told us that NEP guidance does not require agencies to use the Homeland Security Exercise Evaluation Program even for NEP exercises. Therefore, NORTHCOM uses the Joint Training System rather than the Homeland Security Exercise and Evaluation Program as the basis for planning, conducting, and assessing exercises. However, officials from FEMA’s National Exercise Division told us that all participating agencies should use the Homeland Security Exercise and Evaluation Program guidance for tier I and tier II NEP exercises. FEMA officials stated that federal departments and agencies should be held accountable for meeting key requirements, but that FEMA’s authority is limited to guiding, supporting, and coordinating with, but not directing other federal departments and agencies to comply with guidance. As we have previously reported, we believe that FEMA’s expanded leadership role under the Post-Katrina Act provides FEMA opportunities to instill a shared sense of responsibility and accountability on the part of all agencies. Neither DOD nor NORTHCOM guidance specifically addresses the extent to which DHS’s Homeland Security Exercise and Evaluation Program planning and documentation requirements should be followed. We recognize that NORTHCOM and DOD must meet their own mission and exercise requirements and the Joint Training System may be best suited for NORTHCOM’s exercises; however, all of the states we visited use Homeland Security Exercise and Evaluation Program guidance. 
We found that having differing sets of guidance, such as DOD's and DHS's capability and task lists and postexercise documentation requirements, makes planning and assessing exercises more difficult and potentially limits the benefits for participating states. For example, officials from three states we visited told us that using NORTHCOM's exercise planning and reporting requirements rather than Homeland Security Exercise and Evaluation Program guidance has made the processes more difficult. Further, the Defense Science Board found that inconsistent approaches to the development and content of postexercise documentation may affect the ability of organizations to fully learn lessons identified in exercises. We have also reported that when other federal entities carry out processes that do not specifically follow the Homeland Security Exercise and Evaluation Program, FEMA managers do not have the necessary data to measure progress, identify gaps in preparedness, and track lessons learned—key objectives of the NEP. We believe that achieving national preparedness requires a whole-of-government approach and is a shared responsibility among federal, state, local, and tribal governments and organizations, requiring an integration of their various standards, policies, and procedures into the national system. There is an increasing realization within the federal government that an effective, seamless national response to an incident requires a strong partnership among federal, state, and local governments and organizations, including integrated planning, training, and the exercise of those plans. For DOD, the effective execution of civil support, especially amid simultaneous, multijurisdictional disasters, requires ever-closer working relationships with other departments and agencies and at all levels of government. NORTHCOM's use of DOD's Joint Training System has provided a robust process for planning and conducting exercises to improve preparedness to achieve its homeland defense and civil support missions, and its efforts to involve its interagency partners and the states in exercises have helped to reduce uncertainty about the process for responding to an incident. However, without a consistent record of what has occurred during an exercise that is accessible by all exercise participants, including those from other federal agencies and states, NORTHCOM cannot ensure that it has met internal standards, trained to key focus areas, or compared the goals and results of exercises over time. Further, a key element to developing effective working relationships with all states is a consistent process for including states in planning and executing NORTHCOM's exercises that incorporates state-specific knowledge and information. By coordinating consistently with organizations, such as FEMA and the National Guard Bureau, that have knowledge of and experience in dealing with states, NORTHCOM can improve the value and effectiveness of exercises for all of the participants involved. Exercises provide an opportunity to enhance preparedness by collecting, developing, implementing, and disseminating lessons learned and verifying corrective action taken to resolve previously identified issues. NORTHCOM's clear procedure for capturing observations in JLLIS and identifying issues needing corrective action has helped to improve its capabilities to complete its missions.
However, by not providing federal agencies and states greater access to its lessons learned process, NORTHCOM will lose opportunities to learn valuable lessons from an exercise, particularly observations from the states that could enhance coordination and build more effective interagency relationships. Further, the risk that issues may reoccur will be increased, particularly when interagency partners are not aware of key issues or concerns that might impede the government's overall responsiveness to a natural or man-made disaster. In addition, when corrective actions remain open until fully implemented and verified in a subsequent exercise, NORTHCOM will have greater assurance that issues raised during exercises are being adequately addressed and the corrections are in fact solving the problems identified. NEP policies and tools for the design, planning, conduct, and evaluation of exercises are intended to create a common exercise policy and consistent terminology for exercise planners across all levels of government to improve the federal government's ability to evaluate national preparedness. The steps DOD and NORTHCOM have taken to integrate exercises with the NEP have helped DHS to prioritize and coordinate federal exercise activities and enhance the federal government's ability to respond to an incident. We recognize that NORTHCOM and DOD must meet their own mission and exercise requirements and that the Joint Training System may be best suited to meet the high standards required for NORTHCOM's exercises. However, achieving national preparedness requires shared responsibility among federal, state, and local governments and organizations and an integration of their various standards, policies, and procedures into the national system. We also recognize that the NEP continues to evolve and become more useful to federal and state partners. However, in the absence of clear guidance from DHS on the extent to which agencies should use Homeland Security Exercise and Evaluation Program planning and documentation guidance, DOD should ensure that its components clearly understand when the use of this guidance is appropriate so that both DOD and its exercise partners, such as other federal agencies and states, derive the most benefits from exercises. This, in turn, contributes to the ultimate success of a whole-of-government approach to national preparedness. To improve NORTHCOM's consistency with exercise documentation, we recommend that the Secretary of Defense direct NORTHCOM's Commander to develop guidance with specific criteria for postexercise documentation, particularly the Exercise Summary Report as the official exercise record, including the content and format of such reports, so that the results and lessons learned of exercises can be easily reviewed and compared. To improve NORTHCOM's involvement of interagency partners and states in its exercises, we recommend that the Secretary of Defense, in consultation with the Chairman of the Joint Chiefs of Staff, the Commander, U.S.
Northern Command, and other relevant combatant commanders, coordinate with the Department of Homeland Security and the Federal Emergency Management Agency to develop guidance and procedures for consistently involving state officials in planning, executing, and assessing exercises that incorporate relevant state-specific information, and that the Secretary of Defense direct NORTHCOM's Commander to develop a training plan for NORTHCOM headquarters staff on state emergency management structures and relevant issues related to working with civilian state and local emergency management officials. To improve NORTHCOM's involvement of interagency partners and states in its lessons learned and corrective action process and its management of corrective actions, we recommend that the Secretary of Defense direct: NORTHCOM's Commander to establish and publicize valid and easily accessible procedures for non-DOD exercise participants to submit observations relevant to NORTHCOM, such as placing a template on NORTHCOM's publicly accessible Web site or DHS's Homeland Security Information Network, so that NORTHCOM officials have a clear, secure avenue to obtain observations and assess potential lessons that originate with its exercise partners; the Chairman of the Joint Chiefs of Staff, in consultation and coordination with DHS, to either resolve information assurance issues so that the combatant commands, including NORTHCOM, can post Exercise Summary Reports with lessons learned and observations from NEP exercises on DHS's Lessons Learned Information Sharing system to make them easily accessible to interagency partners and states or establish an alternative method to systematically collect and share lessons learned; and the Chairman of the Joint Chiefs of Staff to revise the joint lessons learned operating instruction to include procedures to ensure that appropriate corrective actions are implemented and verified in a subsequent exercise or operation before being closed and that the reasons for closure are documented. Possible procedures might include adding a verification checkbox on JLLIS's issue management page or requiring that the directorates and subordinate commands within the combatant commands provide a status report when a correction is implemented and reobserved or closed for reasons other than re-observation. To improve NORTHCOM's ability to work with interagency partners on major national exercises and further achieve the objectives of the NEP, we recommend that the Secretary of Defense revise the instruction on DOD participation in the NEP and/or direct the Chairman of the Joint Chiefs of Staff to revise the operating instruction regarding DOD participation in the NEP to provide the general conditions under which the combatant commands are expected to follow the Homeland Security Exercise and Evaluation Program planning and documentation requirements or to modify DOD's Joint Training System for those civil support exercises. In comments on a draft of this report, DOD generally agreed with the intent of our recommendations and discussed steps it is taking or plans to take to address these recommendations. DOD also provided technical comments, which we have incorporated into the report where appropriate. DHS also reviewed a draft of this report and provided technical comments, which we have incorporated where appropriate.
In response to our recommendation that NORTHCOM develop guidance with specific criteria for postexercise documentation to allow the results and lessons learned of exercises to be reviewed and compared, DOD agreed that such information should be provided in a standardized format that can be easily accessed and understood by authorized organizations which might benefit from such knowledge. DOD cautioned that any actions in response to this recommendation must accommodate constraints regarding classified information. We agree that properly securing classified information is a critical responsibility and believe this can easily be accomplished without undermining the intent of the recommendation, which is to improve the consistency and completeness of formal exercise documentation and thereby its overall value. In response to our recommendation that DOD coordinate with DHS and FEMA to develop guidance and procedures for consistently involving state officials in planning, executing, and assessing exercises that incorporate relevant state-specific information, DOD agreed that better coordination for interfacing with state officials can be achieved. DOD also pointed out that NORTHCOM continues to expand its efforts to work through defense coordinating officers, existing state National Guard relationships, and FEMA regional headquarters partners to ensure that states are able to benefit from participation in DOD-sponsored exercises. However, DOD also said that while NORTHCOM has continuously engaged and encouraged state participation in NORTHCOM-sponsored exercises, the primary audience for such training is and must remain NORTHCOM. DOD also suggested that our recommendation has applicability to other federal interagency partners and that the issue should be addressed to the Exercise and Evaluation Sub-Interagency Planning Committee as a revision to the National Exercise Program Implementation Plan. As our report indicates, we agree that NORTHCOM has sought to engage and involve the states in its comprehensive exercise program. NORTHCOM plans for and conducts major exercises both inside and outside the construct of the National Exercise Program. Particularly for NORTHCOM- sponsored exercises focused on the command’s civil support mission, the effective involvement of and interaction with state and other federal partners is a critical component of improving and maintaining NORTHCOM’s preparedness. For NORTHCOM’s participation in national- level exercises, the preparedness goals and objectives of all participants are equally important. We believe that in developing procedures to improve coordination with the states, DOD can (1) avoid situations where exercises meant to improve preparedness are not fully coordinated with the necessary partners; (2) capitalize on the structures and organizations it already has in place, such as the defense coordinating officers and relationships with state National Guard headquarters; and (3) coordinate with DHS and FEMA to improve the military-civilian interface. With regard to the latter, the Exercise and Evaluation Sub-Interagency Planning Committee may indeed be one of the venues at which DOD can effectively coordinate with its interagency partners. 
With respect to our recommendation that NORTHCOM develop a training plan for NORTHCOM headquarters staff on state emergency management structures and relevant issues related to working with civilian state and local emergency management officials, DOD agreed and noted that headquarters training is required for all newly assigned NORTHCOM staff. Further, DOD noted that NORTHCOM sponsors three versions of its defense support of civil authorities seminar that are targeted to staff at different seniority levels. We agree that NORTHCOM has continued to improve the level of awareness and training it provides staff on the complexities of providing defense support to civilian authorities in the United States. However, this does not fully address our recommendation. While training on the general procedures of the national response framework, the nature of state-federal government relations, and DOD's proper role in providing military support to civil authorities is invaluable for NORTHCOM staff, we continue to believe that this should be supplemented by the kinds of state-specific information that would provide both exercise officials and all other staff with an understanding of the key differences between states. These differences are possibly as numerous as the number of states and play a role in all routine interactions between the individual states and DOD officials, as well as in effective coordination for exercise planning and for coordination during a natural disaster or some other no-notice incident requiring defense support to civil authorities. DOD agreed with our recommendation that NORTHCOM establish and publicize valid and easily accessible procedures for non-DOD exercise participants to submit observations relevant to NORTHCOM, such as placing a template on NORTHCOM's Web site or DHS's Homeland Security Information Network, so that NORTHCOM officials have a clear, secure avenue to obtain observations and assess potential lessons that originate with its exercise partners. DOD indicated that collecting exercise information from all perspectives would provide additional opportunities to improve NORTHCOM's ability to accomplish its mission tasks. DOD also agreed with our recommendation that it work with DHS to either resolve information assurance issues so that NORTHCOM can post Exercise Summary Reports with lessons learned on DHS's Lessons Learned Information Sharing system or establish an alternative method to systematically collect and share lessons learned. DOD cautioned that while wide dissemination of information approved for release would be of great benefit to homeland security entities, it continues to adhere to the Joint Training System and cannot mandate that DHS alter its Lessons Learned Information Sharing system to make accommodations. DOD also noted that it has procedures in place to allow specifically cleared individuals from outside DOD access to information contained in Exercise Summary Reports. We agree that DOD cannot mandate alterations to the Lessons Learned Information Sharing system. We also agree that the Joint Training System should remain the chief guidance for the conduct of DOD exercises. However, we continue to believe that in working with DHS on the proper level and mode of information sharing, DOD may be able to improve the dissemination of relevant exercise-related information to all appropriate officials.
DOD agreed with our recommendation that the Chairman of the Joint Chiefs of Staff revise the joint lessons learned operating instruction to include procedures to ensure that appropriate corrective actions are implemented and verified in a subsequent exercise or operation before being closed and that the reasons for closure are documented. DOD indicated that Chairman of the Joint Chiefs of Staff Instruction 3150.25D could be expanded to provide more guidance and that the Joint Lessons Learned Information System could be updated to provide a technological solution to address the issue once the process and procedures are in place. DOD also indicated that the process of verifying corrective action and closing issues will become more effective with the modifications it outlined in response to the recommendation. In response to our recommendation that DOD revise guidance on DOD participation in the National Exercise Program to provide the general conditions under which the combatant commands are expected to follow the Homeland Security Exercise and Evaluation Program planning and documentation requirements or to modify DOD's Joint Training System for those civil support exercises, DOD recognized the importance of ensuring effective interaction with interagency partners for homeland security-related exercises. However, DOD noted that the National Exercise Program Implementation Plan contains language, placed there at DOD's insistence, that establishes a process to resolve doctrinal differences during exercise planning. DOD indicated that together with provisions in the implementation plan establishing the administration, scope, and hierarchy of multiagency homeland security exercises and the 5-year National Exercise Program schedule, this should address our recommendation. DOD further noted that the Joint Training System remains the Secretary of Defense's guidance on DOD exercises and that the National Exercise Program Implementation Plan stipulates that individual department or agency exercise programs should not be replaced. We agree that the Joint Training System is and should be DOD's primary guidance for ensuring that DOD components train and exercise according to standards. However, because interagency exercises are becoming an ever larger part of the national preparedness effort, and to the extent that effective exercise planning is bolstered by common procedures, our recommendation is intended to help DOD clarify for its components the circumstances under which the specific planning and documentation requirements of the Homeland Security Exercise and Evaluation Program can be followed without detriment to DOD's high training and exercise standards or compromise of the Joint Training System. DOD's written comments are reprinted in appendix III. We are sending copies of this report to the Secretary of Defense, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or dagostinod@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. In conducting this review, we generally focused our scope on U.S. Northern Command's (NORTHCOM) large-scale exercises conducted since Hurricane Katrina made landfall in August 2005.
To determine the extent to which NORTHCOM's exercise program is consistent with Department of Defense (DOD) training and exercise requirements and includes relevant exercise partners, we evaluated NORTHCOM's compliance with exercise reporting and documentation requirements established in DOD and command guidance. We reviewed available guidance to determine requirements for timing, format, and content. We also compared these requirements with guidance contained in the Department of Homeland Security's (DHS) Homeland Security Exercise and Evaluation Program documentation. We reviewed exercise documentation for all large-scale exercises the command has performed since it was established in 2002 to determine the extent to which the command complied with the guidance. We also performed an assessment of the experiences and level of participation of some interagency organizations and states in NORTHCOM's large-scale exercises. We initially met with Nevada officials who participated in a NORTHCOM exercise prior to Hurricane Katrina—Determined Promise 03—to provide context on the extent to which changes may have been made to NORTHCOM's exercise program and to help develop our state selection methodology. We selected a nongeneralizable sample of six states based on the extent to which they have participated in major NORTHCOM exercises since Hurricane Katrina and the varying scenarios of the exercises. The states we selected played a major role in NORTHCOM exercises by having a portion of the exercise conducted in their state and having various state agencies and officials participate. The states we selected were Arizona, California, Michigan, Oregon, Rhode Island, and Washington. We met with representatives from each state's emergency management organization and state National Guard. Because of the methodology selected, the data and information from these state visits cannot be generalized to the rest of the states or to what they may experience exercising with NORTHCOM. We also met with officials from three Federal Emergency Management Agency (FEMA) regional offices that had exercised with NORTHCOM in three of the last six large-scale exercises. We also interviewed officials from the Office of the Assistant Secretary of Defense for Homeland Defense and America's Security Affairs, the Joint Staff, and NORTHCOM with knowledge of and experience with NORTHCOM's training and exercise program. To determine the extent to which NORTHCOM is using lessons learned during exercises to improve mission preparedness, we reviewed DOD, NORTHCOM, and DHS National Exercise Program (NEP) guidance for recording, tracking, and managing lessons learned and assessed NORTHCOM's management of exercise observations and issues identified in several of NORTHCOM's large-scale exercises since Hurricane Katrina in 2005. We interviewed NORTHCOM, Joint Staff, and FEMA officials regarding the various lessons learned management systems and how interagency and state access to these systems can be accomplished. We also spoke with an official in the General Services Administration regarding the types of federal personal identification verification cards used by DOD and other federal departments and agencies to access government computer systems. In reviewing the management of NORTHCOM's lessons learned program, we identified and reviewed all unclassified exercise observations from its last six large-scale exercises that had been activated in NORTHCOM's area of the Joint Lessons Learned Information System (JLLIS).
Our review of the records in JLLIS entailed determining each record's status (open or closed), its type (issue or lesson learned), and its disposition after NORTHCOM staff acted on the record to respond to the issue or lesson learned documented. Based on our review, we generally placed these records into one of several categories: open; closed, nonconcur; issue closed with reobservation; issue closed with no reobservation; and lesson learned. In addition, we reviewed several records that had been merged with other original records because each related to the same issue; however, the original record for that issue was not part of our universe. Therefore, without reviewing the lead record, the merged records lacked sufficient information regarding their disposition, and that condition became another category. Finally, to determine the extent to which NORTHCOM is integrating its training and exercises with the NEP, we reviewed DOD, NORTHCOM, and Department of Homeland Security guidance to identify any differences in exercise planning and documentation between DOD's guidance and that for the NEP. We used that analysis to determine under what conditions NORTHCOM should apply standards related to the NEP and how DOD and its subordinate commands should participate in NEP tier I or II exercises. We reviewed NORTHCOM documentation from two major national exercises conducted during fiscal year 2008 to determine the extent to which NORTHCOM employed the guidance from the Homeland Security Exercise and Evaluation Program. We determined that operations-based national exercises—those involving the deployment of personnel—would be the best candidates for evaluating NORTHCOM's participation in such exercises. We also interviewed state emergency management and National Guard officials from six states that have exercised with NORTHCOM since 2005 to understand the extent to which NORTHCOM is integrating its exercise planning and conduct with its interagency partners as well as various state governments. In addressing our objectives, we reviewed plans and related documents, obtained information, and interviewed officials at the following locations:
NORTHCOM Headquarters, Peterson Air Force Base, Colorado Springs, Colorado
Joint Forces Command, Joint Warfighting Center, Suffolk, Virginia
The Office of the Secretary of Defense, Washington, D.C.
The Joint Staff, Washington, D.C.
U.S. Army North, Fort Sam Houston, San Antonio, Texas
National Guard Bureau, Arlington, Virginia
Department of Homeland Security, Washington, D.C.
U.S. Coast Guard, Atlantic Area, Portsmouth, Virginia
FEMA's National Preparedness Directorate, Washington, D.C.
FEMA Region 1, Boston, Massachusetts
FEMA Region 9, Oakland, California
FEMA Region 10, Bothell, Washington
General Services Administration, Washington, D.C.
Arizona Division of Emergency Management, Phoenix, Arizona
Arizona National Guard, Joint Force Headquarters, Phoenix, Arizona
California Emergency Management Agency, Sacramento, California
California National Guard, Joint Force Headquarters, Sacramento, California
Michigan State Police, Emergency Management and Homeland Security Division, Lansing, Michigan
Michigan National Guard, Lansing, Michigan
Nevada State Division of Emergency Management, Carson City, Nevada
Nevada National Guard, Joint Force Headquarters, Carson City, Nevada
Rhode Island Emergency Management Agency, Cranston, Rhode Island
Rhode Island National Guard, Joint Force Headquarters, Cranston, Rhode Island
Oregon Military Department, Office of Emergency Management, Salem, Oregon
Oregon Military Department, National Guard Joint Force Headquarters, Salem, Oregon
Washington Military Department, Emergency Management Division, Camp Murray, Washington
Washington National Guard, Joint Force Headquarters, Camp Murray, Washington
We conducted our review from June 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We reviewed the time lines and milestones for developing exercises found in the Department of Defense's Joint Training System and U.S. Northern Command's (NORTHCOM) implementing guidance and compared them with the Department of Homeland Security's Homeland Security Exercise and Evaluation Program guidance to determine the similarities and differences between them. We used the guidance associated with operations-based exercises rather than discussion-based exercises to present the full spectrum of Homeland Security Exercise and Evaluation Program processes and planning events. Davi M. D'Agostino, (202) 512-5431 or dagostinod@gao.gov. In addition to the contact named above, Joseph Kirschbaum, Assistant Director; Gilbert Kim; David Hubbell; Joanne Landesman; Christopher Mulkins; Erin Noel; Terry Richardson; and Richard Winsor made key contributions to this report.
National Preparedness: FEMA Has Made Progress, but Needs to Complete and Integrate Planning, Exercise, and Assessment Efforts. GAO-09-369. Washington, D.C.: April 30, 2009.
Emergency Management: Observations on DHS' Preparedness for Catastrophic Disasters. GAO-08-868T. Washington, D.C.: June 11, 2008.
National Response Framework: FEMA Needs Policies and Procedures to Better Integrate Non-Federal Stakeholders in the Revision Process. GAO-08-768. Washington, D.C.: June 11, 2008.
Homeland Defense: Steps Have Been Taken to Improve U.S. Northern Command's Coordination with States and the National Guard Bureau, but Gaps Remain. GAO-08-252. Washington, D.C.: April 16, 2008.
Homeland Defense: U.S. Northern Command Has Made Progress but Needs to Address Force Allocation, Readiness Tracking Gaps, and Other Issues. GAO-08-251. Washington, D.C.: April 16, 2008.
Continuity of Operations: Selected Agencies Tested Various Capabilities during 2006 Governmentwide Exercise. GAO-08-185. Washington, D.C.: November 19, 2007.
Homeland Security: Preliminary Information on Federal Action to Address Challenges Faced by State and Local Information Fusion Centers. GAO-07-1241T. Washington, D.C.: September 27, 2007.
Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007.
Influenza Pandemic: DOD Combatant Commands' Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007.
Homeland Security: Preparing for and Responding to Disasters. GAO-07-395T. Washington, D.C.: March 9, 2007.
Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation's Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006.
Homeland Defense: National Guard Bureau Needs to Clarify Civil Support Teams' Mission and Address Management Challenges. GAO-06-498. Washington, D.C.: May 31, 2006.
Hurricane Katrina: Better Plans and Exercises Needed to Guide the Military's Response to Catastrophic Natural Disasters. GAO-06-808T. Washington, D.C.: May 25, 2006.
Hurricane Katrina: Better Plans and Exercises Needed to Guide the Military's Response to Catastrophic Natural Disasters. GAO-06-643. Washington, D.C.: May 15, 2006.
Hurricane Katrina: GAO's Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006.
Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006.
Statement by Comptroller General David M. Walker on GAO's Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006.
Homeland Security: DHS' Efforts to Enhance First Responders' All-Hazards Capabilities Continue to Evolve. GAO-05-652. Washington, D.C.: July 11, 2005.
Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005.
Homeland Security: Federal Leadership and Intergovernmental Cooperation Required to Achieve First Responder Interoperable Communications. GAO-04-740. Washington, D.C.: July 20, 2004.
Homeland Defense: DOD Needs to Assess the Structure of U.S. Forces for Domestic Military Missions. GAO-03-670. Washington, D.C.: July 11, 2003.
Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.
Joint Training: Observations on the Chairman, Joint Chiefs of Staff, Exercise Program. NSIAD-98-189. Washington, D.C.: July 10, 1998.
U.S. Northern Command (NORTHCOM) exercises to test its preparedness to perform its homeland defense and civil support missions. GAO was asked to assess the extent to which NORTHCOM is (1) conducting exercises consistent with Department of Defense (DOD) training and exercise requirements, (2) involving interagency partners and states in its exercises, (3) using lessons learned and corrective actions to improve preparedness, and (4) integrating its exercises with the National Exercise Program (NEP). To do this, GAO reviewed NORTHCOM and NEP guidance and postexercise documentation, assessed NORTHCOM compliance, and compared DOD and NEP exercise requirements. NORTHCOM's exercise program is generally consistent with the requirements of DOD's Joint Training System, but its exercise reporting is inconsistent. Since the command was established in 2002, NORTHCOM has conducted 13 large-scale exercises and generally completed exercise summary reports within the required time frame. However, those reports did not consistently include certain information, such as areas needing improvement, because NORTHCOM lacks guidance that specifies exercise reports' content and format, potentially impacting its ability to meet internal standards for the planning and execution of joint exercises and to compare and share exercise results over time with interagency partners and states. Nineteen federal agencies and organizations and 17 states and the District of Columbia have participated in one or more of the seven large-scale exercises that NORTHCOM has conducted since September 2005. However, NORTHCOM faces challenges in involving states in the planning, conduct, and assessment of its exercises, such as adapting its exercise system and practices to involve other federal, state, local, and tribal agencies that do not have the same practices or level of planning resources. Inconsistencies in how NORTHCOM involves states in exercises are occurring in part because NORTHCOM officials lack experience dealing with states and do not have a consistent process for including states in exercises. Without such a process, NORTHCOM increases the risk that its exercises will not provide benefits for all participants, which could impair the seamless exercise of all levels of government and affect NORTHCOM's ability to provide civil support capabilities. NORTHCOM has a systematic lessons learned and corrective action program to improve preparedness, but gaps remain in collecting and sharing lessons with agency and state partners and managing corrective actions. Access to the system NORTHCOM uses for managing exercise observations is limited for non-DOD participants, and DOD believes that the Department of Homeland Security's system is not adequately protected from unauthorized users. NORTHCOM's mitigation steps have not resolved the issues. In addition, about 20 percent of the corrective actions tracked by NORTHCOM were being closed prematurely due to gaps in oversight. Closing issues prematurely increases the risk that issues will reoccur and limits the knowledge gained and the value of the exercise. NORTHCOM has taken steps to integrate its exercises with the NEP, but guidance is not consistently applied. NORTHCOM has participated in several NEP exercises and is leading its first major NEP exercise in the fall of 2009.
However, NORTHCOM has used DOD's Joint Training System planning and documentation requirements rather than DHS's requirements, because NEP guidance is not clear on what exercise planning standard should be used and DOD guidance does not address the issue. The states we visited use NEP guidance. Differences between NEP and DOD guidance could affect the ability of all participants to develop effective working relationships.
Outsourcing for commercial services is a growing practice within the government in an attempt to achieve cost savings, management efficiencies, and operating flexibility. Various studies in recent years have highlighted the potential for DOD to achieve significant savings from outsourcing competitions, especially those that involve commercial activities that are currently being performed by military personnel. Most of DOD’s outsourcing competitions, like those of other government agencies, are to be conducted in accordance with policy guidance and implementation procedures provided in the Office of Management and Budget’s (OMB) Circular A-76 and its supplemental handbook. In August 1995, the Deputy Secretary of Defense directed the services to make outsourcing of support activities a priority. The Navy’s initial outsourcing plans for fiscal years 1997 and 1998 indicated that it would conduct A-76 outsourcing competitions involving about 25,500 positions, including about 3,400 military billets. As of February 1998, however, the actual number of military billets announced for A-76 competitions in fiscal years 1997 and 1998 was changed to 2,100. Navy officials told us that when the Navy announces its intention to begin an A-76 study that includes military billets, the funding for those billets is eliminated from the military personnel budget beginning with the year the study is expected to be completed. The Navy’s rationale for eliminating these billets from the budget is that it expects the functions to be either outsourced to the private sector or retained in-house and performed by government civilians. Either way, the functions will be funded through the service’s operations and maintenance budget and not the military personnel budget. According to OMB’s Circular A-76, certain functions should not be outsourced to the private sector. These functions include activities that are closely related to the exercise of national defense and DOD’s war-fighting capability and must be performed by government personnel. DOD guidance designates that one such protected area is billets that are required to support rotational requirements for active duty enlisted military personnel returning from overseas assignments or sea duty. Rotational billets are generally defined as those positions that must remain available to military service members to (1) ensure that those returning from overseas assignments or sea duty have adequate rotation opportunities and (2) provide opportunities for the service members to continue to function within their areas of specialty for purposes of maintaining readiness, training, and required skills. The Navy has identified the minimum number of such rotational billets required for enlisted personnel for each specific skill specialty and grade. Its sea-shore rotation goal is that sufficient shore billets be available for each skill specialty and grade level to provide an equal mix of sea duty and shore duty, that is, 3 years at sea for every 3 years on shore, for its enlisted personnel in grades E-5 through E-9. Because sea billets exceed shore billets, the Vice Chief of Naval Operations established a sea-shore rotation policy in December 1997 directing that the aggregate sea-shore rotation for enlisted personnel in grades E-5 through E-9 be no more than 4 years at sea for every 3 years on shore. Actual sea-shore rotations, however, depending on the skill specialty and grade level, have ranged from 3 to 5 years at sea for every 3 years on shore. 
As of February 1998, the total number of sea billets exceeded shore billets by more than 40,000. Consequently, with fewer shore billets available for rotation purposes, less time is being spent ashore than at sea. For years, the Navy has been unable to attain its sea-shore rotation goal because of shortages of shore billets for some skills and the difficulty of duplicating some of the specific skill specialties on shore. Moreover, about 66 percent of the total enlisted billets for specific skill specialties (called ratings) for grades E-5 through E-9 required at sea aboard ships do not easily lend themselves to comparable shore duty, according to Navy officials. These types of billets, called sea ratings, include ratings such as electronic technicians, machinist mates, and various aviation-related specialties. To overcome the difficulty of providing comparable shore billets for all sea ratings, the Navy has had to use general duty shore billets for enlisted personnel who cannot be assigned to their specific rating on shore. General duty billets include such functions as security positions, recruiters, and other duties. Navy officials believe that using personnel in these billets is productive, but such positions should be limited because they can impact training, skill retention, and morale. According to these officials, personnel assigned to general duty billets are not receiving specific training and experience related to their sea-duty rating. Historically, the Navy has attempted to minimize the number of sailors in general duty billets. As of January 1998, the Navy had about 12,500 enlisted personnel working in general duty shore positions. As of August 1997, outsourcing studies announced by the Navy in fiscal years 1997 and 1998 included some military positions for which rotational shortages existed based on the sea-shore rotation policy effective at that time. Of the total 740 Navy-wide military billets announced for study in fiscal year 1998, 306 billets were for tug operations and maintenance functions that include ratings that have rotational shortages. These included 201 military billets in Norfolk, Virginia; 51 military billets in Pearl Harbor, Hawaii; and 54 military billets in Guam for tug operations and maintenance functions. (See table 1.) We also identified other shore functions that have been announced for potential outsourcing that some Navy officials expect will create or contribute to existing shortages of rotational billets. In fiscal year 1997, the Navy announced plans to study 216 military billets for base operations support functions in Guam for ratings that have rotational shortages. In January 1998, the Navy announced plans to perform A-76 studies of bachelor officer quarters (BOQ) and bachelor enlisted quarters (BEQ) functions involving military billets in ratings that have rotational shortages. (See table 2.) As of August 1997, data showed that these outsourcing initiatives would further reduce the rotation base for specific ratings and would add to existing rotational shortages. Navy officials told us that the decisions to study these functions for potential outsourcing were made before the Navy had developed servicewide and regional data needed to identify the impact on sea-shore rotations and, as a result, they were unaware of the potential impact.
In commenting on a draft of this report, DOD added that, even though the Navy’s decision to study these functions was made before today’s stringent procedures were in place, the Navy concluded after the decision was made that the impact on sea-shore rotation and career progression would be acceptable. Navy officials at the affected installations stated that the Navy’s decision to study these functions for potential outsourcing will seriously affect sea-shore rotation, resulting in the elimination of military billets and fewer opportunities available on shore for enlisted personnel in grades E-5 through E-9. Other Navy officials expressed similar concerns and the view that these outsourcing initiatives could result in less flexibility for the Navy and impair career progression and morale for its enlisted servicemembers. In fiscal year 1997, the Commander in Chief, Atlantic Fleet, canceled outsourcing study plans for about 240 military billets in the BOQ and BEQ functions because of Navy-wide rotational shortages for mess specialists and the related impact on sea-shore rotation. Similarly, in fiscal year 1998, the Commander in Chief, Pacific Fleet, canceled plans to begin A-76 studies involving about 63 military billets in these functions for the same reason. Although the funding for these billets had been deleted from the 1999 budget, both commands are planning to reinstate funding authorization by reprogramming existing resources. The Navy also canceled an A-76 study of the BOQ and BEQ functions at the Naval Security Station, Washington, because Navy officials had determined that outsourcing these functions would have further degraded sea-shore rotation. The Navy does not intend to cancel its plans to begin the A-76 studies announced for tug operations and maintenance involving 306 military billets or for the base operations support functions at Guam, even though the shore billets that will be eliminated will further impact the sea-shore rotation base. According to Navy officials, the decision to study the tug operations and maintenance function for outsourcing was initially based on the fact that the Navy’s tug boats were old and costly to maintain and would eventually have to be replaced if the function were not outsourced. Navy officials stated that several options will be considered to accommodate the impact on sea-shore rotation, such as reclassifying the shore billets as related billets or general duty billets, or increasing the number of shore billets for those ratings at other locations. Until May 1997, the Navy did not have procedures in place to ensure that rotational requirements were adequately considered when it determined potential functions for outsourcing study. At that time, the Navy adopted policies and procedures to examine Navy-wide and regional effects of its outsourcing plans on the sea-shore rotation base. Specifically, a memorandum of agreement was established specifying the coordination process between the Navy’s headquarters infrastructure officials and the military personnel officials regarding the procedures for studying military functions for potential outsourcing. This memorandum of agreement was further strengthened in September 1997 by a more detailed Navy-wide memorandum of agreement that applied to all major commands for all infrastructure reductions, including outsourcing. 
Also, in August 1997, the Navy’s Bureau of Personnel provided major commands and other officials with Navy-wide and regional manpower data tools specifying the rotational requirements for each specific rating. Outsourcing officials are expected to use this information to assess rotational requirements of specific ratings for grades E-5 through E-9 when identifying potential candidates for outsourcing. If a rotational shortage is identified, the specific rating is not recommended for outsourcing to avoid further degradation of the sea-shore rotation base. In December 1997, the Vice Chief of Naval Operations approved a set of business rules to further strengthen the policies and procedures for protecting military billets with rotational shortages from potential outsourcing. These business rules require that the overall sea-shore rotation for sea ratings will not exceed 4 years at sea for every 3 years on shore and that the sea-shore rotation for individual ratings will not exceed 5 years at sea for every 3 years on shore. The Vice Chief of Naval Operations directed that these business rules be followed for all infrastructure reductions, including outsourcing. Moreover, Navy infrastructure officials and military manpower officials told us that they are continuing to work closely regarding outsourcing goals and sea-shore rotation requirements as the Navy moves to identify potential outsourcing candidates and meet its outsourcing study and savings goals. Between fiscal years 1997 and 2002, the Navy plans to study 80,500 civilian and military positions for potential outsourcing at an estimated savings of $2.5 billion. The Navy estimates that about 10,000 of these positions will be military billets and the remaining 70,500 will be positions currently occupied by civilians. (See table 3.) Because the funding for the Navy’s military billets is eliminated from the personnel budget when the billets are announced for study, the funding for all military billets approved for competition will be deleted from the Navy’s personnel budget by the year 2003. To eliminate the 10,000 military billets from the military personnel budget by the year 2003, the Navy’s objective has been to announce about 2,000 military billets for study each year for 5 years beginning in fiscal year 1997. In fiscal year 1997, the Navy announced plans to study about 1,400 military billets for potential outsourcing. It appears likely, however, that the Navy will fall short of its goal for fiscal year 1998. As of January 1998, the Navy had announced plans to study 740 military billets and 6,678 civilian positions for fiscal year 1998. Navy officials told us in February 1998 they will announce additional A-76 studies in fiscal year 1998, but did not know the specific activities that would be studied or the number of billets that would be affected. As of February 1998, the Navy was attempting to identify potential functions and billets for outsourcing in subsequent fiscal years, but it had not determined the specific number of military billets or civilian positions that will be announced for study in those years. Some Navy officials have expressed concern over whether they will be able to attain the overall goal of studying 10,000 military billets by the year 2003. In addition, some Navy base commanders are concerned that outsourcing decisions affecting their installations may be made without their input. 
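The study goals described above are internally consistent; a quick arithmetic check using only the figures reported in this section:

\[ 10{,}000\ \text{military billets} + 70{,}500\ \text{civilian positions} = 80{,}500\ \text{total positions} \]
\[ 5\ \text{years} \times 2{,}000\ \text{military billets per year} = 10{,}000\ \text{military billets announced for study} \]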
Despite these concerns, the Navy has programmed estimated savings of $2.5 billion from outsourcing into its future years defense plan, increasing the pressure to identify candidates for outsourcing studies. The Navy has established an ambitious goal for itself in terms of the number of positions it plans to study for potential outsourcing under A-76. At the same time, the Navy is relying on its major commands to identify the functions to study to meet these programmed budget savings. Navy officials stated that they began a series of planning conferences in September 1997 involving appropriate officials from Navy headquarters and major commands. According to these officials, one of the primary objectives of the planning conferences is to begin discussing a strategic plan for accomplishing the outsourcing goals for fiscal years 1999 through 2001. While we believe that a strategic plan is necessary to achieve the Navy’s outsourcing goals, ongoing coordination and improved planning between headquarters and the major commands will be required to reach agreement on realistic goals and time frames and to identify areas most conducive to outsourcing and likely to yield the greatest savings. In addition, improved planning and coordination could minimize the elimination of required military shore billets, as well as avoid prematurely programming savings into future years’ budgets. Navy officials stated that, in addition to the recent planning conferences, the Navy plans to address the larger issue of how it conducts its business and possible alternatives for meeting Navy-wide personnel levels and requirements. The Navy established ambitious goals for studying military and civilian personnel positions for potential outsourcing under A-76 competitions. Only as it began initiating the plans for some of these studies involving military personnel positions did it find that outsourcing some of these positions could affect positions reserved for sea-shore rotational requirements—a situation that caused the Navy to withdraw some of its planned outsourcing initiatives. The Navy has recently established policies and procedures to ensure that sea-shore rotation requirements are reviewed and considered when identifying potential functions for outsourcing. While the Navy has recently begun to focus on strategies for attaining its outsourcing goals for future years, improved planning and coordination between headquarters and major commands are needed to reach agreement on realistic goals and time frames. Improved planning and coordination could also identify areas most conducive to outsourcing, least likely to eliminate needed shore billets, and likely to yield the greatest savings. In addition, improved planning and coordination could minimize the elimination of required military shore billets, as well as avoid prematurely programming savings into future years’ budgets. To enhance the likelihood that plans for outsourcing are reasonable and achievable, we recommend that the Secretary of Defense take steps to ensure that the Secretary of the Navy, as the Navy develops its strategic plan, involves the major commands to reach agreement on realistic goals and time frames and to identify areas most conducive to outsourcing. Likewise, we recommend that the Secretary of Defense periodically reassess whether outsourcing savings targets that are used in planning for future years’ budgets are achievable in the time frames planned. In commenting on a draft of this report, DOD concurred with our conclusions and recommendation (see app. 
II). DOD provided a number of comments addressing how the Navy has taken significant steps to implement policies and coordination procedures to protect rotational billets from outsourcing considerations and to involve the major commands in the strategic planning process for attaining its future years’ outsourcing goals. DOD noted, and we concur, that a number of studies included in the Navy’s initial outsourcing announcement were canceled because a subsequent review revealed that they were not appropriate competition candidates. DOD notes that, since then, the Navy has made progress in widening the scope of its outsourcing program and involving all claimants (major commands) in the planning process. DOD indicated that the canceled outsourcing studies cited in our report were not representative of the Navy’s competitive outsourcing program as it exists today. The Navy is currently implementing a number of initiatives to improve strategic planning that should enable it to identify areas most conducive to outsourcing without exacerbating shortages of rotational billets. DOD also stated that the Navy’s current outsourcing policies and procedures require that no function employing military personnel be announced for potential outsourcing until the Navy’s Manpower Office determines that outsourcing the function will not have an adverse effect. DOD stated that these activities demonstrate the Navy’s commitment to work with its major commands and that, therefore, additional direction from the Secretary of Defense is unnecessary. We agree that the Navy has begun some important actions toward developing a strategic plan and including its major commands in that process. However, the Navy had not completed its plan as of April 1998. At the same time, our report points out that it appears likely the Navy will fall short of its goal for new outsourcing studies in fiscal year 1998, and some Navy officials expressed concern to us over whether they will be able to attain the optimistic goal of studying 10,000 military billets by the year 2003 and achieving the $2.5 billion in outsourcing savings programmed into the Navy’s future years defense plan. This goal adds pressure on the claimants to emphasize outsourcing, and accordingly, we believe it will remain critical for the Navy to continue to work with its major commands to complete the development of its plans for accomplishing these objectives. Likewise, we believe it is important to periodically reassess the extent to which savings goals and objectives are achievable and whether savings targets established for out-year budget purposes might need to be revised. In view of this, we have revised our recommendation to recommend that the Secretary of Defense ensure that the Secretary of the Navy, as the Navy develops its strategic plan, involves the major commands to reach agreement on realistic goals and time frames and to identify areas most conducive to outsourcing. We have also added a recommendation that the Secretary of Defense periodically reassess whether outsourcing savings targets that are used in planning for future years’ budgets are achievable in the time frames planned. Our scope and methodology are discussed in appendix I. DOD’s comments are reprinted in appendix II. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committees on Armed Services and on Appropriations and the House Committees on National Security and on Appropriations; the Director, Office of Management and Budget; and the Secretaries of the Army, the Navy, and the Air Force. 
Copies will also be made available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. Neither the Army nor the Air Force has experienced problems similar to the Navy’s in making outsourcing decisions, primarily because of mission differences. The Army’s and Air Force’s policies for protecting rotational billets are designed to ensure a proper balance between the numbers and types of billets located overseas and in the continental United States. The types of rotational billets that the Army and Air Force need to protect from outsourcing are generally in highly technical areas that would not normally be appropriate for outsourcing. Moreover, both services rely, to varying degrees, on contractor personnel to perform base support-type functions. The Navy, on the other hand, operates forward-deployed forces from its ships and requires military personnel to perform virtually all of its support services that might be done by civilians were the Navy operating from land bases. Therefore, the focus of the review was on the Navy. We met with officials from the Office of the Secretary of Defense, the Army, the Air Force, and the Navy regarding their policies for considering rotational and career development requirements in outsourcing decisions. We obtained policies related to outsourcing and rotational billets, memorandums of agreement, and procedures for identifying A-76 study candidates. We also met with officials from the Army Training and Doctrine Command at Fort Monroe, Virginia; the Air Force Air Combat Command at Langley Air Force Base, Virginia; and the Navy Commander in Chief, Atlantic Fleet, in Norfolk, Virginia, to discuss their policies and procedures for identifying and protecting rotational billets from outsourcing considerations. We obtained documentation regarding current and planned A-76 studies and A-76 study plans that were eliminated because of the impact on rotational requirements. We obtained information pertaining to outsourcing and rotational billets from the Navy Commander in Chief, Pacific Fleet, at Pearl Harbor, Hawaii, and met with various Navy base commanders in the Norfolk, Virginia, area to obtain their perspectives on contracting out functions historically performed by enlisted personnel. We reviewed the outsourcing initiatives and the impact of these initiatives on rotational billets in the Army, Air Force, and Navy. However, we focused the majority of our work on the Navy’s outsourcing initiatives and the potential impact of those initiatives on sea-shore rotation. We compared the Navy-wide and regional data on the rotational requirements for each specific rating for grades E-5 through E-9 to the Navy’s outsourcing initiatives. We did not independently validate the mathematical models the services used to identify rotational requirements or the criteria they used in building these models. We conducted our review from September 1997 to April 1998 in accordance with generally accepted government auditing standards. David A. Schmitt, Evaluator-in-Charge; Sandra D. Epps, Site Senior; Tracy Whitaker Banks, Evaluator. 
Pursuant to a congressional request, GAO reviewed whether the Department of Defense's (DOD) outsourcing of commercial activities reduces the availability of rotational billets for active duty military personnel, focusing on: (1) how the Navy's current outsourcing efforts have affected rotational billets; and (2) whether the Navy has policies and procedures in place to minimize the impact of outsourcing on rotational billets in the future. GAO noted that: (1) several Navy Office of Management and Budget Circular A-76 competitions announced for study in fiscal years (FY) 1997 and 1998 have the potential to eliminate military billets in areas where rotational shortages exist for personnel returning from sea duty; (2) as a result, the Navy has decided not to begin some of these A-76 studies and plans to reinstate funding authorization for the military positions eliminated when the studies were announced; (3) until recently, the Navy had not developed specific policies and coordination procedures to protect rotational billets from outsourcing considerations; (4) according to Navy officials such policies and procedures were not needed prior to FY 1997 because the Navy's outsourcing initiatives were limited and not centrally managed; (5) in May 1997, Navy officials signed a memorandum of agreement specifying a coordination process between the Navy's headquarters infrastructure officials and military personnel representatives to ensure that consideration is given to rotation requirements when determining potential functions for outsourcing; (6) this memorandum of agreement was further strengthened in September 1997 by a more detailed Navy-wide memorandum of agreement that applied to all major commands, which the Navy refers to as major claimants, for all infrastructure reductions, including outsourcing; (7) this coordination policy should prove important since the Navy's goal is to have completed A-76 competitions for 80,500 positions by the year 2002, including about 10,000 military billets; (8) the Navy expects that its outsourcing efforts will produce savings and accordingly has programmed expected savings of $2.5 billion into its future years defense plan for FY 2000 through 2003; (9) the Navy has not identified the specific activities and locations that will be studied to achieve projected savings, but has tasked its major commands with recommending specific activities and locations for A-76 competitions to meet this savings goal; (10) the Navy recently began a series of planning conferences involving appropriate officials from headquarters and major commands focusing on strategies for attaining its future years' outsourcing goals; (11) however, given the Navy's plans for outsourcing competitions, ongoing coordination and improved planning between headquarters and major commands is needed to reach agreement on realistic goals and timeframes; and (12) in addition, improved planning and coordination could minimize the elimination of required military shore billets, as well as avoid prematurely programming savings into future years' budgets.
Administered by SBA’s Office of Disaster Assistance (ODA), the Disaster Loan Program is the primary federal program for funding long-range recovery for nonfarm businesses that are victims of disasters. It is also the only form of SBA assistance not limited to small businesses. Small Business Development Centers (SBDC) are SBA’s resource partners that provide disaster assistance to businesses. SBA officials said that SBDCs help SBA by doing the following: conducting local outreach to disaster victims; assisting declined business loan applicants, or applicants who have withdrawn their loan applications, with applications for reconsideration or re-acceptance; assisting declined applicants in remedying issues that initially precluded loan approvals; and providing business loan applicants with technical assistance, including helping businesses reconstruct business records, helping applicants better understand what is required to complete a loan application, compiling financial statements, and collecting required documents. SBA can make available several types of disaster loans, including two types of direct loans: physical disaster loans and economic injury disaster loans. Physical disaster loans are for permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property. They are available to homeowners, renters, businesses of all sizes, and nonprofit organizations. Economic injury disaster loans provide small businesses that are not able to obtain credit elsewhere with necessary working capital until normal operations resume after a disaster declaration. Businesses of all sizes may apply for physical disaster loans, but only small businesses are eligible for economic injury loans. SBA has divided the disaster loan process into three steps: application, verification and loan processing, and closing. Applicants for physical disaster loans have 60 days from the date of the disaster declaration to apply, and applicants for economic injury disaster loans have 9 months. Disaster victims may apply for a disaster business loan through the disaster loan assistance web portal or by paper submission. The information from online and paper applications is fed into SBA’s Disaster Credit Management System, which SBA uses to process loan applications and make determinations for its disaster loan program. SBA has implemented most of the requirements of the 2008 Act, which comprises 26 provisions with substantive requirements for SBA, including requirements for disaster planning and simulations, reporting, and plan updates (see app. I for a summary of the provisions). For example, SBA made several changes to programs, policies, and procedures to enhance its capabilities to prepare for major disasters. Section 12063 states that SBA should improve public awareness of disaster declarations and application periods, and create a marketing and outreach plan. In 2012, SBA completed a marketing and outreach plan that included strategies for identifying regional stakeholders (including SBDCs, local emergency management agencies, and other local groups such as business and civic organizations) and identifying regional disaster risks. SBA’s plan stated that it would (1) develop webinars for specific regional risks and promote these before the traditional start of the season for certain types of disasters such as hurricanes; and (2) establish a recurring schedule for outreach with stakeholders when no disaster is occurring. 
Furthermore, the most recent Disaster Preparedness and Recovery Plan from 2016 outlines specific responsibilities for conducting region-specific marketing and outreach through SBA resource partners and others before a disaster, as well as plans for scaling communications based on the severity of the disaster. (See below for more information about SBA’s Disaster Preparedness and Recovery Plan.) Section 12073 states that SBA must assign an individual with significant knowledge of, and substantial experience in, disaster readiness and planning, emergency response, maintaining a disaster response plan, and coordinating training exercises. In June 2008, SBA appointed an official to head the agency’s newly created Executive Office of Disaster Strategic Planning and Operations. SBA officials recently told us that the planning office, now named the Office of Disaster Planning and Risk Management, is under the office of the Chief Operating Officer. Although the organizational structure changed, the role of the director remains the same: to coordinate the efforts of other offices within SBA to execute disaster recovery as directed by the Administrator. Among the director’s responsibilities are to create, maintain, and implement the comprehensive disaster preparedness and recovery plan, and to coordinate and direct SBA training exercises relating to disasters, including simulations and exercises coordinated with other government departments and agencies. Section 12075 states that SBA must develop, implement, or maintain a comprehensive written disaster response plan and update the plan annually. SBA issued a disaster response plan in November 2009, and the agency has continued to develop, implement, and revise the written disaster plan every year since then. The plan, now titled the Disaster Preparedness and Recovery Plan, outlines issues such as disaster responsibilities of SBA offices, SBA’s disaster staffing strategy, and plans to scale disaster loan-making operations. The plan is made available to all SBA staff as well as to the public through SBA’s website. SBA has taken actions to fully address other provisions, such as those relating to augmenting infrastructure, information technology, and staff, as well as improving disaster lending. For example, to improve its infrastructure, information technology, and staff, SBA put in place a secondary facility in Sacramento, California, to process loans during times when the main facility in Fort Worth, Texas, is unavailable. SBA also improved its Disaster Credit Management System, which the agency uses to process loan applications and make determinations for its disaster loan program, by increasing the number of concurrent users that can access it. Furthermore, SBA increased access to funds by making nonprofits eligible for economic injury disaster loans. SBA has not piloted or implemented three guaranteed disaster loan programs. The 2008 Act included three provisions requiring SBA to issue regulations to establish new guaranteed disaster programs using private-sector lenders: Expedited Disaster Assistance Loan Program (EDALP) would provide small businesses with expedited access to short-term guaranteed loans of up to $150,000. Immediate Disaster Assistance Program (IDAP) would provide small businesses with guaranteed bridge loans of up to $25,000 from private-sector lenders, with an SBA decision within 36 hours of a lender’s application on behalf of a borrower. 
Private Disaster Assistance Program (PDAP) would make guaranteed loans available to homeowners and small businesses in an amount up to $2 million. In 2009, we reported that SBA was planning to implement requirements of the 2008 Act, including pilot programs for IDAP and EDALP. SBA requested funding for the two programs in the President’s budget for fiscal year 2010 and received subsidy and administrative cost funding of $3 million in the 2010 appropriation, which would have allowed the agency to pilot about 600 loans under IDAP. SBA officials also told us that they performed initial outreach to lenders to obtain reactions to and interest in the programs. They believed such outreach would help SBA identify and address issues and determine the viability of the programs. In May 2010, SBA told us its goal was to have the pilot for IDAP in place by September 2010. Furthermore, the agency issued regulations for IDAP in October 2010. In 2014, we reported on the Disaster Loan Program (following Hurricane Sandy) and found that SBA had yet to pilot or implement the three programs for guaranteed disaster loans. In July 2014, SBA officials told us that the agency still was planning to conduct the IDAP pilot. However, based on lender feedback, SBA officials said that the statutory requirements, such as the 10-year loan, made a product like IDAP undesirable and lenders were unwilling to participate unless the loan term was decreased to 5 or 7 years. Congressional action would be required to revise statutory requirements, but SBA officials said they had not discussed the lender feedback with Congress. SBA officials also told us the agency planned to use IDAP as a guide to develop EDALP and PDAP, and until challenges with IDAP were resolved, it did not plan to implement these two programs. As a result of not documenting, analyzing, or communicating lender feedback, SBA risked not having reliable information—both to guide its own actions and to share with Congress—on what requirements should be revised to encourage lender participation. Federal internal control standards state that significant events should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. We concluded that not sharing information with Congress on challenges to implementing IDAP might perpetuate the difficulties SBA faced in implementing these programs, which were intended to provide assistance to disaster victims. Therefore, we recommended that SBA conduct a formal documented evaluation of lenders’ feedback on implementation challenges and statutory changes that might be necessary to encourage lenders’ participation in IDAP, and then report to Congress on these topics. In response to our recommendations, SBA issued an Advance Notice of Proposed Rulemaking in October 2015 to seek comments on the three guaranteed loan programs. In July 2016, SBA sent a letter to the Ranking Member of the House Committee on Small Business that discussed how the agency evaluated feedback on the three programs and explained the remaining challenges to address the statutory provisions for the three programs. Based on this action, we closed the recommendations for SBA to develop an implementation plan, formally evaluate lender feedback, and report to Congress on implementation challenges. SBA has yet to announce how it will proceed with the statutory requirements to establish these loan programs. 
SBA made several changes to its planning documents in response to recommendations in our 2014 report about the agency’s response to Hurricane Sandy. In 2014, we found that after Hurricane Sandy, SBA did not meet its goal of processing business loan applications within 21 days from receipt to loan decision. SBA took an average of 45 days for physical disaster loan applications and 38 days for economic injury applications. According to SBA, the agency received a large volume of electronic applications within a few days of the disaster. While SBA created web-based loan applications to expedite the process and encouraged their use, the agency noted that it did not expect to receive such a high volume of loan applications so early in its response and therefore delayed increasing staffing. At the time of our 2014 report, SBA also had not updated its key disaster planning documents—the Disaster Preparedness and Recovery Plan and the Disaster Playbook—to adjust for the effects that a large, early surge in applications could have on staffing, resources, and forecasting models for future disasters. According to SBA’s Disaster Preparedness and Recovery Plan, the primary goals of forecasting and modeling are to predict application volume and application receipt as accurately as possible. Federal internal control standards state that management should identify risk (with methods that can include forecasting and strategic planning) and then analyze the risks for their possible effect. Without taking its experience with early application submissions after Hurricane Sandy into account, SBA risked being unprepared for such a situation in future disaster responses, potentially resulting in delays in disbursing loan funds to disaster victims. We therefore recommended that SBA revise its disaster planning documents to anticipate the potential impact of early application submissions on staffing, resources, and timely disaster response. In response to our recommendation, SBA updated its key disaster planning documents, including the Disaster Preparedness and Recovery Plan and Disaster Playbook, to reflect the impact of early application submissions on staffing for future disasters. For example, the documents note that the introduction of the electronic loan application increased the intake of applications soon after disasters. SBA received 83 percent of applications electronically in fiscal year 2015 and 90 percent in 2016. The documents also note that the electronic loan application has reduced the time available to achieve maximum required staffing and that SBA has revised its internal resource requirements model for future disasters to activate staff earlier based on the receipt of applications earlier in the process. Furthermore, our review of the most recent Disaster Preparedness and Recovery Plan from 2016 shows that SBA continues to factor in the effect of electronic loan application submissions on staffing requirements. In our November 2016 report, we reviewed the actions SBA took or planned to take to improve the disaster loan program, as discussed in its Fiscal Year 2015 Annual Performance Report. SBA focused on promoting disaster preparedness, streamlining the loan process, and enhancing online application capabilities (see table 1). 
We also reported in November 2016 that, according to SBA officials, the agency made recent enhancements to the disaster loan assistance web portal, such as a feature that allows a loan applicant to check the status of an application and the application’s relative place in the queue for loan processing. The web portal also includes a frequently asked questions page, telephone, and e-mail contacts to SBA customer service, and links to other SBA information resources. These enhancements may have had a positive impact on the agency’s loan processing. For example, we reported that an SBA official explained that information from online applications is imported directly into the Disaster Credit Management System, reducing the likelihood of errors in loan applications, reducing follow-up contacts with loan applicants, and expediting loan processing. As we found in our November 2016 report, SBA published information (print and electronic) about the disaster loan process, but much of this information is not easily accessible from the disaster loan assistance web portal. SBA’s available information resources include the following: Disaster business loan application form (Form 5) lists required documents and additional information that may be necessary for a decision on the application. Fact Sheet for Businesses of All Sizes provides information about disaster business loans, including estimated time frames, in a question-and-answer format. 2015 Reference Guide to the SBA Disaster Loan Program and Three-Step Process Flier describe the three steps of the loan process, required documents, and estimated time frames. Partner Training Portal provides disaster-loan-related information and resources for SBDCs (at https://www.sba.gov/ptp/disaster). However, we found SBA had not effectively integrated these information resources into its online portals; much of the information was not easily accessible from the loan portal’s launch page or available on the training portal. For example, when a user clicks on the “General Loan Information” link in the loan portal, the site routes the user to SBA’s main website, where the user would encounter another menu of links. To access the fact sheet, the reference guide, and the three-step process flier, a site user would click on three successive links and then select from a menu of 15 additional links. Among the group of 15 links, the link for Disaster Loan Fact Sheets contains additional links to five separate fact sheets for various types of loans. According to SBA officials, SBA plans to incorporate information from the three-step loan process flier in the online application, but does not have a time frame for specific improvements. SBA officials also said that disaster-loan information is not prominently located on SBA’s website because of layout and space constraints arising from the agency’s other programs and priorities. We concluded that absent better integration of, and streamlined access to, disaster loan-related information on SBA’s web portals, loan applicants—and SBDCs assisting disaster victims— may not be aware of key information for completing applications. Thus, we recommended that SBA better integrate information (such as its reference guide and three-step process flier) into its portals. 
In response to our report, SBA stated in a January 2017 letter that the disaster loan assistance portal includes links to various loan-related resources and a link to SBA.gov, where users can access the SBA Disaster Loan Program Reference Guide and online learning center. However, SBA did not indicate what actions it would take in response to our recommendation. We plan to follow up with SBA on whether the agency plans to centrally integrate links to loan-related resources into its disaster loan assistance web portal and Partner Training Portal. We also found in our November 2016 report that SBA has not consistently described key features of the loan process in its information resources, such as the application form, fact sheet, and reference guide, and none of these resources include explanations for required documents (see table 2). The Paperwork Reduction Act broadly requires that an agency explain the reasons for collecting information and how the collected information will be used. According to SBDCs we interviewed and responses from SBA and American Customer Satisfaction Index surveys, some business loan applicants found the process confusing due to inconsistent information about the application process, unexpected requests for additional documentation, and lack of information about the reasons for required documents. We concluded that absent more consistent information in print and online resources, loan applicants and SBDCs might not understand the disaster loan process. As a result, we recommended SBA ensure consistency of content about its disaster loan process by including information, as appropriate, on the (1) three-step process; (2) types of documentation SBA may request and reasons for the requests; and (3) estimates of loan processing time frames and information on factors that may affect processing time. In response to our report, SBA stated in a January 2017 letter that the agency provides consistent messaging about the time frame for making approval decisions on disaster loan applications: SBA’s goal is to make a decision on all home and business disaster loan applications within 2–3 weeks. However, SBA did not indicate what actions it would take in response to our recommendation. We plan to follow up with SBA on whether the agency will take any action to ensure content is consistent across print and online resources, among other things. In our November 2016 report, we further found that some business loan applicants were confused about the financial terminology and financial forms required in the application. Three SBDCs we interviewed mentioned instances in which applicants had difficulty understanding the parts of the loan application dealing with financial statements and financial terminology. For example, applicants were not familiar with financial statements, did not know how to access information in a financial statement, and did not know how to create a financial statement. Although the loan forms include instructions, the instructions do not define the financial terminology. According to SBA officials, the agency’s customer service representatives can direct applicants to SBDCs for help. Two of the three SBDCs said these difficulties arose among business owners who did not have formal education or training in finance or related disciplines—and were attempting applications during high-stress periods following disasters. The Plain Writing Act of 2010 requires that federal agencies use plain writing in every document they issue. 
According to SBA officials, although the agency does not provide a glossary for finance terminology in loan application forms, the disaster loan assistance web portal has a “contextual help” feature that incorporates information from form instructions. SBA customer service representatives and local SBDCs also can help explain forms and key terms. SBA has taken other actions to inform potential applicants about its loan program, including holding webinars and conducting outreach. However, these efforts may not offer sufficient assistance or reach all applicants. We concluded that without explanations of financial terminology, loan applicants may not fully understand application requirements, which may contribute to confusion in completing the financial forms. Therefore, we recommended that SBA define financial terminology on loan application forms (for example, by adding a glossary to the “help” feature on the web portal). In response to our report, SBA stated in a January 2017 letter that the agency has been developing a glossary of financial terms used in SBA home and business disaster loan applications and in required supporting financial documents. Once completed, SBA stated that it will make the glossary available through the agency’s disaster loan assistance portal and the SBA.gov website. We plan to follow up with SBA once the agency completes the glossary. Chairman Chabot, Ranking Member Velázquez, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Marshall Hamlett (Assistant Director), Christine Ramos (Analyst in Charge), John McGrail, and Barbara Roesmann. Appendix I: Summary of Provisions in the Small Business Disaster Response and Loan Improvements Act of 2008 This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
While SBA is known primarily for its financial support of small businesses, the agency also assists businesses of all sizes and homeowners affected by natural and other declared disasters through its Disaster Loan Program. Disaster loans can be used to help rebuild or replace damaged property or continue business operations. After SBA was criticized for its performance following the 2005 Gulf Coast hurricanes, the agency took steps to reform the program and Congress also passed the 2008 Act. After Hurricane Sandy (2012), questions arose on the extent to which the program had improved since the 2005 Gulf Coast Hurricanes and whether previously identified deficiencies had been addressed. This statement discusses (1) SBA implementation of provisions from the 2008 Act; (2) additional improvements to agency planning following Hurricane Sandy; and (3) SBA's recent and planned actions to improve information resources for business loan applicants. This statement is based on GAO products issued between July 2009 and November 2016. GAO also met with SBA officials in April 2017 to discuss the status of open recommendations and other aspects of the program. The Small Business Administration (SBA) implemented most requirements of the Small Business Disaster Response and Loan Improvements Act of 2008 (2008 Act). For example, in response to the 2008 Act, SBA appointed an official to head the disaster planning office and annually updates its disaster response plan. SBA also implemented provisions relating to marketing and outreach; augmenting infrastructure, information technology, and staff; and increasing access to funds for nonprofits, among other areas. However, SBA has not yet implemented provisions to establish three guaranteed loan programs. In 2010, SBA received an appropriation to pilot one program and performed initial outreach to lenders. However, in 2014, GAO found that SBA had not implemented the programs or conducted a pilot because of concerns from lenders about loan features. GAO recommended that SBA evaluate lender feedback and report to Congress about implementation challenges. In response, SBA sought comments from lenders and sent a letter to Congress that explained remaining implementation challenges. After Hurricane Sandy, SBA further enhanced its planning for disaster response, including processing of loan applications. In a 2014 report on the Disaster Loan Program, GAO found that while SBA encouraged electronic submissions of loan applications, SBA did not expect early receipt of a high volume of applications after Sandy and delayed increasing staffing. SBA also did not update key disaster planning documents to adjust for the effects of such a surge in future disasters. GAO recommended SBA revise its disaster planning documents to anticipate the potential impact of early application submissions on staffing and resources. In response, SBA updated its planning documents to account for such impacts. SBA has taken some actions to enhance information resources for business loan applicants but could do more to improve its presentation of online disaster loan-related information. In 2016, GAO found that SBA took or planned to take various actions to improve the disaster loan program and focused on promoting disaster preparedness, streamlining the loan process, and enhancing online application capabilities. 
However, GAO found that SBA had not effectively presented information on disaster loans (in a way that would help users efficiently find it), had not consistently described key features and requirements of the loan process in print and online resources, or clearly defined financial terminology used in loan applications. Absent better integration of, and streamlined access to, disaster loan-related information, loan applicants may not be aware of key information and requirements for completing the applications. Therefore, GAO recommended that SBA (1) integrate disaster loan-related information into its web portals to be more accessible to users, (2) ensure consistency of content about the disaster loan process across information resources, and (3) better define financial terminology used in the loan application forms. In January 2017, SBA indicated it was working on a glossary for the application. GAO plans to follow up with SBA about the other two open recommendations.
NTSB was initially established within the newly formed Department of Transportation (DOT) in 1966, but was made independent from DOT in 1974. NTSB is charged by Congress with investigating every civil aviation accident in the United States and significant accidents in other modes of transportation—railroad, highway, marine and pipeline. NTSB determines the probable cause of the accidents and issues safety recommendations aimed at preventing future accidents. In addition, NTSB carries out special studies concerning transportation safety and coordinates the resources of the federal government and other organizations to provide assistance to victims and their family members impacted by major transportation disasters. Unlike regulatory transportation agencies, such as the Federal Aviation Administration, NTSB does not have the authority to promulgate regulations to promote safety, but instead makes recommendations in its accident reports and safety studies to agencies that have such regulatory authority. NTSB is comprised of a five-person board—a chairman, vice chairman, and three members—appointed by the President with the advice and consent of the Senate. The chairman is the NTSB’s chief executive and administrative officer. The agency is headquartered in Washington, D.C., and maintains 7 regional offices and a training center located in Ashburn, Virginia. In fiscal year 2013, the board was supported by a staff of about 400, which includes nearly 140 investigators assigned to its modal offices—aviation; highway; marine; and rail, pipeline, and hazardous materials—as well as 73 investigation-related employees, such as engineers and meteorologists. NTSB’s modal offices vary in size in relation to the number of investigators, with the Aviation Safety office being the largest. In addition, the Office of Research and Engineering provides technical, laboratory, analytical, and engineering support for the modal offices. Staff from this office interpret information from flight data recorders, create accident computer simulations, and publish general safety studies. This review focuses on the extent to which NTSB has achieved measurable improvements from actions the agency has taken in five management and operational areas based on prior GAO recommendations. Training Center utilization. Making efficient and effective use of resources provided by Congress is a key responsibility of federal agencies. NTSB’s Training Center, which opened in August 2003 in Ashburn, Virginia, consists of classrooms, offices, and laboratory facilities used for instructional purposes and active investigations. NTSB uses this center to train its own staff and others from the transportation community to improve accident investigation techniques. NTSB charges tuition for those outside NTSB to take its courses, and generates additional revenue from space rentals to other organizations for events such as conferences on a cost reimbursable basis. Although there is no statutory requirement that NTSB cover the cost of its Training Center through the revenues generated from the facility, a 2007 review we conducted found that NTSB was not capitalizing on its lease flexibility to generate additional revenues and classrooms were significantly underutilized. For example, we found that less than 10 percent of the available classroom capacity was used in fiscal years 2005 and 2006. 
Furthermore, NTSB was encouraged in a Senate report accompanying the 2006 appropriations bill for DOT to be more aggressive in imposing and collecting fees to cover the costs of the Training Center. Since then, NTSB leased a large portion of the Training Center’s non-classroom space to the Federal Air Marshal Service and provided short-term leases of classroom space to other organizations. In addition, NTSB increased the amount of training it delivered at the Training Center. Recommendation close-out process. Efficiently managing the recommendation tracking process is a key function, according to NTSB officials. The recommendation close-out process is managed by the Safety Recommendations and Quality Assurance Division, which has responsibility for tracking the status of its recommendations. When NTSB receives correspondence from an agency about an NTSB recommendation, this division ensures it is properly routed and reviewed and contacts the agency about whether the response is acceptable. If NTSB is delayed in communicating with an agency about whether NTSB considers the agency’s proposed actions to address recommendations acceptable, that agency could delay implementing a course of action pending approval. In fiscal year 2010, NTSB replaced a lengthy, paper-based process with an automated system—the Correspondence, Notation, and Safety Recommendation system (CNS)—intended to facilitate the recommendation close-out process by electronically storing and automatically routing agencies’ proposals to the appropriate NTSB reviewers, allowing for concurrent reviews by multiple parties within NTSB, and more accurately tracking responses. It is important to note that an agency is not necessarily restricted from implementing action prior to formal NTSB approval of that action. Depending on the complexity of the issue, agencies may begin to address issues prior to NTSB’s providing formal approval. In other circumstances, NTSB addresses safety deficiencies immediately, before the completion of an investigation. For example, during the course of the TWA flight 800 investigation, NTSB issued an urgent safety recommendation once it was determined that an explosion in a fuel tank caused the breakup of the aircraft. Communication. Useful management practices include seeking and monitoring employee attitudes, encouraging two-way communication between employees and management, and incorporating employee feedback into new policies and procedures. This type of communication and collaboration across offices at all levels can improve an agency’s ability to carry out its mission by providing opportunities to share best practices and helping to ensure that any needed input is provided in a timely manner. To this end, NTSB managers and board members began holding periodic meetings with staff, conducting outreach to regional offices, and surveying staff about the effectiveness of communication techniques. Diversity management. Implementing a diversity management strategy and building a more diverse workforce help foster a work environment that not only empowers and motivates people to contribute to the mission but also provides accountability and fairness for all employees. Diversity management helps an organization create and maintain a positive work environment where the similarities and differences of individuals are valued, so that all can reach their potential and maximize their contributions to the organization’s strategic goals. 
NTSB has developed diversity training courses and held events to educate staff on diversity and inclusiveness issues, created career development and mentoring programs to create upward mobility, targeted its recruitment program to reach a more diverse pool of applicants, and surveyed staff to assess the effectiveness of its efforts. Financial management. Sound financial management is crucial for responsible stewardship of federal resources. Traditionally, government financial systems and government managers have focused on tracking how agencies spend their budgets but did not focus on assessing the costs of activities to achieve efficiencies. More recently however, some government agencies have adopted cost accounting systems that track the cost of providing a service—in NTSB’s case, an accident investigation. In 2006, GAO recommended that NTSB develop a cost accounting system to track the amount of time employees spend on each investigation and other activities. This approach allows management to link the cost of providing a service directly with the budget and allocate resources based on those costs. To determine the costs associated with conducting accident investigations, NTSB launched a time and attendance program tied to its cost accounting platform that allows the agency to collect and analyze labor and certain other costs associated with individual investigations. Investigators account for their time on investigations through the time and attendance system using specific codes that identify different investigations. Our analysis found varying degrees of improvement associated with NTSB’s actions in each of the management and operational areas we selected for review. Our analysis showed that NTSB improved the utilization of the Training Center, which allowed it to recover a larger portion of its operating costs. NTSB increased utilization of both classroom and non-classroom space at the Training Center since we conducted our work in 2006. NTSB subleased all available office space at the Training Center to the Federal Air Marshal Service in 2007, and utilization of non-classroom spaces has been at 95 percent since then. At the same time, NTSB increased utilization of the classroom space, increasing its own use of classrooms, subleasing approximately one-third of the classroom space to the Department of Homeland Security in 2008, and providing short term leases to other outside parties for classroom use. Subsequently, NTSB reported classroom utilization rose from less than 10 percent in 2005 to 18 percent in fiscal year 2007. By fiscal year 2009, it had increased to over 60 percent—the target we identified in our 2008 report as the appropriate minimum level. Classroom utilization has remained above 60 percent through fiscal year 2012. We also found that improved Training Center utilization generated additional revenue over time, which allowed NTSB to recover a larger portion of the facility’s operating costs. When the Training Center first opened in fiscal year 2004, NTSB recovered about 4 percent of its operating costs, resulting in a deficit of nearly $6.3 million. Portions of the Training Center’s costs that are not covered by revenues from tuition and other sources such as facility rentals are offset by general appropriations to the agency; therefore, generating additional revenue makes those appropriated funds available for other uses. In 2011, NTSB indicated that it was committed to improving cost recovery at the Training Center. 
That year, the agency set a goal of recovering Training Center costs within 10 percent of the amount recovered in the previous fiscal year. For example, in fiscal year 2010, NTSB recovered $2 million in operating costs, making the fiscal year 2011 goal to recover at least $1.8 million of the Training Center's costs. NTSB achieved its goal in fiscal years 2011 and 2012, by which time the agency was recovering about half of the Training Center's operating costs, reducing the operating deficit at the Training Center to $2.1 million, one-third of what it was in 2004. (See fig. 1 for changes in the Training Center's expenses and revenues.)

The automation of NTSB's recommendation follow-up process has reduced the amount of time it takes to formally respond to agencies about whether planned actions to implement an NTSB recommendation are acceptable. In fiscal year 2010, NTSB deployed the previously described CNS to manage the Board's correspondence, including accident reports, safety studies, recommendation transmittals, and public notice responses. CNS allows the relevant modal offices and the Research and Engineering Office to simultaneously review and assess planned actions to address NTSB recommendations. According to NTSB officials, prior to the implementation of CNS, the average time NTSB took to respond to an agency's proposals to address an NTSB recommendation was 216 days. After CNS was implemented, that figure dropped to 115 days—a reduction of 47 percent. At the same time, the number of responses the agency issued each quarter also increased. (See fig. 2.) NTSB officials have indicated that they have an internal goal to further reduce the response time to 90 days on average.

Our analysis of NTSB's efforts to improve employee and management communication indicated uneven results; specifically, we observed improvements in some but not all measures. We reviewed NTSB employees' responses to the three federal survey questions we determined related to employees' perceptions about managers' communication, as described below:

Managers communicate the goals and priorities of the organization. NTSB respondents increased their positive responses to this question, from about 49 percent in 2004 to 57 percent in 2012. (See fig. 3.) We compared NTSB employees' responses with those of employees in a group of small federal and independent agencies and found that NTSB employees' satisfaction level increased, while the proportion of employees from small agencies responding positively to this question during the same period was relatively unchanged, from 57 percent in 2004 to 59 percent in 2012.

How satisfied are you with the information you receive from management on what's going on in your organization? Responses to this question indicated an increase in the level of satisfaction, from 44 percent in 2004 to 49 percent in 2012. NTSB employees' responses were similar to the positive responses by employees from small agencies, which showed an increase from 44 percent to 50 percent.

Managers promote communication among different work units (for example, about projects, goals, needed resources). The proportion of respondents reporting positive responses on this survey question from 2004 to 2012 was relatively unchanged, from 48 percent to 50 percent. Similarly, there was little change in the proportion of positive responses reported by federal employees from small agencies, from 50 percent in 2004 to 49 percent in 2012.
NTSB officials stated that their internal communication surveys, which the agency administered from 2009 through 2011, provided information that helped them identify continuing barriers to employee and management communication. In 2012, NTSB developed an action plan in this area that included detailed activities, target dates, and regular status reports. Furthermore, because of lingering concerns, NTSB continues to monitor employees' views about employee and management communication to address any remaining weaknesses.

Our analysis of outcomes associated with NTSB's efforts to improve its diversity management program also indicated uneven results, with improvements in some measures but not others. We reviewed NTSB employees' responses to the three federal survey questions that we determined related to employees' perceptions about managers' diversity and inclusiveness efforts, as described below:

My supervisor/team leader is committed to a workforce representative of all segments of society. The federal survey of NTSB employees indicated an increase in positive responses, from 54 percent in 2004 to about 71 percent in 2012. (See fig. 4.) We compared NTSB employees' responses to those of employees in a group of small federal and independent agencies and found that NTSB employees were more positive in 2012 than the employees from small agencies, whose positive responses rose from 57 percent in 2004 to 69 percent in 2012.

Policies and programs promote diversity in the workplace. Again, NTSB employees' responses to this question indicated an increase from 2004 to 2012, from 55 percent to 73 percent. NTSB employees' satisfaction level exceeded that of employees from small agencies, whose positive responses to this question stayed about the same, from 56 percent in 2004 to 57 percent in 2012.

Managers/supervisors/team leaders work well with employees of different backgrounds. The proportion of NTSB employees reporting positive responses on this survey question declined from 66 percent in 2004 to 60 percent in 2012. The decline in NTSB employees' level of satisfaction was greater than that shown by employees from small agencies, whose positive responses also declined, from 66 percent in 2004 to 63 percent in 2012.

One of the potential outcomes of a robust diversity management program is an increase in the diversity of the workforce. Based on our analysis of NTSB's workforce diversity data, we found that the proportion of white employees in NTSB's workforce declined from 77 percent in 2008 (289 employees) to 73 percent (293 employees) in 2012. (See table 1.) NTSB's total workforce increased 6 percent over the same period, from 378 employees in 2008 to 402 in fiscal year 2012. The proportion of women remained roughly the same at about 40 percent, as did the proportion of African American employees at 17 percent and Hispanic employees at 2 percent of the total NTSB workforce. Conversely, NTSB increased the number of employees who are Native American, although these employees represent only 2 percent of the overall workforce. We compared these figures to those representing comparative groups in the civilian labor force and found that NTSB's labor force had a larger proportion of some minority groups (e.g., African American) and a smaller proportion of other groups (e.g., Hispanic) than the civilian labor force. Roughly half of NTSB's workforce performs investigations and investigation-related work—work directly related to NTSB's core mission.
In 2008, there were 190 investigators and related staff; in 2010, there were 198; and in 2012, there were 206. Based on our analysis, we found that the proportion of investigator and investigation-related staff who were white was about 90 percent over the period 2008 to 2012, and the proportion who were women was about 19 percent. We compared these figures to those representing comparative groups in the civilian labor force and found that NTSB's investigator and investigation-related workforce had smaller proportions of African American, Asian, Hispanic, and women employees than the civilian labor force. In addition, NTSB reported that from fiscal years 2008 to 2012, it had no minority group members among its 15 senior executives, although the number of women increased from 3 to 4. Despite its efforts, NTSB has not been able to appreciably change its diversity profile for minority group members and women. However, as mentioned previously, NTSB has taken steps to implement initiatives as a result of its diversity management strategy, including its recently completed diversity and inclusiveness survey, which the agency plans to use to identify gaps in its diversity and inclusiveness efforts and to benchmark future progress. It is too soon to tell whether initiatives, such as its recruitment strategies, will lead to additional changes in its workforce diversity profile.

We were unable to determine if NTSB's cost accounting system had improved the agency's ability to make operational decisions because the agency has not yet fully utilized the system for its intended purposes. For a cost accounting system to be effective, it must be tailored to the needs of the organization, be a tool managers can use to make everyday decisions, and be based on sound data that capture time spent on all activities, such as investigations and training. In 2011, in response to a GAO recommendation, NTSB implemented a cost accounting system that includes a time and attendance program. This program allows an investigator to assign his or her time to specific investigations through a series of codes, allowing NTSB to assess the cost of conducting investigations rather than simply tracking and managing a budget. NTSB officials stated that the time and attendance system has allowed the agency to obtain information about the cost of investigations more efficiently than its prior method. In a May 2011 advisory, NTSB management envisioned that the cost accounting system would enable NTSB to measure and compare performance with other organizations, and that the data from the system would help the agency monitor and improve productivity and mission effectiveness by better utilizing personnel resources. However, officials provided no time frame for when the data might be used by management for making resource and operational decisions. NTSB officials stated that they are currently focused on ensuring the quality of the time and attendance data before developing goals, targets, and management tools or using such information to make resource or operational decisions. While ensuring the quality of data is a necessary step in fully implementing a cost accounting system, it has been over 2 years since NTSB first began collecting time and attendance data to establish the cost of conducting investigations. As a result, NTSB is not using the system's full capabilities. Thus, NTSB has not yet fully achieved its vision of using the data to improve labor productivity and mission effectiveness.
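To illustrate the kind of analysis that coded time and attendance data can support, the sketch below aggregates labor hours by investigation code and applies hourly rates to estimate a labor cost per investigation. It is a minimal, hypothetical example: the codes, employees, hours, and rates are invented for illustration and do not represent NTSB's actual system or data.

```python
from collections import defaultdict

# Hypothetical time and attendance entries: (employee, activity code, hours).
# Codes and hours are illustrative only; they do not reflect NTSB data.
time_entries = [
    ("investigator_a", "DCA12MA001", 32.0),
    ("investigator_a", "TRAINING", 8.0),
    ("investigator_b", "DCA12MA001", 40.0),
    ("investigator_b", "ERA12LA077", 24.0),
]

# Hypothetical fully loaded hourly labor rates by employee.
hourly_rates = {"investigator_a": 95.0, "investigator_b": 88.0}

def labor_cost_by_code(entries, rates):
    """Sum labor cost (hours times rate) for each activity code."""
    costs = defaultdict(float)
    for employee, code, hours in entries:
        costs[code] += hours * rates[employee]
    return dict(costs)

if __name__ == "__main__":
    for code, cost in sorted(labor_cost_by_code(time_entries, hourly_rates).items()):
        print(f"{code}: ${cost:,.2f}")
```

Aggregating coded labor hours in this way, and combining the results with nonlabor costs, is what would give managers the per-investigation cost information envisioned for resource and operational decisions.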
NTSB has implemented a cost accounting system, but effective utilization is required to achieve the long-term rewards of those efforts. Although NTSB has been collecting data from this system to account for costs of investigations, it has not yet developed a management strategy that would allow the agency to maximize the utility of the cost accounting system. This has prevented NTSB from using that information to make the decisions necessary to better manage its labor resources. Without fully utilizing the cost accounting system, NTSB will not achieve the intended benefits of improving labor productivity and mission effectiveness. NTSB needs to continue its improvement efforts in each of the five areas discussed in this report. Further, to improve financial management and provide information to managers for operational decisions, we recommend that the Chairman of the NTSB direct senior management to develop a strategy for maximizing the utility of NTSB’s cost accounting system. We provided a draft of this report to NTSB for its review and comment. The agency provided written comments (see app. II). NTSB agreed with our recommendation and provided technical clarifications that we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Chairman of the National Transportation Safety Board. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834. Contact points for Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objective in this review was to assess whether there have been management and operational improvements associated with the National Transportation Safety Board’s (NTSB) actions in areas where GAO had conducted previous work and made recommendations. In that previous work, GAO made 21 recommendations within 12 management and operational areas. Some areas have more than one recommendation. For example, the communication area has two related recommendations; the first called for NTSB to develop mechanisms to facilitate communication from staff to management and the other called for NTSB to report to Congress on the status of the GAO recommendations. To determine whether there were any associated improvements based on NTSB’s actions, we first identified a subset of the 12 original management and operational areas that were included in NTSB’s 2013-2016 Strategic Plan. Being highlighted in NTSB’s current strategic plan indicates that these areas were relevant and important areas for the agency. We identified 7 of the 12 original management and operational areas in NTSB’s current strategic plan. These 7 areas contained 13 of the 21 original recommendations. Many of the 13 recommendations within these areas required only a single action by NTSB to implement (e.g. 
develop a strategic plan that follows performance-based practices, articulate risk-based criteria for selecting which accidents to investigate, or obtain authority to use appropriations to make lease payments in order to correct its violation of the Anti-Deficiency Act), while others included strategies involving continual action and monitoring over time in order to achieve desired improvements (e.g., increasing utilization of the Training Center and improving the recommendation close-out process). It is these latter types of recommendations that we focused on in this review, and based on these criteria, we identified 5 for review: increase the utilization of the Training Center; improve the process for changing the status of recommendations through computerization and concurrent review; develop mechanisms to facilitate communication from staff to management; develop strategies for diversity management as part of the human capital plan; and develop a full cost accounting system to track the time employees spend on each investigation and in training. We identified a sixth recommendation—maximize the delivery of core curriculum for each mode at the Training Center—based on our selection criteria. However, we did not include this recommendation in our review because NTSB lacked information about employee training and we could not identify an outcome measure without such data. To identify the outcome measures to assess changes associated with NTSB's actions, we used information from prior GAO reports; information from NTSB, including strategic plans, workforce reports, and financial reports; and information gathered from interviews with NTSB officials. The measures we identified for each recommendation are: (1) Training Center utilization—utilization of classroom and non-classroom space and operating deficit; (2) recommendation close-out process—average time to respond to agency proposals; (3) employee and management communication—employee responses to the Office of Personnel Management's (OPM) federal employee surveys; (4) diversity management—employee responses to OPM's federal employee surveys and NTSB employment levels of women and members of racial and ethnic groups; and (5) financial management—cost accounting reports used to measure performance. To measure whether there was improvement in the outcomes and results associated with NTSB's actions, we compared current conditions with the initial conditions at the time we performed our previous work, or earlier in some instances, in order to establish a baseline before actions occurred. We used NTSB's financial and program data, employee survey data from OPM, and workforce data from NTSB and the Bureau of Labor Statistics (BLS). To ensure the data used were of sufficient reliability for our analysis, we examined program reporting procedures and quality assurance controls, and we discussed various data elements with knowledgeable agency officials. We also spoke with NTSB officials who were knowledgeable about operations and management in these five selected areas. In addition to the contact named above, H. Brandon Haller (Assistant Director), Christopher Jones, Gail Marnik, Josh Ormond, Amy Rosewarne, and Jack Warner made key contributions to this report.

National Transportation Safety Board: Implementation of GAO Recommendations. GAO-12-306R. Washington, D.C.: January 6, 2012.

National Transportation Safety Board: Issues Related to the 2010 Reauthorization. GAO-10-366T. Washington, D.C.: January 27, 2010.
National Transportation Safety Board: Reauthorization Provides an Opportunity to Focus on Implementing Leading Management Practices and Addressing Human Capital and Training Center Issues. GAO-10-183T. Washington, D.C.: October 29, 2009.

National Transportation Safety Board: Progress Made in Management Practices, Investigation Priorities, Training Center Use, and Information Security, But These Areas Continue to Need Improvement. GAO-08-652T. Washington, D.C.: April 23, 2008.

National Transportation Safety Board: Observations on the Draft Business Plan for NTSB's Training Center. GAO-07-886R. Washington, D.C.: June 14, 2007.

National Transportation Safety Board: Progress Made, Yet Management Practices, Investigation Priorities, and Training Center Use Should Be Improved. GAO-07-118. Washington, D.C.: November 22, 2006.
The NTSB plays a vital role in transportation safety. It is charged with investigating all civil aviation accidents in the United States and selected accidents in other transportation modes, determining the probable cause of these accidents, and making appropriate recommendations, as well as performing safety studies. In 2006, NTSB's reauthorization legislation mandated that GAO annually evaluate its programs. From 2006 to 2008, GAO made 21 recommendations to NTSB aimed at improving management and operations across several areas. Since that time, NTSB has taken action to address all 21 recommendations. Some of these were completed by requiring only a single action, whereas others required continuing effort to achieve operational improvement. For this review, GAO examined the extent to which desired outcomes are being achieved in five areas where continuing effort was necessary. GAO analyzed workforce, financial, and program data, and interviewed agency officials about actions NTSB has taken. GAO's analysis found varying degrees of improvement associated with the National Transportation Safety Board's (NTSB) actions in areas selected for review. • Training Center utilization. NTSB has increased utilization of its Training Center—both non-classroom and classroom space—since 2006. NTSB has also set and achieved its cost recovery goal at the Training Center in the last 2 fiscal years, allowing NTSB to recover about half of the Training Center's operating costs. • Recommendation close-out process. By automating the recommendation follow-up process, NTSB has reduced by about 3 months the amount of time it takes to respond to agencies on whether planned actions to implement NTSB recommendations are acceptable; this allows agencies to move forward with approved actions sooner than under NTSB's former paper-driven process. • Communication. NTSB employees' responses on federal employee surveys from 2004 to 2012 indicated an increase from 49 to 57 percent in employees' positive responses regarding managers' communication about agency goals, and from 44 to 49 percent regarding the amount of information received. GAO compared NTSB employees' responses to those of employees from a group of small agencies and found that NTSB employees' satisfaction level was about the same or more positive, depending on the question. NTSB officials continue to monitor employees' views about communication to address any remaining concerns. • Diversity management. NTSB employees' positive responses to the federal employee survey questions about managers' commitment to diversity and NTSB's diversity policies and programs increased from about 54 percent to over 70 percent from 2004 to 2012. However, employees' positive responses to the question about managers' ability to work well with employees with different backgrounds declined 6 percentage points over the same period. In addition, the proportion of minority and women employees in NTSB's workforce, including in its investigator staff, showed little appreciable change over the period 2008 to 2012. NTSB's workforce had a smaller proportion of some minority groups than the civilian labor force. NTSB officials are using results from their recent diversity survey to identify gaps in their diversity management efforts and to benchmark future progress. It is too soon to tell whether NTSB's actions will lead to additional changes in its workforce diversity profile. • Financial management.
To improve operational effectiveness, NTSB has implemented a cost accounting system that includes a time and attendance program to track staff hours and costs related to accident investigations. NTSB is currently focused on ensuring the quality of the time and attendance data, but has not yet developed a strategy to maximize the utility of its cost accounting system for making resource and operational decisions. Thus, NTSB has not yet fully achieved its vision of using the data to improve labor productivity and mission effectiveness. In each of the five areas NTSB needs to continue its improvement efforts. Further, GAO recommends that NTSB senior managers develop a strategy for maximizing the utility of NTSB's cost accounting system. GAO provided a draft of this report to officials at NTSB. NTSB officials concurred with the recommendation and provided technical comments, which GAO incorporated as appropriate.
DOE has broadly indicated the direction of the LGP but has not developed all the tools necessary to evaluate progress. DOE officials have identified a number of broad policy goals that the LGP is intended to support, including helping to ensure energy security, mitigate climate change, jumpstart the alternative energy sector, and create jobs. Additionally, through DOE’s fiscal year 2011 budget request and a mission statement for the LGP, the department has explained that the program is intended to support the “early commercial production and use of new or significantly improved technologies in energy projects” that “avoid, reduce, or sequester air pollutants or anthropogenic emissions of greenhouse gases, and have a reasonable prospect of repaying the principal and interest on their debt obligations.” To help operationalize such policy goals efficiently and effectively, principles of good governance identified in our prior work on GPRA indicate that agencies should develop associated performance goals and measures that are objective and quantifiable. These performance goals and measures are intended to allow comparison of programs’ actual results with the desired results. Each program activity should be linked to a performance goal and measure unless such a linkage would be infeasible or impractical. DOE has linked the LGP to two departmentwide performance goals: “Double renewable energy generating capacity (excluding conventional hydropower) by 2012.” “Commit (conditionally) to loan guarantees for two nuclear power facilities to add new low-carbon emission capacity of at least 3,800 megawatts in 2010.” DOE has also established nine performance measures for the LGP (see app. II). However, the departmentwide performance goals are too few to reflect the full range of policy goals for the LGP. For example, there is no measurable performance goal for job creation. The performance goals also do not reflect the full scope of the program’s authorized activities. For example, as of April 2010, DOE had issued two conditional commitments for energy efficiency projects—as authorized in legislation—but the energy efficiency projects do not address either of the performance goals because the projects are expected to generate little or no renewable energy and are not associated with nuclear power facilities. Given the lack of sufficient performance goals, DOE cannot be sure that the LGP’s performance measures are appropriate. Thus, DOE lacks the foundation to assess the program’s progress, and more specifically, to determine whether the projects it supports with loan guarantees contribute to achieving the desired results. As the LGP’s scope and authority have increased, the department has taken a number of steps to implement the program for applicants. For example, DOE has substantially increased the LGP’s staff and in-house expertise, and applicants we interviewed have commended the LGP staff’s professionalism. DOE officials indicated that, prior to 2008, staffing was inadequate to review applications, but since June 2008, the LGP’s staff has increased from 12 federal employees to more than 50, supported by over 40 full-time contractor staff. Also, the LGP now has in-house legal counsel and project finance expertise, which have increased the program’s capacity to evaluate proposed projects. In addition, in November 2009, the Secretary named an Executive Director, reporting directly to the Secretary, to oversee the LGP and to accelerate the application review process. 
Other key steps that DOE has taken include the following: DOE has identified a list of external reviewers qualified to perform legal, engineering, financial, and marketing analyses of proposed projects. Identifying these external reviewers beforehand helps to ensure that DOE will have the necessary expertise readily available during the review process. DOE officials said that the department has also expedited the procurement process for hiring these external reviewers. DOE developed a credit policies and procedures manual for the LGP. Among other things, the manual contains detailed internal policies and procedures that lay out requirements, criteria, and staff responsibilities for determining which proposed projects should receive loan guarantees. DOE revised the LGP’s regulations after receiving information from industry concerning the wide variety of ownership and financing structures that applicants or potential applicants would like to employ in projects seeking loan guarantees. Among other things, the modifications allow for ownership structures that DOE found are typically employed in utility-grade power plants and are commonly proposed for the next generation of nuclear power generation facilities. DOE obtained OMB approval for its model to estimate credit subsidy costs. The model is a critical tool needed for the LGP to proceed with issuing loan guarantees because it will be used to calculate each loan guarantee’s credit subsidy cost and the associated fee, if any, that must be collected from borrowers. (We are evaluating DOE’s process and key inputs for estimating credit subsidy costs in other ongoing work.) Notwithstanding these actions, the department is implementing the program in a way that treats applicants inconsistently, lacks systematic mechanisms for applicants to appeal its decisions or for applicants to provide feedback to DOE, and risks excluding some potential applicants unnecessarily. Specifically, we found the following: DOE has treated applicants inconsistently. Although our past work has shown that agencies should process applications with the goals of treating applicants fairly and minimizing applicant confusion, DOE’s implementation of the program has favored some applicants and disadvantaged others in a number of ways. First, we found that, in at least five of the ten cases in which DOE made conditional commitments, it did so before obtaining all of the final reports from external reviewers, allowing these applicants to receive conditional commitments before incurring expenses that other applicants were required to pay. Before DOE makes a conditional commitment, LGP procedures call for engineering, financial, legal, and marketing reviews of proposed projects as part of the due diligence process for identifying and mitigating risk. If DOE lacks the in-house capability to conduct the reviews, external reviews are performed by contractors paid for by applicants. In one of the cases we identified in which DOE deviated from its procedures, it made a conditional commitment before obtaining any of the external reports. DOE officials told us this project was fast-tracked because of its “strong business fundamentals” and because DOE determined that it had sufficient information to proceed. 
However, it is unclear how DOE could have had sufficient information to negotiate the terms of a conditional commitment without completing the types of reviews generally performed during due diligence, and proceeding without this information is contrary to the department’s procedures for the LGP. Second, DOE treats applicants with nuclear projects differently from applicants proposing projects that employ other types of technologies. For example, DOE allows applicants with nuclear projects that have not been selected to begin the due diligence process to remain in a queue in case the LGP receives additional loan guarantee authority, while applicants with projects involving other types of technologies that have not been selected to begin due diligence are rejected (see app. III). In order for applicants whose applications were rejected to receive further consideration, they must reapply and again pay application fees, which range from $75,000 to $800,000 (see app. IV). DOE also provided applicants with nuclear generation projects information on how their projects ranked in comparison with others before they submitted part II of the application and 75 percent of the application fees. DOE did not provide rankings to applicants with any other types of projects. DOE officials said that applicants with nuclear projects were allowed to remain in a queue because of the expectation that requests would substantially exceed available loan guarantee authority and that the applications would be of high quality. According to DOE officials, they based this expectation on information available about projects that are seeking licenses from the Nuclear Regulatory Commission. DOE officials also explained that they ranked nuclear generation projects for similar reasons—and also to give applicants with less competitive projects the chance to drop out of the process early, allowing them to avoid the expense involved in applying for a loan guarantee. However, all of the solicitations issued through 2008 initially received requests that exceeded the available loan guarantee authority (see app. V), so nuclear projects were not unique in that respect. In addition, applicants with coal-based power generation and industrial gasification facility projects paid application fees equivalent to those paid by applicants with nuclear generation projects but were not given rankings prior to paying the second application fee (see app. IV). To provide EERE applicants with earlier feedback on the competitiveness of their projects, DOE instituted a two-part application for the 2009 EERE solicitation—a change from the 2008 EERE solicitation. DOE officials stated that they made this change based on lessons learned from the 2008 EERE solicitation. While this change appears to reduce the disparity in treatment among applicants, it remains to be seen whether DOE will make similar changes for projects that employ other types of technologies. Third, DOE has allowed one of the front-end nuclear facility applicants that we contacted additional time to meet technical and financial requirements, including requirements for evidence that the technology is ready to move to commercial-scale operations, but DOE has rejected applicants with other types of technologies for not meeting similar technical and financial criteria. DOE has not provided analysis or documentation explaining why additional time was appropriate for one project but not for others. 
DOE lacks systematic mechanisms for applicants to appeal its decisions or provide feedback to DOE. In its solicitations, DOE states that a rejection is “final and non-appealable.” Once a project has been rejected, the only administrative option left to an applicant under DOE’s documented procedures is to reapply and incur all of the associated costs. Nevertheless, DOE said that, as a courtesy, it had rereviewed certain rejected applications. Some applicants did not know that DOE would provide such rereviews, which appear contrary to DOE’s stated policy and have been conducted on an ad hoc basis. DOE also lacks a systematic mechanism for soliciting, evaluating, and incorporating feedback from applicants about its implementation of the program. Our past work has shown that agencies should solicit, evaluate, and incorporate feedback from program users to improve programs. Unless they do so, agencies may not attain the levels of user satisfaction that they otherwise could. For example, during our interviews with applicants, more than half said they received little information about the timing or status of application reviews. Applicants expressed a desire for more information about the status of DOE’s reviews and said that not knowing when a loan guarantee might be issued created difficulties in managing their projects—for example, in planning construction dates, knowing how much capital they would need to sustain operations, and maintaining support for their projects from internal stakeholders. According to DOE officials, the department has reached out to stakeholders through its Web site, presentations to industry groups and policymakers, and other means. DOE has also indicated that it has changed the program to make it more user-friendly, based on lessons learned and applicant feedback. For example, unlike the 2008 EERE solicitation, the 2009 EERE solicitation includes rolling deadlines that give applicants greater latitude in when to submit their applications; a simplified part I application that provides a mechanism for DOE to give applicants early feedback on whether their projects are competitive; and delayed payment of the bulk of the “facility fee” that DOE charges applicants to cover certain program costs. While DOE said that these changes were based, in part, on feedback from applicants, because DOE has no systematic way of soliciting applicant feedback, the department has no assurance that the views obtained through its outreach efforts are representative, particularly since the means that DOE uses to obtain feedback do not guarantee anonymity. The department also has no assurance that the changes made in response to feedback are effectively addressing applicant concerns. DOE risks excluding some potential applicants. Even though the Recovery Act requires that applicants begin construction by the end of fiscal year 2011 to qualify for Recovery Act funding, DOE has not yet issued solicitations for the full range of projects eligible for Recovery Act funding under section 1705. DOE has issued two solicitations specific to the Recovery Act for the LGP, but neither invites applications for commercial manufacturing projects, which are eligible under the act. While DOE has announced that it will issue an LGP solicitation for commercial manufacturing projects, it has given no date for doing so. 
The 2009 EERE solicitation provided an opportunity for some manufacturing applicants to receive Recovery Act funding, but because DOE combined the Recovery Act’s requirements with the original section 1703 requirements, applicants with commercial manufacturing projects were excluded. DOE officials told us that they combined the requirements to ensure that projects that are initially eligible under section 1705 but that fail to start construction by the deadline can remain in the LGP under section 1703. DOE has made substantial progress in building a functional program for issuing loan guarantees under Title XVII of EPAct; however, it may not fully realize the benefits envisioned for the LGP until it further improves its ability to evaluate and implement the program. Since 2007, we have been reporting on DOE’s lack of tools necessary to evaluate the program and process applications and recommending that the department take steps to address these areas. While DOE has identified broad policy goals and developed a mission statement for the program, it will lack the ability to implement the program efficiently and effectively and to evaluate progress in achieving these goals and mission until it develops corresponding performance goals. As a practical matter, without such goals, DOE will also lack a clear basis for determining whether the projects it decides to support with loan guarantees are helping achieve the desired results, potentially undermining applicants’ and the public’s confidence in the legitimacy of those decisions. Such confidence could also be undermined by implementation processes that do not treat applicants consistently—unless DOE has clear and compelling grounds for disparate treatment—particularly if DOE skips steps in its review process prior to issuing conditional commitments or rereviews rejected applications for some applicants without having an administrative appeal process. Furthermore, while DOE has taken steps to increase applicants’ satisfaction with the program, it cannot determine the effectiveness of those efforts without systematic feedback from applicants that preserves their anonymity. To improve DOE’s ability to evaluate and implement the LGP, we recommend that the Secretary of Energy take the following four actions: Direct the program management to develop relevant performance goals that reflect the full range of policy goals and activities for the program, and to the extent necessary, revise the performance measures to align with these goals. Direct the program management to revise the process for issuing loan guarantees to clearly establish what circumstances warrant disparate treatment of applicants so that DOE’s implementation of the program treats applicants consistently unless there are clear and compelling grounds for doing otherwise. Direct the program management to develop an administrative appeal process for applicants who believe their applications were rejected in error and document the basis for conclusions regarding appeals. Direct the program management to develop a mechanism to systematically obtain and address feedback from program applicants, and, in so doing, ensure that applicants’ anonymity can be maintained, for example, by using an independent service to obtain the feedback. We provided a draft of this report to DOE for review and comment. 
In its written comments, DOE stated that it recognizes the need for continuous improvement to its Loan Guarantee Programs as those programs mature but neither explicitly agreed nor disagreed with our recommendations. In one instance, DOE specifically disagreed with our findings: the department maintained that applicants are treated consistently within solicitations. Nevertheless, the department stated that it is taking steps to address concerns identified in our report. Specifically, DOE pointed to the following recent or planned actions: Performance goals and measures. DOE stated that, in the context of revisions to its strategic plan, the department is revisiting the performance goals and measures for the LGP to better align them with the department’s policy goals of growing the green economy and reducing greenhouse gases from power generation. Consistent treatment of applicants. DOE recognized the need for greater transparency to avoid the perception of inconsistent treatment and stated that it will ensure that future solicitations explicitly describe circumstances that would allow streamlined consideration of loan guarantee applications. Appeals. DOE indicated that its process for rejected applications should be made more transparent and stated that the LGP continues to implement new strategies intended to reduce the need for any kind of appeals, such as enhanced communication with applicants including more frequent contact, and allowing applicants an opportunity to provide additional data at DOE’s request to address deficiencies DOE has identified in applications. While these actions are encouraging, they do not fully address our findings, especially in the areas of appeals and applicant feedback. We continue to believe that DOE needs systematic mechanisms for applicants to appeal its decisions and to provide anonymous feedback. DOE’s written comments on our findings and recommendations, along with our detailed responses, are contained in appendix VI. In addition to the written comments reproduced in that appendix, DOE provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. This report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. To assess the extent to which the Department of Energy (DOE) has identified what it intends to achieve through the Loan Guarantee Program (LGP) and is positioned to evaluate progress, we reviewed and analyzed relevant provisions of Title XVII of the Energy Policy Act of 2005 (EPAct), the American Recovery and Reinvestment Act of 2009 (Recovery Act); DOE’s budget request documents; and Recovery Act planning information, as well as other documentation provided by DOE. We discussed strategic planning and program evaluation with cognizant DOE officials from the LGP office, the Office of the Secretary of Energy, the Office of the Chief Financial Officer, and the Credit Review Board (CRB) that is charged with coordinating credit management and debt collection activities as well as overall policies and procedures for the LGP. 
As criteria, we used the Government Performance and Results Act (GPRA), along with our prior work on GPRA. To evaluate DOE's implementation of the LGP for applicants, we reviewed relevant legislation, such as EPAct and the Recovery Act; DOE's final regulations and concept of operations for the LGP; solicitations issued by DOE inviting applications for loan guarantees; DOE's internal project tracking reports; technical and financial review criteria for the application review process; minutes from CRB meetings held between February 2008 and November 2009; applications for loan guarantees; application rejection letters issued by DOE; and various other DOE guidance and procurement documents related to the process for issuing loan guarantees. We interviewed cognizant DOE officials from the LGP office, the Office of the Secretary of Energy, the Office of the Chief Financial Officer, the Office of Headquarters Procurement Services, and program offices that participated in the technical reviews of projects, including the Office of Electricity Delivery and Energy Reliability, the Office of Energy Efficiency and Renewable Energy, the Office of Nuclear Energy, and the National Energy Technology Laboratory (NETL). In addition, we interviewed 31 LGP applicants and 4 trade association representatives, using a standard list of questions for each group, to obtain a broad representation of views that we believe can provide insights to bolster other evidence supporting our findings. We selected the applicants and trade associations using a mix of criteria, including the amount of the loan guarantee requested and the relevant technology. As criteria, we used our prior work on customer service. We did not evaluate the financial or technical soundness of the projects for which applications were submitted. We conducted this performance audit from January 2009 through July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. DOE has developed the following nine performance measures for the LGP: percentage of projects receiving DOE loan guarantees that have achieved and maintained commercial operations; contain the loss rate of guaranteed loans to less than 4 percent; contain the loss rate of guaranteed loans to less than 11.81 percent in fiscal year 2009 (11.85 percent for fiscal years 2010 and 2011) on a long-term portfolio basis; newly installed generation capacity from power generation projects receiving DOE loan guarantees; average cost per megawatthour for projects receiving DOE loan guarantees; forecasted greenhouse gas emissions reductions from projects receiving loan guarantees compared to 'business as usual' energy generation; forecasted air pollutant emissions (nitrogen oxides, sulfur oxides, and particulates) reductions from projects receiving loan guarantees compared to 'business as usual' energy generation; average review time of applications for Section 1705 guarantees; and percentage of conditional commitments issued to qualified applicants relative to plan.

Appendix III: Application Review Process
Appendix IV: Standardized Fees Associated with Obtaining a Loan Guarantee, by Solicitation

The 2006 mixed solicitation invited applications for all technologies eligible to receive loan guarantees under the Energy Policy Act of 2005 except for nuclear facilities and oil refineries.

The following are GAO's comments on the Department of Energy's (DOE) letter dated June 17, 2010.

1. DOE appears to concur with the spirit of our recommendation. Best practices for program management indicate that DOE should have objective, quantifiable performance goals and targets for evaluating its progress in meeting the policy goals DOE has identified for the LGP. Such goals and targets are important tools for ensuring public accountability and effective program management.

2. Our finding about inconsistent treatment of LGP applicants is based on information obtained from applicants and corroborated by documents from DOE. In the instance we identified in which DOE made a conditional commitment before obtaining any of the required external reports, the external reviewers were not fully engaged until after DOE had negotiated the terms of the conditional commitment, which is contrary to DOE's stated procedures and provided an advantage to the applicant. Other applicants who received conditional commitments before completion of one or more of the reports called for by DOE's due diligence procedures also had a comparative advantage in that they were able to defer some review expenses until after DOE had publicly committed to their projects. We continue to believe that DOE should revise the process for issuing loan guarantees to treat applicants consistently unless there are clearly established and compelling grounds for making an exception.

3. We agree that there may be grounds for treating applicants differently depending on the type of technology they employ but do not believe that DOE has adequately explained the basis for the differences among the solicitations. For example, DOE's response does not address the possibility that the lack of ranking information for fossil energy projects, combined with the knowledge that the solicitation was significantly oversubscribed, could have factored into applicants' decisions to drop out of the process, especially given the relatively high fees associated with submitting part II of the application.

4. We disagree that DOE's current process for rereviewing rejected applications is working.
As we state in our report, some applicants did not know that DOE would provide rereviews. While we are encouraged by DOE’s efforts to reduce the need for appeals, we believe that an administrative appeal process would allow DOE to better plan and manage its use of resources on rejected applications. 5. We applaud DOE’s efforts to reach out to stakeholders and to use lessons learned to improve procedures and increase efficiencies and effectiveness. However, we continue to believe that DOE needs a systematic mechanism for applicants to provide anonymous feedback, whether through use of a third party or other means that preserves confidentiality. Several applicants we interviewed expressed concern that commenting on aspects of DOE’s implementation of the LGP could adversely affect their current or future prospects for receiving a loan guarantee. Systematically obtaining and addressing anonymous feedback could enhance DOE’s efforts to improve procedures and increase efficiencies and effectiveness. In addition to the individual named above, Karla Springer, Assistant Director; Marcia Carlsen; Nancy Crothers; Marissa Dondoe; Brandon Haller; Whitney Jennings; Cynthia Norris; Daniel Paepke; Madhav Panwar; Barbara Timmerman; and Jeremy Williams made key contributions to this report.
Since the Department of Energy's (DOE) loan guarantee program (LGP) for innovative energy projects was established in Title XVII of the Energy Policy Act of 2005, its scope has expanded both in the types of projects it can support and in the amount of loan guarantee authority available. DOE currently has loan guarantee authority estimated at about $77 billion and is seeking additional authority. As of April 2010, it had issued one loan guarantee for $535 million and made nine conditional commitments. In response to Congress' mandate to review DOE's execution of the LGP, GAO assessed (1) the extent to which DOE has identified what it intends to achieve through the LGP and is positioned to evaluate progress and (2) how DOE has implemented the program for applicants. GAO analyzed relevant legislation, prior GAO work, and DOE guidance and regulations. GAO also interviewed DOE officials, LGP applicants, and trade association representatives. DOE has broadly indicated the program's direction but has not developed all the tools necessary to assess progress. DOE officials have identified a number of broad policy goals that the LGP is intended to support, including helping to mitigate climate change and create jobs. DOE has also explained, through agency documents, that the program is intended to support early commercial production and use of new or significantly improved technologies in energy projects that abate emissions of air pollutants or of greenhouse gases and have a reasonable prospect of repaying the loans. GAO has found that to help operationalize such policy goals efficiently and effectively, agencies should develop associated performance goals that are objective and quantifiable and cover all program activities. DOE has linked the LGP to two departmentwide performance goals, namely to (1) double renewable energy generating capacity by 2012 and (2) commit conditionally to loan guarantees for two nuclear power facilities to add a specified minimum amount of capacity in 2010. However, the two performance goals are too few to reflect the full range of policy goals for the LGP. For example, there is no performance goal for the number of jobs that should be created. The performance goals also do not reflect the full scope of program activities; in particular, although the program has made conditional commitments to issue loan guarantees for energy efficiency projects, there is no performance goal that relates to such projects. Without comprehensive performance goals, DOE lacks the foundation to assess the program's progress and, more specifically, to determine whether the projects selected for loan guarantees help achieve the desired results. DOE has taken steps to implement the LGP for applicants but has treated applicants inconsistently and lacks mechanisms to identify and address their concerns. Among other things, DOE increased the LGP's staff, expedited procurement of external reviews, and developed procedures for deciding which projects should receive loan guarantees. However, GAO found: (1) DOE's implementation of the LGP has treated applicants inconsistently, favoring some and disadvantaging others. For example, DOE conditionally committed to issuing loan guarantees for some projects prior to completion of external reviews required under DOE procedures. Because applicants must pay for such reviews, this procedural deviation has allowed some applicants to receive conditional commitments before incurring expenses that other applicants had to pay. 
It is unclear how DOE could have sufficient information to negotiate conditional commitments without such reviews. (2) DOE lacks systematic mechanisms for LGP applicants to administratively appeal its decisions or to provide feedback to DOE on its process for issuing loan guarantees. Instead, DOE rereviews rejected applications on an ad hoc basis and gathers feedback through public forums and other outreach efforts that do not ensure the views obtained are representative. Until DOE develops implementation processes it can adhere to consistently, along with systematic approaches for rereviewing applications and obtaining and addressing applicant feedback, it may not fully realize the benefits envisioned for the LGP. GAO recommends that DOE develop performance goals reflecting the LGP's policy goals and activities; revise the loan guarantee process to treat applicants consistently unless there are clear, compelling grounds not to do so; and develop mechanisms for administrative appeals and for systematically obtaining and addressing applicant feedback. DOE said it is taking steps to address GAO's concerns but did not explicitly agree or disagree with the recommendations.
Through the purchase card program, agency personnel can acquire the goods and services they need directly from vendors. GSA, which manages the purchase card program governmentwide, has awarded contracts to banks to provide standard commercial charge cards for use by federal employees. When GSA first pilot-tested the purchase card in the late 1980s, its use was restricted to procurement personnel. In 1994, however, the Federal Acquisition Streamlining Act (FASA) defined micropurchases as purchases in amounts not greater than $2,500. The act authorized cardholders to make micropurchases without obtaining competitive quotations if they considered the price reasonable and directed that purchases be distributed equitably among qualified suppliers. The Federal Acquisition Regulation (FAR) designated the purchase card as the preferred method of making micropurchases. By shifting authority for small purchases from procurement offices to individual cardholders, agencies dramatically improved their ability to acquire quickly and easily items that were needed for day-to-day operations and to reduce administrative costs. Since the passage of FASA, the dollar value of goods and services acquired through the purchase card has exploded, as figure 1 shows. This growth was accompanied by an increase in the number of personnel using the purchase card. Table 1 provides information on fiscal year 2002 purchase card activity for the eight agencies we reviewed. Purchase card transactions at these agencies account for over 85 percent of the government’s purchase card spending. While purchase cards may be used to make payments under established contracts in addition to making micropurchases, an overwhelming majority of transactions are micropurchases. At the eight agencies reviewed, micropurchases represented 98 percent of transactions and accounted for 63 percent of the dollars expended. Appendix VI provides additional information on purchase card activity at the agencies we reviewed. GSA, whose mission is to help federal agencies better serve the public by offering acquisition services at the best value, has created several tools that can help cardholders obtain more favorable pricing for goods and services. The most common of these is the Schedule program, which offers discounted prices on a wide range of commercial goods and services from multiple vendors. The GSA Advantage on-line shopping service allows agencies to compare prices under various Schedule contracts, place orders, and make payments over the Internet. In addition, GSA has awarded contracts that offer federal agencies discounted prices on telecommunications services. Our prior work found that weak internal controls left purchase card use at DOD and several civilian agencies vulnerable to fraud and abuse. The list of Related Products at the end of this report identifies recent work in this area. To address these concerns, Congress has enacted legislation that directs DOD to improve program management by limiting the number of purchase cards, providing appropriate training to purchase card officials and cardholders, monitoring purchase card activity, disciplining cardholders who misuse the purchase card, and assessing the credit worthiness of cardholders. We recently reported that DOD has taken a number of actions to improve the controls over the purchase card program based on congressional action and our recommendations. 
To improve management of the purchase card program governmentwide, the proposed Purchase Card and Travel Card Accountability Act of 2003 would require the Administrator of the Office of Federal Procurement Policy to prescribe a governmentwide policy regarding the appropriate and inappropriate uses of the purchase and travel cards. In addition, the proposed Credit Card Abuse Prevention Act of 2003 would require civilian agencies to promulgate regulations to establish safeguards and internal controls to prevent fraud, misuse, and abuse. Although we found some initiatives under way to obtain vendor discounts from major purchase card vendors, agencies generally had not seized opportunities to obtain more favorable prices on purchase card buys— opportunities that could yield hundreds of millions of dollars in savings. Agency efforts to obtain more favorable prices for purchase cardholders had generally been limited to a few agencywide agreements with major vendors—that is, vendors with whom an agency spent $1 million or more in fiscal year 2002. Further, training for cardholders usually focused on internal controls and regulatory policies and did not provide practical information about steps cardholders can take to get better prices. As a result, cardholders often paid higher prices than necessary. The successful initiatives taken within some agencies demonstrate that, if agencies negotiated effective discount agreements with major purchase card vendors and improved communications to cardholders about how to obtain more favorable prices, significant savings could be realized. We found a wide variation in the number of agencywide discount agreements that the eight agencies we reviewed had negotiated with their major purchase card vendors. For example, Veterans Affairs had negotiated agencywide discount agreements with 37 of its 196 major purchase card vendors—the largest number of any of the agencies reviewed. The Army, Navy, and Air Force each had agencywide agreements with several major information technology vendors and one or more office supply vendors. Agriculture, Interior, and Justice each had a few agencywide agreements, which covered information technology products or office supplies. Transportation’s senior procurement executive told us this agency had no discount agreements that could be used agencywide. As shown in table 2, cardholders at the agencies we reviewed are using the purchase card to a great extent to buy items from major purchase card vendors, an indication that opportunities exist to negotiate additional discount agreements with these vendors. The effectiveness of the agreements that are in place also varied widely, and we found a number of ways in which agencies had not maximized their agreements’ potential to capture additional savings. First, agencies did not always take full advantage of competitive forces to ensure that their discount agreements with large vendors offered the most favorable prices, as shown in the following examples. Second, some agency discount agreements covered a limited range of products and therefore did not provide cardholders more favorable prices on all the items they purchase from a vendor. Overall, in 18 of the 27 transactions we reviewed where agencies had a discount agreement in place, the agreement did not cover the specific items that cardholders purchased, as demonstrated in the following examples. 
Finally, some discount agreements were not well-coordinated within the agency, creating the potential for overlap, as shown in the following example. Representatives of a number of agency components told us that, while they believed that their regional and local organizations had negotiated some discount agreements, they had no information on these agreements. Each of the agencies we reviewed had developed guidance and training programs for their cardholders that focused on regulatory policies and internal controls intended to prevent misuse of the purchase card. However, most of the guidance and training programs did not provide cardholders with practical information to help them get better pricing by using Schedule contracts or agency discount agreements, as in the following examples. Some training programs, however, had successfully communicated practical information to their cardholders on how to seek better prices, as in the following examples. Dun and Bradstreet’s analysis of fiscal year 2002 Interior transactions, conducted on our behalf, illustrates that cardholders frequently paid more than necessary. For example, the company analyzed Interior purchases from three office supply vendors that provided product descriptions along with their purchase card billing information. This analysis showed that ink cartridges were the most frequently purchased product. For one specific model of ink cartridge, 411 of 791 purchases were made at prices higher than the Schedule prices the vendors offered, indicating that cardholders had generally not taken advantage of discounts available through Schedule contracts. The prices paid for the same cartridge model ranged from $20.00 to $34.99. Our review of selected transactions also showed that, because they lacked practical information on how to achieve savings on purchases, cardholders paid more than necessary, as highlighted in the following examples. Some cardholders we talked to were simply unaware of the savings potential of using Schedule contracts or agency discount agreements. Of the transactions we reviewed where items were available through a GSA contract, a number of cardholders were unaware that the items could have been purchased through the GSA contract. Some cardholders appeared to not understand their fundamental responsibility for getting reasonable prices, as in the following examples. Other cardholders purchased products that were not available through the particular vendor’s Schedule contract. Because the cardholders did not consider whether products that met their needs were available from other Schedule vendors, they were unable to take advantage of lower, discounted prices these vendors might have offered, as shown in the following examples. Of the 135 transactions we reviewed, 70 included items that were unavailable through the selected vendor’s GSA contract. Other cardholders appeared to be confused about whether they were getting favorable prices. We also found cardholders who were not aware that they had received significant discounts, as in the following cases. Several agencies or agency components reported significant savings from their initiatives to leverage their buying power by negotiating discount agreements with major vendors, suggesting the potential for significant savings governmentwide. In all cases, the discount agreements are available to cardholders. Several examples follow. 
While the scope of our work did not include developing a governmentwide estimate of the potential savings from leveraging purchase card buying power, these examples indicate that the potential for savings could be significant. Given the range of savings under discount agreements currently in place with major vendors (8 to 35 percent) at the agencies we reviewed, a conservative approach indicates that, if these agencies were to achieve savings of just 10 percent on their purchase card expenditures with major vendors, annual savings of $300 million could be realized. The primary reason that agencies have not taken advantage of potential opportunities to capture savings through the purchase card program is the lack of management focus on this issue. Further, OMB has not leveraged its governmentwide oversight role by collecting and disseminating information on the successful initiatives some agencies have undertaken. In addition, agency officials identified several challenges that, in their view, have hindered them from more aggressively pursuing savings through the purchase card program. First, they noted that the purchase card is intended to streamline buying, and they are reluctant to impose requirements on cardholders that would undermine the simple, quick purchase card buying process. Officials also cited the need to balance governmentwide socioeconomic requirements—including providing opportunities for small businesses and purchasing products manufactured by non-profit agencies for the blind or severely handicapped (referred to as “JWOD” products)—with efforts to get better purchase card prices. Finally, officials noted that little detailed information is available on the specific products and services purchased through the purchase card, hampering efforts to analyze trends in order to achieve more savings. Although agency officials consistently identified these challenges, our review suggests that the challenges are not insurmountable, as evidenced by individual agency initiatives to address them. Agency purchase card managers have yet to turn their attention to capturing opportunities for savings in their purchase card programs. In the mid-1990s, managers were focusing on capturing the savings in administrative costs that use of the purchase card made possible and reengineering administrative processes that discouraged use of the card. In more recent years, our work and the work of agency inspectors general highlighted weaknesses in internal controls that left purchase card use vulnerable to fraud and abuse. Agency managers have made a concerted effort to address these internal control weaknesses, but have not paid similar attention to capitalizing on opportunities for savings on purchase card buys. In general, the agency management structures and processes do not establish departmentwide goals for the effectiveness of micropurchase activity, such as savings goals. To monitor agencies’ progress in implementing better internal controls, OMB requires agencies to report quarterly on such topics as investigations of potential fraud, disciplinary actions for fraudulent or improper card use, and initiatives to improve program management. However, OMB’s reporting requirement does not include gathering information on agency efforts to save money on purchase card buys. Consequently, governmentwide information on opportunities to achieve savings is not available. OMB representatives stated that they would consider the benefits of having agencies share information on leveraging purchasing power. 
They believe that increased focus on purchase card pricing issues is appropriate and mentioned that periodic cross-agency forums, sponsored by GSA, could be one mechanism for agencies to share successes they have had in negotiating discounts with major vendors. They also acknowledged that the currently required quarterly reports could be used to gather information on the steps agencies are taking to better leverage their purchase card buying. Most of the agency officials we met with expressed interest in learning of steps being taken within the government to capture purchase card savings, particularly in light of the challenges discussed below. Several agency officials noted that promoting—or in some cases, requiring—the use of specific vendors with whom they have negotiated discount agreements could hinder cardholders from meeting their needs in the simplest, most expeditious manner. They fear that cardholders, who are generally not procurement officials, would be expected to spend more time seeking better prices—time that should be spent meeting mission requirements. While the FAR requires agencies to obtain reasonable prices, it limits the actions agencies need to take to verify price reasonableness. Given the wide variety of missions that cardholders must meet on a daily basis, they must retain the flexibility to make their purchases in a way that meets their needs. Our work showed that in some cases, such as those shown below, Schedule contracts and discount agreements were not effective in meeting cardholder needs. In these cases, the cardholders took advantage of the purchase card’s flexibility to find other ways to fill their requirements. On the other hand, some cardholders were pleased with the Schedule contracts and agency discount agreements they used. Cardholders were able to easily place orders with the vendor, and the vendor filled their orders promptly and reliably, as in the following examples. GSA is working to further simplify cardholder access to discounted prices. To receive Schedule discounts, cardholders generally must place orders with a vendor through the GSA Advantage on-line shopping service or other designated ordering procedures. Some of GSA’s Schedule contracts, however, provide vendors the option of offering cardholders discounts at the point of sale in the vendors’ retail stores. For example, one GSA contracting officer modified a vendor’s contract to provide for point-of-sale discounts. The vendor then programmed cash registers in its retail stores to recognize a federal government purchase card when a shopper presents one and to apply the appropriate Schedule discount to the shopper’s order. GSA has partnered with DOD purchase card program officials to explore ways to increase the number of vendors that offer point-of-sale discounts to federal purchasers. Civilian agency officials expressed strong interest in this approach to facilitating cardholder access to Schedule discounts. Balancing governmentwide socioeconomic policies—such as providing federal contracting opportunities to small businesses—with initiatives to leverage agency buying power has also been a recurring concern for agencies. Although agencies are not required to reserve micropurchases for award to small businesses, officials we met with repeatedly noted that because large national vendors would be in the best position to win agencywide discount agreements, concerns would be raised that opportunities for small, local vendors could be reduced. 
Officials similarly raised concerns about the effect agencywide discount agreements would have on their ability to meet requirements to purchase JWOD products. Despite these concerns, some agencies have been able to leverage purchasing power while providing opportunities for small businesses, as highlighted in the following examples. Further, agency experience indicates that appropriately structured discount agreements can help ensure that cardholders purchase JWOD products when required, as in the following cases. Agency officials point to the lack of adequate data as a barrier to taking steps to analyze purchase card activity. They raised concerns about their ability to analyze purchase trends due to a lack of detailed information on the specific products and services purchased, known as “level 3” data. The banks that provide the agencies’ purchase cards generally do not have such data. For example, our analysis of Interior’s fiscal year 2002 transaction data indicated that less than 15 percent of all transactions included descriptions of the items and services purchased. Dun and Bradstreet found that many merchants have not invested in the electronic point-of-sale devices needed to transmit item descriptions along with other transaction information. A common reason offered by major vendors for not providing level 3 data is that their customers—the ordering agencies—have not requested it. Agency officials told us, however, that they have made clear to the banks that issue their purchase cards that access to level 3 information would be very helpful to them in gaining an understanding of what their cardholders are buying. GSA and other agencies are pursuing initiatives to provide agencies better data on their purchase card activity. GSA’s contracts with the banks that provide purchase cards, for example, require summary and analytical reports on agency purchase card activity, including information on the top 100 vendors by agency and on the types of vendors. According to the GSA purchase card program manager, these reports were intended to provide GSA with data it could use to help agencies gain insight into their purchase card expenditures and identify opportunities to leverage their purchasing power. The program manager indicated, however, that reports from the banks have frequently not been provided, not been provided in a timely manner, or not been provided in a format that facilitates analysis. For example, until the most recent reporting period, GSA had not received even basic information, such as the top 100 purchase card vendors, from some banks. The GSA program manager is pursuing efforts to encourage the banks to provide more useful reporting so that GSA will be able to provide more effective assistance to agencies, such as negotiating point-of-sale discounts with vendors. Other initiatives are also in place. GSA is working with DOD and other agencies to determine what barriers limit the level 3 data agencies receive and to explore ways to overcome these barriers. In addition, the Air Force Materiel Command is piloting a system intended to accumulate more consistent and specific information on purchase card transactions. While the lack of level 3 data is a valid concern, agencies can use the information that is available to start taking steps to get better prices. For example, we obtained from the banks a listing of all fiscal year 2002 purchase card transactions for each agency we reviewed. 
Using this listing, we summarized information on the vendors with whom cardholders at each agency had done $1 million or more in business during fiscal year 2002. All agencies have access to these data. When we shared this information with agency officials, several indicated that simply being able to identify major vendors was a useful first step in identifying opportunities to leverage their buying power. Several agencies have taken the initiative to begin analyzing their purchase card expenditures to identify opportunities for additional savings, although these initiatives in some cases had limitations, as in the following examples. While analyses conducted by agency components can provide useful insight into opportunities to leverage their purchasing power, they do not reflect the bigger picture of agencywide expenditures or agencywide opportunities to capture savings. Several of the agency discount agreements we reviewed require vendors to report periodically on sales made under discount agreements. This information can help agencies determine whether cardholders are taking advantage of favorable pricing. Agencies have just begun to tap the potential of leveraging the purchase card for better pricing. If greater management attention were paid to capitalizing on the opportunities to obtain more favorable prices, hundreds of millions of dollars in savings could be realized annually. Given the volume of purchase card activity, agencies could take advantage of these opportunities without sacrificing the ability to acquire items quickly or compromising socioeconomic goals. If agencies were to build on their initial experiences and duplicate these steps governmentwide, they would have the opportunity to save the taxpayer almost $300 million annually. OMB should take the lead in focusing management attention on this opportunity and guiding agencies towards capturing these savings. We are making the following eight recommendations to OMB, GSA, and the agencies we reviewed: To focus governmentwide management attention on taking advantage of opportunities to achieve savings on purchase card buys, we recommend that the director of OMB take the following two actions: Require agencies to report—either through the current quarterly reports or another mechanism—on the steps they are taking to leverage their purchase card buys in areas such as negotiating discount agreements with major purchase card vendors, implementing initiatives to better inform cardholders of opportunities to achieve savings, conducting analyses to identify such opportunities, and assessing, through mechanisms such as vendor reports, whether cardholders are taking advantage of savings opportunities. Annually report to Congress on the government’s progress in identifying and taking advantage of opportunities for savings on purchase card micropurchases. 
To assist agencies in identifying opportunities to achieve savings on purchase card buys and to facilitate cardholder access to discounted prices, we recommend that the administrator of GSA direct the purchase card program manager to take the following three actions: continue efforts to improve reporting by the banks that provide purchase cards so that GSA will have the data it needs—including basic information such as top vendors and level 3 data where feasible—to assist agencies in effectively identifying opportunities to leverage their purchasing power; work with GSA’s acquisition center contracting officers to pursue point-of-sale discounts with large vendors; and as part of the existing cross-agency forums for purchase card discussions, encourage agencies to share information on their successes in leveraging the purchase card to obtain better prices as well as strategies for overcoming challenges that could hinder agencies’ ability to achieve purchase card savings. To more effectively capture the significant potential for savings that agencies can achieve, we recommend that the Secretaries of Agriculture, Defense, the Interior, Justice, Transportation, and Veterans Affairs direct their purchase card program managers—in coordination with officials responsible for procurement, finance, small business utilization, and other appropriate stakeholders—to take the following three actions: Develop mechanisms that provide cardholders more favorable pricing from major vendors or for key commodity groups, such as agencywide discount agreements with major vendors or simpler mechanisms that capitalize on trade discounts offered by local merchants. In designing such mechanisms, purchase card program managers should consider the need to take full advantage of competitive forces to assure the most favorable prices, ensure that agreements cover an adequate range of the products cardholders are likely to buy, coordinate negotiation activities within the department to reduce duplication of effort, and ensure that agreements appropriately support agencies’ efforts to meet governmentwide socioeconomic requirements. Revise programs for communicating with cardholders to ensure that the programs provide cardholders the information they need to effectively take advantage of mechanisms the agency has established to achieve savings. Such information would include telling cardholders about the GSA Schedule contracts or agency-specific agreements chosen as vehicles for leveraging the agency’s buying power, and procedures cardholders should follow to access and use these vehicles when they plan to make a purchase from these vendors. To the extent possible using available data, such as information on major vendors, analyze purchase card expenditure patterns to identify opportunities to achieve additional savings and to assess whether cardholders are getting good prices. Where available data are not sufficient for such analyses, investigate the feasibility of gathering additional information. In evaluating options for gathering additional information, purchase card program managers should carefully consider the costs and benefits of obtaining comprehensive information and imposing unwarranted burdens on cardholders, vendors, and other stakeholders. We received written comments on a draft of this report from DOD, GSA, the Department of the Interior, and the Department of Veterans Affairs. We received comments via e-mail from the Departments of Agriculture and Transportation. 
The Department of Homeland Security, the Department of Justice, and OMB did not provide comments. DOD concurred with our recommendation that the department develop mechanisms that provide cardholders more favorable prices, but stated that negotiating agencywide discount agreements might impede achieving the department’s small business goals. Accordingly, DOD intends to emphasize installation-level initiatives to obtain discounts from local vendors and to pursue point-of-sale discounts with larger vendors. DOD also concurred with our recommendation to revise programs for communicating with cardholders and partially concurred with our recommendation to analyze purchase card expenditure patterns to identify opportunities for savings. DOD stated that, until data on specific purchases is widely available, the feasibility of developing informed and cost-effective strategic sourcing decisions is questionable. Our recommendation, however, contemplated agencies using readily available data to gain insight into their purchase card expenditure patterns. Analysis of available purchase card transaction data could provide agencies a clearer understanding of which vendors are significant to their purchase card program. DOD’s written comments are reproduced in appendix II. GSA concurred with our findings and recommendations and stated that the report provides an objective analysis of the savings that agencies can obtain through the Schedule program and purchase card program. GSA’s written comments are reproduced in appendix III. The Department of the Interior did not specifically agree or disagree with our recommendations, but offered several observations on our report. The department took exception to our statement that lack of management focus and oversight had led to agencies’ not taking advantage of opportunities to capture purchase card savings. This statement was intended to portray the general picture at all the agencies we reviewed, and our report discusses the instances we noted where agencies had focused management attention on capturing savings and the benefits agencies obtained by doing so. Interior also commented that our recommendation that departments develop mechanisms to provide cardholders with more favorable prices should be directed to GSA rather than Interior, and that GSA’s buying programs should be revised to incorporate greater price reductions and be expanded to cover more vendors. We did not audit GSA’s buying programs as part of this report; however, recognizing the benefits of point-of-sale discounts, we have made a recommendation to GSA to pursue these discounts with large vendors. At the same time, we found that individual agencies could achieve savings in the short term by negotiating discount agreements, such as Interior has done for information technology products. Interior—pointing to convenience and simplicity as key benefits of the purchase card program—also commented that we should further highlight in our recommendations the need for purchase card managers to take into account the costs and benefits of obtaining comprehensive information and imposing unwarranted burden on cardholders and others. We believe that our recommendations, as stated, afford program managers sufficient flexibility to develop mechanisms for more favorable pricing while not inconveniencing cardholders. 
Finally, Interior recommended that we incorporate into the report a table of “best practices.” The scope of our work did not include gathering information to verify that the agreements agencies have negotiated represent best practices. Interior’s written comments are reproduced in appendix IV. The Department of Veterans Affairs concurred with our recommendations and cited a number of planned and ongoing actions intended to provide cardholders with more favorable prices. In addition, Veterans Affairs expressed concern that our recommendation to OMB would impose a cumbersome and costly data-gathering burden on agencies. Veterans Affairs is apparently interpreting our recommendation as requiring agencies to report on discounts obtained on specific transactions. We agree that the availability of data to prepare such a report may be an issue and therefore are not recommending that OMB require such a report. Instead, we recommend that OMB require agencies to report on initiatives they have taken, such as analyzing purchase card expenditure patterns and negotiating discount agreements that cardholders can use. Veterans Affairs also endorsed our recommendation that GSA pursue point-of-sale discounts with large vendors and suggested that GSA consider encouraging vendors to program point-of-sale devices to recognize that federal purchases are exempt from sales taxes. Veterans Affairs’ comments are reproduced in appendix V. In comments sent via e-mail, the Department of Agriculture concurred with our recommendations and outlined a number of steps the department will take to implement them. Commenting on our finding that Agriculture’s discount agreement for office supplies did not take full advantage of competitive forces to ensure the most favorable prices, Agriculture stated that it reviews this agreement annually and will re-compete the agreement when these annual reviews indicate that re-competition is warranted. We believe that periodic—but not annual—re-competitions would provide the best information for assessing whether the agreement continues to offer the most advantageous prices for office supplies. In comments sent via e-mail, the Department of Transportation did not specifically agree or disagree with our recommendations, but noted that our report could benefit by explicitly recognizing that the greatest savings could be achieved by pooling the buying power of the entire federal government. We agree that leveraging governmentwide buying power would result in the greatest savings. While this would be the best end state, we see this as a long-term effort with many obstacles to be overcome before it can be achieved. Our work identified initiatives—relatively simple to implement—that agencies can begin now to start achieving savings. In addition, Transportation commented that our report does not adequately depict the fundamental difficulties of complying with JWOD purchase requirements while at the same time achieving best value. We believe our report appropriately reflects the concerns agency officials expressed to us about complying with socioeconomic requirements, including JWOD, and we provide several examples of how some agencies have taken steps to appropriately structure discount agreements so that they help ensure that cardholders purchase JWOD products when required. In addition, Transportation commented that the report should discuss some of the positive accomplishments of the purchase card program. 
Our report acknowledges that the purchase card has fundamentally changed the way agencies make small, routine purchases, and we believe the report appropriately reflects the administrative cost savings and convenience purchase cards have provided. Finally, Transportation suggested a technical correction, which we have incorporated in the report. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Secretaries of Agriculture, DOD, Homeland Security, the Interior, Justice, Transportation, and Veterans Affairs; the director of OMB; the administrator of GSA; and other interested congressional committees. We will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report or need additional information, please call David Cooper at (202) 512-4841 (cooperd@gao.gov) or Gregory Kutz at (202) 512-9505 (kutzg@gao.gov). Key contributors to this report are acknowledged in appendix VII. We reviewed laws and regulations relating to the purchase card program, held discussions with GSA officials responsible for governmentwide program management and OMB representatives responsible for program policy and oversight, and reviewed governmentwide policy and guidance for the program. We also performed our work at the Departments of Agriculture, Defense (DOD), the Interior, Justice, Transportation, and Veterans Affairs. These agencies accounted for over 85 percent of governmentwide purchase card expenditures during fiscal year 2002. Within DOD, we focused our work at the Departments of the Army, the Navy, and the Air Force, which represented 92 percent of all DOD purchase card expenditures during fiscal year 2002. We contacted all major component agencies—referred to as major commands in the Army and Air Force and as major claimants in the Navy. At the civilian departments, we contacted the component agencies that were the largest users of purchase cards. To determine whether agencies had taken advantage of opportunities to obtain more favorable purchase card prices, we held discussions with officials responsible for the purchase card program at each department to obtain information on (1) efforts to identify opportunities to obtain more favorable prices, (2) efforts to negotiate discount agreements that made more favorable prices available to cardholders, and (3) guidance and training provided to cardholders to inform them of opportunities to obtain more favorable prices. We reviewed policy and guidance manuals, training materials, and other agency documentation that provided information on these topics. We also contacted the components responsible for the largest volume of purchase card activity within each department. Finally, to assess cardholder buying practices and gain insight into whether they were obtaining favorable prices, we selected a limited number of fiscal year 2002 micropurchase transactions at each department for review. We obtained and reviewed documentation relating to the transactions, such as invoices, and discussed the transactions with cardholders. 
To identify the reasons why agencies had not taken advantage of opportunities to obtain more favorable purchase card prices, we discussed these issues with officials responsible for departmental purchase card programs and reviewed applicable agency documentation. To select transactions for review, we first obtained data files of fiscal year 2002 purchase card transactions from the banks that provided purchase cards to each of the departments reviewed. (In the case of the military services, we obtained data files from the Defense Manpower Data Center, which had previously obtained the files from the applicable banks.) We reviewed these files to determine that they did not contain any apparent erroneous data and then summarized the total number and dollar value of transactions for each department. We reconciled these totals with totals reported by GSA for each department. Having determined that the data files were generally reliable, we summarized the data to determine the total number and dollar value of transactions by vendor and identified major purchase card vendors at each department. We defined major purchase card vendors as those vendors where the department had purchase card expenditures of $1 million or more in fiscal year 2002. We then combined the data on major purchase card vendors for the eight departments and summarized the number and dollar value of transactions by vendor to identify those vendors where the eight departments had the highest purchase card expenditures. From this combined listing, we determined that vendors providing information technology products, office supplies, and cellular telecommunications services were among the top vendors at all eight departments. Accordingly, we selected two of the top information technology vendors, two of the top office supply vendors, and two of the top cellular telecommunications service providers as the vendors for which we would select transactions for review. For each department, we identified the population of micropurchase transactions with the selected vendors. If a department did not have $1 million or more in micropurchase transactions with the vendor, we excluded that vendor’s transactions from further analysis at that department. We then identified, for each vendor, the subpopulation of micropurchase transactions valued at $100.00 or more for information technology and office supply vendors or $25.00 or more for cellular telephone service providers at each department. We selected—using a random selection process—3 transactions with each vendor at each department for a total of 135 transactions. Although these transactions were selected at random, we cannot project the results of the selected transactions to the population of transactions. To assess the prices cardholders had paid on a transaction, we ascertained whether the vendor had a GSA contract or agency-negotiated discount agreement applicable to the items or services purchased. We obtained information on prices for the items or services under these contracts or agreements and used these prices as benchmarks for assessing whether the cardholder had obtained favorable pricing. In addition to making these price comparisons, we contacted the cardholders to discuss the transaction and gain insight into their buying practices and awareness of vehicles that provide favorable pricing. 
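The transaction selection just described can be illustrated with a short sketch. It is a minimal illustration only, not the analysis tool used for this review: the record layout, field names, and the select_transactions helper are hypothetical, while the $1 million major-vendor threshold, the $100.00 and $25.00 transaction floors, and the selection of 3 transactions per vendor at each department come from the methodology described above.

```python
import random
from collections import defaultdict

# Thresholds and sample size from the selection methodology described above.
MAJOR_VENDOR_THRESHOLD = 1_000_000          # $1 million or more per department and vendor
MIN_TRANSACTION_AMOUNT = {"information technology": 100.00,
                          "office supply": 100.00,
                          "cellular": 25.00}
SAMPLE_PER_VENDOR = 3                       # transactions reviewed per vendor at each department

def select_transactions(micropurchases, vendor_category, seed=0):
    """Randomly select micropurchase transactions for review.

    `micropurchases` is assumed to be a list of dicts with "department",
    "vendor", and "amount" keys (a hypothetical layout for illustration).
    """
    rng = random.Random(seed)
    spend = defaultdict(float)              # (department, vendor) -> total micropurchase dollars
    candidates = defaultdict(list)          # (department, vendor) -> transactions above the floor

    for t in micropurchases:
        key = (t["department"], t["vendor"])
        spend[key] += t["amount"]
        if t["amount"] >= MIN_TRANSACTION_AMOUNT[vendor_category]:
            candidates[key].append(t)

    selected = []
    for key, pool in candidates.items():
        # Exclude a vendor at a department with less than $1 million in micropurchases.
        if spend[key] < MAJOR_VENDOR_THRESHOLD:
            continue
        selected.extend(rng.sample(pool, min(SAMPLE_PER_VENDOR, len(pool))))
    return selected
```

As with the review itself, transactions chosen this way are selected at random but cannot be projected to the full population of transactions.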
To assess the potential magnitude of savings that agencies can achieve by negotiating discount agreements with their major purchase card vendors, we considered the discounts individual departments had obtained on the agencywide discount agreements we reviewed during our work. Discounts offered under these agreements varied—for example, 8 percent under an Interior agreement for desktop computers, 10 percent under an Agriculture agreement for office supplies, and 35 percent under an Interior agreement for laptop computers. We considered the 10 percent discount that Agriculture obtained to represent a reasonable and conservative benchmark for the potential discounts departments could obtain from their major vendors. Our analysis showed that the agencies reviewed spent about $2.8 billion with major purchase card vendors in fiscal year 2002. Although some of these expenditures would have been covered by discount agreements the departments had negotiated, we found that agency discount agreements often did not cover all the items that cardholders purchased from those vendors. Further, we found that cardholders did not always know of, or take advantage of, the discount agreements agencies had negotiated. A number of the transactions we reviewed were made at retail prices. If the agencies we reviewed obtained discounts of about 10 percent on the $2.8 billion spent with their major purchase card vendors, their savings would amount to about $282 million. Actual discounts would vary with factors such as sales volume, profit margin, and competitiveness of the industry. If agencies obtained discounts equivalent to the high end of the range we saw during our work, savings would amount to almost $1 billion, although it is unrealistic to expect savings of this magnitude. Nonetheless, we believe it is reasonable to anticipate that the federal government could save hundreds of millions of dollars if agencies negotiated discounts with major purchase card vendors. Finally, we engaged the Dun and Bradstreet Corporation to perform a spend analysis of Interior’s fiscal year 2002 purchase card transactions to illustrate how a detailed analysis could begin to identify opportunities for purchase card savings. In addition to performing analyses of Interior’s purchase card transactions, Dun and Bradstreet gathered information on the costs and benefits to merchants and other stakeholders of providing “level 3” data—which includes descriptions of the items and services purchased—and on barriers to vendors providing this information. We conducted our review between March 2003 and January 2004 in accordance with generally accepted government auditing standards. In addition to the individuals named above, Robert Ackley, Victoria Klepacz, James Moses, Jerrod O’Nelio, Monty Peters, Jose Ramos, Harold Reich, Sanford Reigle, Kenneth Roberts, Sylvia Schatz, Quan Thai, Najeema Washington, and Gary Wiggins made key contributions to this report. Products concerning purchase card internal controls: Purchase Cards: Steps Taken to Improve DOD Program Management, but Actions Needed to Address Misuse, GAO-04-156 (Washington, D.C.: Dec. 2, 2003). Audit Guide: Auditing and Investigating the Internal Control of Government Purchase Card Programs, GAO-04-87G (Washington, D.C.: Nov. 1, 2003). Forest Service Purchase Cards: Internal Control Weaknesses Resulted in Instances of Improper, Wasteful, and Questionable Purchases, GAO-03-786 (Washington, D.C.: Aug. 11, 2003). 
HUD Purchase Cards: Poor Internal Controls Resulted in Improper and Questionable Purchases, GAO-03-489 (Washington, D.C.: Apr. 11, 2003). FAA Purchase Cards: Weak Controls Resulted in Instances of Improper and Wasteful Purchases and Missing Assets, GAO-03-405 (Washington, D.C.: Mar. 21, 2003). Purchase Cards: Control Weaknesses Leave the Air Force Vulnerable to Fraud, Waste, and Abuse, GAO-03-292 (Washington, D.C.: Dec. 20, 2002). Purchase Cards: Navy Is Vulnerable to Fraud and Abuse but Is Taking Action to Resolve Control Weaknesses, GAO-02-1041 (Washington, D.C.: Sept. 27, 2002). Purchase Cards: Control Weaknesses Leave Army Vulnerable to Fraud, Waste, and Abuse, GAO-02-732 (Washington, D.C.: June 27, 2002). FAA Alaska: Weak Controls Resulted in Improper and Wasteful Purchases, GAO-02-606 (Washington, D.C.: May 30, 2002). Government Purchase Cards: Control Weaknesses Expose Agencies to Fraud and Abuse, GAO-02-676T (Washington, D.C.: May 1, 2002). Purchase Cards: Control Weaknesses Leave Two Navy Units Vulnerable to Fraud and Abuse, GAO-02-32 (Washington, D.C.: Nov. 30, 2001). Products concerning strategic purchasing: Contract Management: Restructuring GSA’s Federal Supply Service and Federal Technology Service, GAO-04-132T (Washington, D.C.: Oct. 2, 2003). Best Practices: Improved Knowledge of DOD Service Contracts Could Reveal Significant Savings, GAO-03-661 (Washington, D.C.: June 9, 2003). Contract Management: Taking a Strategic Approach to Improving Service Acquisitions, GAO-02-499T (Washington, D.C.: Mar. 7, 2002). Best Practices: Taking a Strategic Approach Could Improve DOD’s Acquisition of Services, GAO-02-230 (Washington, D.C.: Jan. 18, 2002).
From 1994 to 2003, the use of government purchase cards exploded from $1 billion to $16 billion. Most purchase card transactions are for small purchases, less than $2,500. While agencies estimate that using purchase cards saves hundreds of millions of dollars in administrative costs, the rapid growth of the purchase card presents opportunities for agencies to negotiate discounts with major vendors, thereby better leveraging agencies' buying power. To discover whether agencies were doing this, we examined program management and cardholder practices at the Departments of Agriculture, Army, Navy, Air Force, Interior, Justice, Transportation, and Veterans Affairs. GAO also examined why agencies may not have explored these opportunities. Although some agencies have begun to take actions to achieve savings through their purchase card programs, most have not identified and taken advantage of opportunities to obtain more favorable prices on purchase card buys--opportunities that could yield hundreds of millions of dollars in savings. For example, most agencies have established some discount agreements with major purchase card vendors (those vendors with whom they did more than $1 million in purchase card business in fiscal year 2002), but these agreements cover only a few of the hundreds of major vendors and a limited number of products. Further, because agency purchase card training programs lack practical information to help cardholders take advantage of existing discount agreements or GSA's Federal Supply Schedule contracts, cardholders paid higher prices than necessary. The agencies that have taken steps to obtain better prices by negotiating discounts with their major vendors have achieved notable savings on purchase card buys. For example, in fiscal year 2003, the Agriculture Department negotiated a discount agreement for office supplies that yielded savings of $1.8 million--about 10 percent off Schedule contract prices--and the Interior Department recently negotiated agreements with information technology vendors for discounts up to 35 percent off Schedule prices. A conservative approach indicates that, if the agencies we reviewed obtained discounts of only 10 percent with their major vendors, annual savings of up to $300 million could be achieved. Most agencies have not more aggressively pursued savings through the purchase card because of a lack of management focus--simply put, this issue has not been the center of attention for managers. Further, the Office of Management and Budget has not leveraged its governmentwide oversight role by collecting and disseminating information on the successful initiatives some agencies have undertaken. Agency officials also expressed concerns that imposing additional requirements on cardholders would undermine the program's intent to streamline acquisitions and that pursuing discount agreements with large suppliers would limit their ability to provide opportunities for small businesses. They also cited poor data as a barrier to identifying savings opportunities. However, as individual agencies have demonstrated, these concerns are not insurmountable. For example, the Air Force's Air Mobility Command provides its cardholders a list of community vendors--many of which are small businesses--that offer discounts, making it easy for the cardholders to obtain discounts from local small businesses. 
Despite data limitations, information such as vendor sales reports could be used to identify major vendors with whom to pursue discount agreements and to provide insight into cardholder activity.
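As a rough illustration of the savings arithmetic described in this report, the short sketch below applies the 8 to 35 percent range of observed discounts, and the 10 percent conservative benchmark, to the approximately $2.8 billion the reviewed agencies spent with major purchase card vendors in fiscal year 2002. The figures are taken from the report; the calculation itself is only an illustrative back-of-the-envelope estimate, not an official projection.

```python
# Figures from the report: ~$2.8 billion in fiscal year 2002 spending with
# major purchase card vendors, discounts observed from 8 to 35 percent,
# and 10 percent used as a conservative benchmark.
MAJOR_VENDOR_SPEND = 2.8e9

def estimated_annual_savings(spend, discount_rate):
    """Savings if the given discount were obtained on all such spending."""
    return spend * discount_rate

for rate in (0.08, 0.10, 0.35):
    savings_millions = estimated_annual_savings(MAJOR_VENDOR_SPEND, rate) / 1e6
    print(f"{rate:.0%} discount -> about ${savings_millions:,.0f} million per year")
# 10 percent of $2.8 billion is roughly $280 million, consistent with the
# report's estimate of almost $300 million; 35 percent would approach $1 billion.
```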
The Social Security Administration (SSA) administers two programs under the Social Security Act that provide benefits to people with disabilities: (1) Disability Insurance (DI) and (2) Supplemental Security Income (SSI). Established in 1956, DI is an insurance program that provides benefits to workers who become unable to work because of a long-term disability. Workers who have paid into the Social Security Trust Fund are insured under this program. At the end of calendar year 2005, the DI program served about 8.3 million workers with disabilities, their spouses, and dependent children and paid out about $85 billion in cash benefits throughout the year. Once found entitled, individuals continue to receive benefits until they either die, return to work and earn more than allowed by program rules, are found to have improved medically and are able to work, or reach regular retirement age (when disability benefits convert to retirement benefits). SSI serves people with disabilities on the basis of need, regardless of whether they have paid into the Social Security Trust Fund. Created in 1972, SSI is an income assistance program that provides cash benefits for disabled, blind, or aged people who have low income and limited resources. At the end of calendar year 2005, the SSI program served about 6.8 million people and paid about $36 billion in federal cash benefits throughout the year. These cash benefits are paid from general tax revenues. SSI benefits generally can be discontinued for the same reasons as DI benefits, although SSI benefits also may be discontinued if a person no longer meets SSI income and resource requirements. Unlike the DI program, SSI benefits can continue even after the person reaches full retirement age. The Social Security Act’s definition of disability for adults is the same under both programs. A person’s physical or mental impairment must (1) have lasted or be expected to last at least 1 year or to result in death and (2) prevent or be expected to prevent him or her from being able to engage in substantial gainful activity (SGA) for that period of time. People are generally considered to be engaged in SGA if they earn above a certain dollar level. For 2006, SSA considers countable earnings above $860 a month to be SGA for an individual who is not blind and $1,450 a month for an individual who is blind. Prior to 1980, some studies indicated that many beneficiaries of the disability programs no longer had a disability and could work. To ensure that only eligible beneficiaries remained in the programs, Congress passed a law requiring SSA to conduct continuing disability reviews (CDR) beginning in January 1982. State Disability Determination Services (DDS) examiners began conducting medical CDRs under the same criteria used to evaluate initial disability claims. In 1981 and 1982, about 45 percent of those individuals who received a CDR had their benefits discontinued. There was no statutory requirement for SSA to show that a beneficiary had improved medically in order to remove him or her from the programs. Disability advocacy groups and others became concerned that some beneficiaries were being inappropriately removed from the disability programs, and by 1984 SSA placed a moratorium on all CDRs. To address concerns that some beneficiaries were being inappropriately removed from the programs, Congress enacted the Social Security Disability Benefits Reform Act of 1984. 
The act included a provision requiring SSA to find substantial evidence demonstrating medical improvement before ceasing a recipient’s benefits (the medical improvement standard). SSA resumed CDRs in January 1986 using this standard, which is among the first steps of the CDR evaluation process. The standard has the following two elements that need to be met: (1) Is there improvement in a beneficiary’s medical condition? The regulations implementing the act define improvement as any decrease in the medical severity of the beneficiary’s impairment(s) since the last time SSA reviewed his or her disability, based on changes in symptoms, signs, or laboratory findings. (2) Is this improvement related to the ability to work? Improvement related to the ability to work is evaluated two different ways, depending on whether the comparison point decision (CPD) was based on: (1) meeting or equaling a prior disability listing or (2) a residual functional capacity (RFC) assessment. Meeting or equaling the prior listing: In this case, a disability examiner will determine if the beneficiary’s same impairment(s) still meets or equals the prior listing. A disability examiner compares the beneficiary’s condition with the list of impairments in effect at the time he or she was first awarded disability benefits. If the impairment(s) meets or equals the prior listing, then benefits are continued. If not, then the examiner proceeds with the CDR evaluation. Residual functional capacity assessment: In this case, a disability examiner compares the beneficiary’s previous functional capacity to the current functional capacity for the same impairment. If functional capacity for basic work activities has improved, then the examiner finds that the medical improvement is related to the ability to work and proceeds with the CDR evaluation. The act allows SSA to discontinue benefits even when the beneficiary has not improved medically if one of the specific “exceptions” to medical improvement applies: the person benefits from advances in medical or vocational therapy or technology, the person has undergone a vocational therapy program that could help him or her work, new or improved diagnostic techniques or evaluations reveal that the impairment is less disabling than originally thought, or the prior decision was in error. In order to be removed from the disability programs for one of the exceptions, disability examiners must also show that the individual has the ability to engage in SGA. SSA does not conduct CDRs on all beneficiaries each year. At the time beneficiaries enter the DI or SSI programs, DDSs determine when they will be due for CDRs based on their likely potential for medical improvement. Based on SSA regulations, DDSs classify beneficiaries into one of three medical improvement categories: medical improvement expected—CDR generally once every 6 to 18 months; medical improvement possible—CDR once every 3 years; or medical improvement not expected—CDR once every 5 to 7 years. SSA has also developed a method, called profiling, to determine the most cost-effective method of conducting a CDR. SSA applies statistical formulas that use data on beneficiary characteristics—such as age, impairment type, length of time on disability programs, previous CDR activity, and reported earnings—to predict the likelihood of medical improvement and, therefore, of benefit discontinuation. SSA assigns a “score” to beneficiaries indicating whether there is a high, medium, or low likelihood of medical improvement. 
In general, beneficiaries with a high score are referred for full medical CDRs. Beneficiaries with lower scores are, at least initially, sent a questionnaire, known as a “mailer.” Full medical CDRs involve an in-depth examination of a beneficiary’s medical and possibly his or her vocational status. This may include a review of the recipient’s case file, physical and psychological condition, and medical evidence by a disability examiner and physician. Unlike full medical CDRs, CDR mailers consist of a short list of questions asking beneficiaries to self-report information on their medical condition, treatments, and work activities. Appendix II describes the medical CDR evaluation process in detail. SSA will find that disability has ended and discontinue benefits if it determines that medical improvement related to the ability to work has occurred or that one of the exceptions applies, and the person’s impairments are not severe or the person can do past work or other work. If SSA determines that medical improvement has not occurred and that none of the exceptions apply, then benefits are continued (see fig. 1). If SSA finds that the individual no longer has a disability and discontinues benefits following a CDR, the individual has the right to appeal the CDR decision, first to another reviewer for a reconsideration, second to an administrative law judge, then to the Appeals Council, and finally to federal courts. At the hearing before the administrative law judge (ALJ), the ALJ reviews the file, including any additional evidence submitted after the DDS determination, and may hear testimony from the individual as well as medical and vocational experts. SSA’s Office of Quality Performance conducts quality reviews of disability determination outcomes. To conduct these quality reviews, SSA selects a random sample of cases each month from all final CDR decisions, stratifying the selection of cases by state and outcome (cases where benefits are continued and discontinued). Then, a quality examiner reviews the case to ensure it adheres to SSA guidance, including a review of the DDS decision, the documentation of that decision, and the evidence contained in the case. During these reviews, physicians evaluate the evidence to ensure that the decision adheres to the medical improvement standard. In fiscal year 2005, SSA’s Office of Quality Performance reported nationwide accuracy rates for cases where CDR benefits were continued and discontinued of 95 percent and 93 percent, respectively. The combined accuracy rate for all CDRs was about 95 percent. We found that on average, about 1.4 percent of all individuals who left the programs between fiscal years 1999 and 2005 were removed for medical improvement. More beneficiaries leave the disability programs because they either die or convert to Social Security retirement benefits. In addition, while full medical CDRs are the agency’s most comprehensive tool for determining whether a beneficiary continues to have a disability, about 2.8 percent of those who receive these CDRs are found to no longer have a disability under the medical improvement standard. 
For example, between fiscal years 1999 and 2005, each year an average of about 311,000 recipients (about 32 percent of all recipients who were removed from the disability programs) died, and about 209,000 (about 21 percent) converted from DI benefits to retirement benefits. In addition, each year about 444,000 beneficiaries (about 45 percent) were removed from the disability programs for other reasons. These include about 54,000 DI beneficiaries who SSA determined had earnings in excess of SGA, about 11,000 DI beneficiaries who either converted to old-age retirement benefits prior to reaching the full retirement age or were found to be erroneously eligible for benefits, and about 379,000 SSI beneficiaries who were removed from the SSI program for all reasons other than death and medical improvement (including earnings and resources above the limit allowed by program guidelines) (see fig. 2). During fiscal years 1999 to 2005, the proportion of all beneficiaries who were removed from the programs in each of the above categories remained fairly consistent. For example, during this period, the proportion of individuals removed from the disability programs in a fiscal year for medical improvement ranged from 1.0 percent to 1.7 percent; the proportion of individuals who died ranged from 31.1 percent to 33.0 percent; and the proportion of individuals who converted from disability benefits to retirement benefits ranged from 19.7 percent to 22.7 percent. SSA data show that few beneficiaries who receive medical CDRs are removed from the disability programs. Full medical CDRs are the agency’s primary tool to determine whether a beneficiary has improved medically. Between fiscal years 1999 and 2005, the number of full medical CDRs conducted ranged from a high of 608,000 in 2001 to a low of 333,000 in 2005 (see fig. 3). Between fiscal years 1999 and 2005, an average of about 26,000 individuals each year (about 5.3 percent) were removed from the disability programs as a result of receiving a medical CDR. Some of the officials we interviewed stated that the medical improvement standard may artificially limit the percentage of recipients who are found to have improved medically. However, we were unable to identify any empirical data regarding the impact of the standard on the percentage of recipients who have their benefits discontinued, or what a “proper” discontinuation rate should be. While the number of CDRs conducted between fiscal years 1999 and 2005 fluctuated, the percentage of beneficiaries removed from the programs remained fairly constant. For example, in fiscal years 1999, 2002, and 2004, the percentage of recipients who were removed from the disability programs as a result of receiving a CDR was 5.4 percent, 5.6 percent, and 5 percent respectively. In addition to medical improvement, SSA also removes beneficiaries for failing to cooperate during a CDR. For example, a beneficiary may fail to appear for scheduled meetings with disability examiners or physicians and thus may have their benefits discontinued. Of the individuals removed from the programs as a result of receiving a CDR between fiscal years 1999 and 2005, an average of about 13,800 individuals (or 2.8 percent of all CDRs conducted between fiscal years 1999 and 2005) were removed annually because SSA determined that they had improved medically, while an average of about 10,300 individuals (or about 2.1 percent) were removed each year for failure to cooperate. 
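Because the same average of roughly 13,800 annual removals for medical improvement appears above both as about 1.4 percent and as about 2.8 percent, a brief sketch of the arithmetic may help: the two rates use different denominators, all beneficiaries who left the programs versus all full medical CDRs conducted. The annual averages below are the figures cited above; the roughly 490,000 average CDR volume is an assumption backed out from the 2.8 percent statistic and is consistent with the reported range of 333,000 to 608,000 CDRs per year.

```python
# Average annual counts cited above for fiscal years 1999 through 2005.
medical_improvement_removals = 13_800
deaths = 311_000
retirement_conversions = 209_000
other_removals = 444_000

# Denominator 1: all individuals who left the disability programs in a year.
total_leavers = (medical_improvement_removals + deaths
                 + retirement_conversions + other_removals)
print(f"share of all who left: {medical_improvement_removals / total_leavers:.1%}")  # ~1.4%

# Denominator 2: full medical CDRs conducted. The 490,000 average is an
# assumption implied by the 2.8 percent figure, not a number stated in the text.
average_full_medical_cdrs = 490_000
print(f"share of CDRs conducted: {medical_improvement_removals / average_full_medical_cdrs:.1%}")  # ~2.8%
```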
Our review suggests that several factors associated with the standard pose challenges for SSA's ability to assess whether beneficiaries continue to be eligible for benefits. First, limitations in SSA guidance may result in inconsistent application of the standard. For example, we found that SSA does not clearly define the degree of improvement needed to meet the standard, and the DDS directors we surveyed reported using different thresholds to show medical improvement. As a result of this apparent limitation in SSA guidance, disability examiners may incorrectly decide to continue or discontinue benefits. In addition, while the act provides for certain exceptions that could result in additional individuals having their benefits discontinued following a CDR, most of the disability examiners we spoke with told us that they were uncertain about when to apply the exceptions. Second, we found that most DDSs are incorrectly conducting CDRs with the presumption that a beneficiary has a disability. Finally, other factors, such as inadequate documentation of evidence and the judgmental nature of the decision process for assessing medical improvement, may make it more difficult to determine whether a beneficiary remains eligible for benefits. However, due to data limitations, we were unable to determine the extent to which these challenges impact decisions to continue or discontinue benefits during a CDR. Our work shows that SSA does not clearly define the degree of improvement needed for examiners to determine if a beneficiary has improved medically. Many disability examiners and DDS officials told us that they were unsure about the degree of improvement required to meet the standard, and some said this confusion stems from unclear SSA guidance. In particular, SSA guidance instructs examiners to disregard "minor" changes in a beneficiary's condition. However, this guidance does not adequately describe what constitutes a minor change. When we asked SSA officials to clarify their understanding of what constitutes a minor change, they told us that only changes that would not affect a beneficiary's ability to work should be considered minor. However, this explanation of minor changes is not included in the agency's guidance. As a result, some DDSs may be inconsistently defining what constitutes a minor change. For example, five DDS directors told us that they define minor changes to include those that may actually improve functioning or allow the beneficiary to work. Our review thus suggests that, by defining minor changes in this way, some DDSs may be applying the standard inconsistently as to what constitutes medical improvement. However, DDS directors differed on the extent to which the guidance to disregard minor changes impacts CDRs. Of the 52 DDS directors who answered a question in our survey on "minor" changes, 21 reported that the practice of disregarding minor changes is not an impediment to making a disability determination, while 31 reported that it is an impediment. Similarly, we found that SSA guidance may not provide DDS examiners with sufficient detail to determine whether improvements in beneficiaries' medical conditions are related to their ability to work. At this step of the CDR process, examiners look for changes in a beneficiary's ability to perform basic work activities since the last review, such as lifting heavy objects or standing or sitting for periods of time.
The guidance instructs examiners to ensure a “reasonable relationship” between the amount of improvement and the increase in the ability to perform basic work activities. However, the guidance does not require a specific amount of increase in functioning. The DDS directors we surveyed reported that they interpret this guidance differently. Specifically, 17 of 49 directors reported that a large or very large increase in a recipient’s ability to do basic work activities is required; 24 reported that a moderate increase is required; and 8 reported that a minor or any increase at all is required. Furthermore, two DDS directors in our survey inaccurately noted that the standard requires that a beneficiary’s improvement be great enough so that it actually enables the individual to work. One of these directors commented that because SSA guidance on this aspect of the standard is open to broad interpretation, it is difficult to document improvement to the extent the individual is able to work. As a result, some DDSs may be inconsistently applying this aspect of the standard that could potentially impact decisions to continue or discontinue benefits. However, we were unable to determine how much of an impact clarification of this guidance would have on CDR outcomes. The disability advocates we spoke with differed in their views on the clarity of SSA guidance on medical improvement. While some stated that it is clear and adequate, others stated that the guidance on assessing medical improvement in psychological impairments and determining if improvement is related to the ability to work is confusing and unclear. One advocate stated that current SSA policies contribute to some recipients remaining in the disability programs despite their ability to work. We also found that while the act provides for exceptions to medical improvement that could result in additional individuals having their benefits discontinued as a result of receiving a CDR, most of the disability examiners whom we spoke with on our site visits told us that they were uncertain about when to apply the exceptions. SSA policies allow for various exceptions, including when the prior decision was in error or when persons benefit from education or training programs that could help the individuals work. However, we found that while the examiners and ALJs routinely assess whether a beneficiary has improved medically, they do not routinely assess whether or not each of the exceptions applies to the case. Moreover, many of the DDS officials and examiners we interviewed told us that the guidelines for using the error exception are written in a way that precludes its use, except in the most extreme situations. SSA officials explained that the exceptions were written to intentionally limit their use in order to prevent examiners from circumventing the standard, and that their infrequent use is appropriate. In addition, SSA explained that when it issued the final rules governing the medical improvement standard, it intended the exceptions to be true “exceptions”—not to be routinely applied (including the error exception). The agency also noted that a broader application of the error exception could lead to a substitution of judgment by an adjudicator for the original finding of disability in instances where a person’s medical condition had not substantially improved. 
Some disability advocates we spoke with also noted that the narrow interpretation of the error exception is appropriate because it prevents substitution of judgment and arbitrary discontinuations. According to our survey, a majority of DDSs incorrectly presume that a beneficiary continues to have a disability when conducting CDRs, which may make it more difficult for examiners to determine if a beneficiary has improved medically. This is contrary to the act as well as SSA regulations and policy, which require that CDR decisions be made on a "neutral basis." SSA defines neutral basis as a review that presumes neither (1) that a beneficiary is still disabled because he or she was previously found disabled nor (2) that a beneficiary is no longer disabled because he or she was selected for a CDR. Under a neutral review, it is assumed that beneficiaries had a disability at the time of the prior decision, but it is not assumed they still have a disability at the time of a CDR. However, in survey responses, 31 DDS directors responded that in practice, CDRs are conducted with the presumption that a beneficiary continues to have a disability. When asked to explain this response, directors cited various factors that likely contribute to the presumption of disability during a CDR. Thirteen directors commented that the individuals are already receiving disability benefits, and, as a result, the directors assume that these beneficiaries continue to have a disability. Some of these directors also noted that they make this presumption because the beneficiary was found disabled when initially awarded benefits, and examiners must show medical improvement to remove them from the programs. Since a majority of DDSs are conducting CDRs with a presumption that beneficiaries have a disability, those DDSs may be setting a higher bar than required by the standard for these reviews. Moreover, requiring more evidence of medical improvement than is necessary under the standard may make it harder to assess whether a recipient no longer has a disability and is able to work. Because 31 directors reported that examiners conduct CDRs with the presumption that beneficiaries continue to have a disability, a significant number of beneficiaries may be evaluated under this higher standard, and some may have their benefits erroneously continued. While these problems raise concerns about the consistency of decisions when determining if medical improvement has occurred, the ultimate impact of presuming that an individual has a disability on CDR decisions is unknown because we were unable to empirically test how this presumption affects decisions to continue or discontinue benefits. Inadequate documentation of evidence and the judgmental nature of the process for assessing medical improvement are two additional factors that make it challenging to assess medical improvement. The standard establishes the prior decision as the starting point for conducting a CDR and requires examiners to find evidence of medical improvement since this last decision. Some DDS directors reported that it may be difficult to assess medical improvement in cases where the prior disability decision was based on incomplete or poorly documented evidence. For example, in one of the CDR cases we reviewed, a beneficiary had his benefits continued following the CDR because the rationale for the prior disability decision was vague, according to the examiner who reviewed the case with us.
This beneficiary was originally awarded benefits on appeal based on recurrent stomach problems and depression. When the case was selected for a CDR, the case file included a general description of the beneficiary's medical condition, but lacked sufficient evidence to determine if medical improvement had occurred since the initial decision, according to the examiner. As a result, medical improvement could not be shown and benefits were continued. While many examiners and officials we interviewed agreed that it is difficult to show medical improvement in cases that lack adequate documentation, they differed in their opinions about how frequently this occurs. Of the directors who answered our survey question on insufficient documentation, 33 responded that they encounter cases with insufficient documentation infrequently or very infrequently, and 17 responded that such cases occur more often. Survey respondents also differed in their opinions about the types of cases that more typically lack adequate documentation, but 15 directors commented that cases decided on appeal were the most likely to lack adequate documentation. One possible explanation for this may be streamlined processes at the appeals level. For example, one ALJ we interviewed noted that, in an effort to process cases in a timely manner, ALJs sometimes issue quick decisions in which most of the evidence is on tapes that are not transcribed or placed in the beneficiary's case file. In such instances, it is unlikely that the DDS examiner would have complete information for conducting a CDR and determining if medical improvement had taken place. Furthermore, several officials told us that guidance instructs ALJs to include enough information in their decisions so that the decisions will be legally sufficient. However, the guidance does not specifically instruct ALJs to include all of the evidence that will be needed to assess medical improvement at a future CDR. However, in recent regulations to implement changes to its disability determination process, SSA is taking steps that may help to address the problem of incomplete documentation for future CDRs. Specifically, SSA is developing requirements for training examiners to ensure they understand the information needed to make accurate and adequately documented decisions, has adopted guides for decision writing at the appeals level, and is in the process of developing guides for use at the DDS level. In addition to the challenges associated with problems of inadequate documentation, many examiners also told us that the judgmental nature of the decision process concerning what constitutes an improvement can make it difficult to assess medical improvement. One examiner may determine that a beneficiary has improved medically and discontinue benefits, while another examiner may determine that medical improvement has not been shown and will continue the individual's benefits. For example, in one of the CDR cases that we reviewed, the examiner conducting the initial CDR determined that medical improvement was shown and discontinued the individual's benefits. The recipient was initially awarded disability benefits for a back injury that limited his or her range of motion. When the CDR was conducted, the examiner evaluated all of the relevant evidence and concluded that the individual's range of motion had improved. The examiner also noted that the individual's allegations of pain did not correlate with findings from either the physical exam or the laboratory tests.
As a result, the examiner concluded that medical improvement had occurred. On reconsideration 6 months later, a different DDS examiner conducted a review using the same medical evidence as the original examiner, but determined that medical improvement had not occurred, and continued benefits. The examiner conducting the appeal concluded that the beneficiary continued to experience pain consistent with the back condition, and thus medical improvement was not shown. However, we had no basis for determining which decision was correct. The amount of judgment involved in the decision-making process increases when the process involves certain types of impairments that are difficult to assess. More specifically, assessing medical improvement may be more difficult in cases that involve certain types of psychological impairments, such as depression, than in cases with physical impairments, such as amputations. In elaborating on their survey responses, 17 directors commented that assessing medical improvement is more difficult in cases with psychological impairments because evidence of these impairments is generally more subjective than evidence of many physical impairments. In addition, six directors commented that evaluations of psychological impairments tend to rely more heavily on assessment of functionality. According to some of these officials, an assessment of functionality is more subjective because it relies more on the beneficiaries' account of their own conditions than on laboratory findings. Furthermore, some officials reported that the severity of psychological impairments can fluctuate over time, making it difficult to assess whether improvement has taken place. Two directors commented that determining whether there is medical improvement for some types of psychological impairments can also be complicated because medical experts' opinions can vary. One of these directors commented that the evidence to support psychological impairments, such as evaluations for depression, relies less on laboratory findings and more on clinical judgment. In contrast, certain tests for physical impairments tend to be less open to interpretation. For example, one director commented that X-rays of joint deterioration can generally be interpreted consistently among radiologists. The potential difficulty of assessing medical improvement in beneficiaries whose disability is based on certain types of psychological impairments is especially relevant, given that the proportion of all individuals in the disability programs whose disability is based on a psychological impairment has grown in recent years. SSA is responsible for assuring that individuals who truly have a disability that prevents them from being able to work continue to receive benefits. At the same time, SSA has a stewardship responsibility to identify those beneficiaries who have improved medically and are no longer eligible for benefits. The medical improvement standard is intended to help SSA accomplish both of these responsibilities. However, several factors associated with the standard pose challenges for ensuring that the standard is implemented in a consistent and fair manner. Specifically, potential limitations in SSA guidance regarding the degree of improvement needed to meet the standard as well as a lack of clarity with respect to the appropriate use of the exceptions to medical improvement may make it difficult to assess if medical improvement has occurred.
Clear guidance is especially important in view of the judgmental nature of the disability determination process. Additionally, while SSA guidelines regarding the presumption of disability during CDRs are generally clear, incorrect application of these guidelines by several DDSs suggests that the outcomes of CDRs could be affected and may result in benefit continuation for some individuals who might otherwise have been found to have improved medically. Other factors, including inadequate documentation of evidence, are more difficult to address in the short term. However, SSA is taking actions intended to address some of these problems. To ensure that SSA is able to consistently assess whether DI and SSI beneficiaries have improved medically, we recommend that the Commissioner of Social Security clarify guidance for assessing medical improvement when conducting CDRs. More specifically, SSA should clarify guidance concerning (1) what degree of improvement is required to meet the standard and (2) when the use of exceptions to medical improvement is appropriate. SSA should also work with DDSs to ensure that CDRs are conducted on a neutral basis, without a presumption that beneficiaries continue to have a disability. We obtained written comments on a draft of this report from the Commissioner of the Social Security Administration (SSA). The agency generally agreed with our recommendation, but expressed reservations about the need for further guidance on the use of exceptions. More specifically, SSA believed that its implementation of the statutory exceptions to medical improvement is appropriate and that its instructions are consistent with the intent of the law. As such, SSA was concerned about language in the draft report that characterized SSA's guidance as discouraging and limiting the use of the exceptions. After considering these comments, we revised the report to include additional information on (1) examiners' confusion on the use of the exceptions when conducting CDRs and (2) SSA's rationale for its current exception guidance. Having made these changes, we continue to believe that additional guidance in this area is warranted, if only because, as the report notes, most of the disability examiners whom we spoke with told us that they were uncertain about when to apply the exceptions. Moreover, while answering a survey question on the exceptions to medical improvement, four DDS directors commented that more guidance regarding the use of the exceptions is needed. SSA generally agreed with the need for clarifying guidance concerning the degree of improvement required to meet the medical improvement standard. However, the agency believed that the report was unclear with regard to whether this part of the recommendation applied only to guidance for determining if there has been any medical improvement, or also to the guidance for determining if any medical improvement is related to the ability to work. As stated in the draft report, our discussion of medical improvement encompasses both elements (improvement in a beneficiary's medical condition and its relation to the ability to work). However, we did further clarify this throughout the entire report to minimize any confusion on this matter. Additionally, SSA indicated that clarification of this guidance would probably have little noticeable impact on the number of cases in which SSA finds that a disability has ended. As our report notes, we cannot quantify the impact that clearer guidance would have on the discontinuation of benefits.
Even so, we continue to believe that it is important for DDSs to consistently apply this aspect of the medical improvement standard and that, towards that end, additional guidance would be useful. SSA agreed with the need for further training to ensure that CDRs be conducted on a neutral basis. However, it believed that more adjudicator training in this area would likely have little impact on discontinuing benefits. We cannot predict the impact additional guidance and training would have on continuing or discontinuing benefits. However, as the report points out, a large number of DDS directors are incorrectly applying the neutrality standard and, in our view, would benefit from additional guidance in this area. Beyond commenting on our recommendation, SSA suggested that we provide additional context for some of the statistical information presented in our discussion of the proportion of beneficiaries removed from the disability programs each year. For example, SSA commented that the disability discontinuation rates in the early 1980s may not have been representative of the discontinuation rates prior to the implementation of the medical improvement standard due to special targeted initiatives aimed at removing individuals from the DI program who no longer had a disability. We revised the report to take into account these suggestions. The Commissioner's comments have been reproduced in appendix III. SSA also provided additional technical comments, which have been incorporated in the report as appropriate. Unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will make copies available to other parties upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. This report does not contain all the results from the survey. The survey and a more complete tabulation of the results can be viewed at http://www.gao.gov/cgi-bin/getrpt?rptno=GAO-07-4sp. If you or your staff have questions concerning this report, please contact me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix IV for a listing of major contributors to this report. This appendix provides additional details about our analysis of the medical improvement standard (the standard), including challenges the standard poses for the Social Security Administration (SSA) when conducting medical continuing disability reviews (CDR). To meet the objectives of this review, we reviewed prior studies by GAO, SSA, SSA's Inspector General, the Congressional Research Service, and external organizations related to the disability determination process and CDRs. We also reviewed the Social Security Disability Benefits Reform Act of 1984, regulations, and SSA policies and processes for assessing whether beneficiaries continue to be eligible for benefits. In addition, we analyzed SSA data on CDR outcomes over a 7-year period for fiscal years 1999 to 2005 as well as reports identifying the number of beneficiaries who leave the disability programs and the reasons why they leave. For the purposes of our study, we assessed only DI and SSI adult beneficiaries who left the programs as a result of receiving a full medical CDR. We did not include children or "age 18 re-determinations" in our analysis since there are differences between the medical CDR sequential evaluation processes for adults and children.
We also did not assess the outcome of CDR mailers or work CDRs. We verified the statistical data on CDR outcomes for internal logic, consistency, and reasonableness. We determined that the data were sufficiently reliable for the purposes of our review. We also met with knowledgeable SSA officials to further document the reliability of these data. We interviewed 34 officials from SSA's central offices (including officials from the Office of the Chief Actuary, the Office of Quality Performance, the Office of General Counsel, the Office of Research and Evaluation Statistics, the Office of Disability Programs, the Office of Disability Adjudication and Review, and the Office of Program Development and Research) to discuss the disability programs and the CDR process. We conducted a national Web-based survey of all 55 Disability Determination Services (DDS) directors in the 50 states, the District of Columbia, Puerto Rico, the Virgin Islands, the Western Pacific Islands, and the federal DDS. DDSs are the agencies responsible for conducting periodic CDRs to determine if beneficiaries' medical conditions have improved and if they are able to work. We received 54 completed surveys for a response rate of 98 percent. The purpose of this survey was to assess the extent to which the standard impacts outcomes of CDRs and determine if the standard poses any special challenges for SSA when determining whether beneficiaries continue to be eligible for benefits. We asked the directors about particular elements of the standard and how these elements, alone or in combination with other factors, impact CDR outcomes. We also asked them how SSA guidance on implementing the standard affects CDR outcomes. We determined that the survey data are sufficiently reliable for the purposes of our review. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, social science survey specialists designed the questionnaire in collaboration with GAO staff with subject matter expertise. Then, the draft questionnaire was pretested with a number of state officials to ensure that the questions were relevant, clearly stated, and easy to comprehend. The questionnaire was also reviewed by an additional GAO survey specialist. When the data were analyzed, a second, independent analyst checked all computer programs. Since this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire. This eliminated the need to have the data keyed into a database, thus removing an additional source of error. We conducted three pretests of this survey with DDS directors in three different states. We modified the survey to take their comments into account. We also provided SSA with a copy of the survey and incorporated its technical comments into the final version. This report does not contain all the results from the survey. The survey and a more complete tabulation of the results can be viewed at http://www.gao.gov/cgi-bin/getrpt?rptno=GAO-07-4sp.
To augment information from our state survey, we conducted independent audit work in three states (California, Massachusetts, and Texas) to examine how SSA policies and procedures are carried out in the field. We selected locations for field visits based on the following criteria: (1) geographic dispersion; (2) states with large numbers of CDRs conducted; (3) states with CDR discontinuation rates above, below, and at the national average; (4) states with varying DDS structures (i.e., centralized and decentralized); and (5) states with large numbers of Disability Insurance (DI) beneficiaries and large DI expenditures. In each state, we visited a DDS office, the SSA regional office, the regional Office of Quality Performance, and the regional Office of Disability Adjudication and Review (formerly known as the Office of Hearings and Appeals). In total, we conducted in-depth interviews with 80 SSA and DDS managers and line staff responsible for conducting medical CDRs, including DDS directors, CDR supervisors and examiners, medical consultants, and administrative law judges. During our meetings with SSA and DDS officials, we documented management and staff views on the challenges associated with applying the medical improvement standard. In particular, we documented management and staff views on (1) the impact of the standard on CDR outcomes, (2) the effectiveness of SSA policies and procedures for applying the standard, and (3) the degree to which factors external to the standard create challenges when determining if a beneficiary has improved medically and is able to work. To further assess how the standard is applied in practice, we took a nonprobability sample of 12 CDR case files from the DDSs in California and Texas. We asked CDR supervisors to provide several cases that were (1) discontinued for medical improvement, (2) continued because the beneficiary was clearly disabled, and (3) ambiguous, in that it was difficult to apply the standard and determine if benefits should be continued or discontinued. These case files serve to illustrate the difficulties examiners face when determining if a beneficiary has improved medically and is able to work. In addition, we interviewed seven disability policy experts from national disability research and advocacy organizations to obtain their input on the impact of the standard on the disability programs and any challenges it poses when assessing individuals' continued eligibility for benefits. We spoke with individuals affiliated with the following organizations: the American Association of People with Disabilities; the Center for Health Services Research and Policy at George Washington University; the Center for the Study and Advancement of Disability Policy; the Consortium for Citizens with Disabilities; the Disability Law Center; the Disability Policy Collaboration; the National Organization of Social Security Claimants' Representatives; and the National Organization on Disability. Finally, we spoke with representatives from the National Association of Disability Examiners, the National Council of Disability Determination Directors, and the Social Security Advisory Board. We spoke with these disability experts about the effect of the standard on CDR outcomes and any challenges it presents when conducting CDRs. We conducted our work from October 2005 through June 2006 in accordance with generally accepted government auditing standards.
In the first step of the CDR evaluation process for adult beneficiaries, an SSA field office representative determines if the beneficiary is working at the level of substantial gainful activity (SGA). A beneficiary who is found to be not working or working but earning less than the SGA level (minus allowable exclusions) has his or her case forwarded to the state Disability Determination Services (DDS). The second step is to determine if the individual's current impairment(s) is included on the current list of impairments that SSA maintains. The list describes impairments that, by definition, are so severe that they are disabling. If the individual's current impairment(s) does meet or equal a current listing, then the DDS continues the individual's benefits and does not continue with the evaluation process. If the individual's current impairment(s) does not meet or equal a current listing, then the DDS proceeds to step three in the evaluation process. The third step is to determine if improvement in the individual's medical condition has occurred. This improvement is any decrease in the medical severity of the impairment(s) that was present at the time of the most recent favorable medical decision (i.e., the initial decision to award disability benefits or the most recent CDR continuance—usually referred to as the comparison point decision, or CPD). At this step, the DDS examiner compares the current signs, symptoms, and laboratory findings associated with the beneficiary's impairment(s) to those recorded from the last review. If improvement has not occurred, the disability examiner skips to the fifth step in the evaluation. If improvement has occurred, the disability examiner proceeds to the next step, the fourth step. The fourth step is to determine if the improvement found in step three is related to the ability to work. Improvement related to the ability to work is evaluated in two different ways, depending on whether the CPD was based on (1) meeting or equaling a prior listing or (2) a residual functional capacity (RFC) assessment:
Meeting or equaling the prior listing: In this case, the disability examiner will determine if the beneficiary's same impairment(s) still meets or equals the prior listing. Unlike step two, the examiner compares the beneficiary's condition with the list of impairments in effect at the time he or she was first awarded disability benefits. If the impairment(s) no longer meets or equals the prior listing, then the examiner finds that the improvement is related to the ability to work and proceeds to step six of the evaluation process. If the impairment(s) meets or equals a prior listing, then benefits are continued.
Residual functional capacity assessment: In this case, the disability examiner compares the beneficiary's previous functional capacity to the current functional capacity for the same impairment. If functional capacity for basic work activities has improved, then the examiner finds that the improvement is related to the ability to work and proceeds to step six of the evaluation process. If the current assessment does not show improvement, then the disability examiner proceeds to step five.
The fifth step is to determine whether an exception to medical improvement applies. The law provides for certain limited situations when the DDS may discontinue a recipient's benefits even though medical improvement has not occurred.
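To make the sequence of decisions described so far easier to follow, the sketch below restates steps one through four in Python-style pseudocode. It is illustrative only: each name stands in for a determination the field office or examiner makes from the case file, not an SSA system or data element, and the exception, severity, and work steps (five through eight) continue in the text that follows.

    # Illustrative sketch of steps one through four of the CDR evaluation; all
    # names are placeholders for examiner determinations, not SSA systems.
    def cdr_steps_one_through_four(case: dict) -> str:
        # Step 1 (field office): if the beneficiary is not working, or is working
        # but earning below the SGA level (minus allowable exclusions), the case
        # goes to the DDS; otherwise it is handled outside this medical review.
        if case["working"] and case["countable_earnings"] >= case["sga_level"]:
            return "handled outside the medical CDR evaluation"
        # Step 2: a current impairment that meets or equals a current listing
        # means benefits are continued without further evaluation.
        if case["meets_or_equals_current_listing"]:
            return "continue benefits"
        # Step 3: any decrease in medical severity since the comparison point
        # decision (CPD)? If not, skip ahead to the exceptions (step five).
        if not case["medical_improvement_since_cpd"]:
            return "go to step five (exceptions)"
        # Step 4: is the improvement related to the ability to work?
        if case["cpd_based_on_listing"]:
            # Compare against the listing in effect at the time of the CPD.
            if case["still_meets_or_equals_prior_listing"]:
                return "continue benefits"
            return "go to step six (severity and ability to work)"
        # Otherwise compare the prior and current residual functional capacity.
        if case["rfc_for_basic_work_activities_improved"]:
            return "go to step six (severity and ability to work)"
        return "go to step five (exceptions)"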
The specific group I exceptions are (a) the individual is the beneficiary of advances in medical or vocational therapy or technology (related to the ability to work), (b) evidence shows that the individual has undergone vocational therapy (related to the ability to work), (c) evidence shows that, based on new or improved diagnostic or evaluative techniques, the individual’s impairment(s) is not as disabling as it was considered at the time of the CPD, and (d) evidence shows that any prior determination or decision was in error. If an exception applies, the examiner continues through to step six of the evaluation process. If an exception does not apply, benefits are continued. The sixth step is to determine if the current impairments are severe. At this step, the examiner considers all of the beneficiary’s impairments— those present at the previous decision as well as any new impairments found in the current review. If the DDS determines that the beneficiary’s current impairment(s) is not severe, benefits are discontinued without further development. If it is determined that the impairment(s) is severe, then the examiner considers the impact of the beneficiary’s impairment(s) on his or her ability to function. This consideration will result in a current residual functional capacity (RFC) assessment that shows the beneficiary’s ability to do basic work activities and the evaluation continues to the seventh step. The seventh step is to determine whether the beneficiary has the capacity to do the work that he or she did before having a disability. If the beneficiary has the ability to do past work, then benefits are discontinued. If the beneficiary does not have the ability to do work he or she has done in the past, the evaluation continues to the eighth step. The eighth step is to determine if the beneficiary has the ability to do other work. At this step, the disability examiner considers the complete vocational profile (the beneficiary’s age, education, and past relevant work experience) together with the beneficiary’s RFC to determine if he or she has the ability to do other work. If the beneficiary has the ability to do other work, disability benefits are discontinued. If he or she does not have the ability to do other work, benefits are continued. Robert E. Robertson, Director, (202) 512-7215. The following team members made key contributions to this report: Kelly Agnese; Jeremy D. Cox; Susan E.M. Etzel; Stuart M. Kaufman; Luann M. Moy; George H. Quinn, Jr.; Daniel A. Schwimer; Salvatore F. Sorbello; Wayne T. Turowski; Vanessa R. Taylor; and Rachael C. Valliere. Social Security Administration: Agency Is Positioning Itself to Implement Its New Disability Determination Process, but Key Facets Are Still in Development. GAO-06-779T. Washington, D.C.: June 15, 2006. Social Security Disability Insurance: SSA Actions Could Enhance Assistance to Claimants with Inflammatory Bowel Disease and Other Impairments. GAO-05-495. Washington, D.C.: May 31, 2005. High Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. SSA’s Disability Programs: Improvements Could Increase the Usefulness of Electronic Data for Program Oversight. GAO-05-100R. Washington, D.C.: December 10, 2004. Disability Insurance: SSA Should Strengthen Its Efforts to Detect and Prevent Overpayments. GAO-04-929. Washington, D.C.: September 10, 2004. Social Security Administration: More Effort Needed to Assess Consistency of Disability Decisions. GAO-04-656. Washington, D.C.: July 2, 2004. 
Social Security Disability: Commissioner Proposes Strategy to Improve the Claims Process, but Faces Implementation Challenges. GAO-04-552T. Washington, D.C.: March 29, 2004. SSA Disability Decision Making: Additional Steps Needed to Ensure Accuracy and Fairness of Decisions at the Hearings Level. GAO-04-14. Washington, D.C.: November 12, 2003. Social Security Disability: Reviews of Beneficiaries' Disability Status Require Continued Attention to Achieve Timeliness and Cost-Effectiveness. GAO-03-662. Washington, D.C.: July 24, 2003. Social Security Disability: Reviews of Beneficiaries' Disability Status Require Continued Attention to Improve Service Delivery. GAO-03-1027T. Washington, D.C.: July 24, 2003. High Risk Series: An Update. GAO-03-119. Washington, D.C.: January 2003. Major Management Challenges and Program Risks: Social Security Administration. GAO-03-117. Washington, D.C.: January 2003. SSA and VA Disability Programs: Re-Examination of Disability Criteria Needed to Help Ensure Program Integrity. GAO-02-597. Washington, D.C.: August 9, 2002. SSA Disability Programs: Fully Updating Disability Criteria Has Implications for Program Design. GAO-02-919T. Washington, D.C.: July 11, 2002. Social Security Disability: SSA Making Progress in Conducting Continuing Disability Reviews. GAO/HEHS-98-198. Washington, D.C.: September 18, 1998. Supplemental Security Income: SSA Is Taking Steps to Review Recipients' Disability Status. GAO/HEHS-97-17. Washington, D.C.: October 30, 1996. Social Security Disability: SSA Quality Assurance Improvements Can Produce More Accurate Payments. GAO/HEHS-94-107. Washington, D.C.: June 3, 1994. Social Security: Disability Rolls Keep Growing, While Explanations Remain Elusive. GAO/HEHS-94-34. Washington, D.C.: February 11, 1994. Social Security Disability: Implementing the Medical Improvement Review Standard. GAO/HRD-88-108BR. Washington, D.C.: September 30, 1988. Social Security Disability: Implementation of the Medical Improvement Review Standard. GAO/HRD-87-3BR. Washington, D.C.: December 16, 1986. Review of the Eligibility of Persons Converted from State Disability Rolls to the Supplemental Security Income Program. HRD-78-97. Washington, D.C.: April 18, 1978.
The Social Security Act requires that the Social Security Administration (SSA) find an improvement in a beneficiary's medical condition in order to remove him or her from either the Disability Insurance (DI) or Supplemental Security Income (SSI) programs. GAO was asked to (1) examine the proportion of beneficiaries who have improved medically and (2) determine if factors associated with the standard pose challenges for SSA when determining whether beneficiaries continue to be eligible for benefits. To answer these questions, GAO surveyed all 55 Disability Determination Services (DDS) directors, interviewed SSA officials, and reviewed pertinent SSA data. Each year, about 13,800 beneficiaries, or 1.4 percent of all the people who left the disability programs between fiscal years 1999 and 2005, did so because SSA found that they had improved medically. More beneficiaries leave because they die, convert to regular retirement benefits, or leave for other reasons, including having earnings above program limits. In addition, while continuing disability reviews (CDR) are SSA's most comprehensive tool for determining whether a recipient continues to have a disability, on average, 2.8 percent of beneficiaries were found to have improved medically and to be able to work following a CDR during this 7-year period. Several factors associated with the medical improvement standard (the standard) pose challenges for SSA when assessing whether beneficiaries continue to be eligible for benefits. First, limitations in SSA guidance may result in inconsistent application of the standard. For example, SSA does not clearly define the degree of improvement needed to meet the standard, and the DDS directors GAO surveyed reported that they use different thresholds to assess if medical improvement has occurred. Second, contrary to existing policy, disability examiners in a majority of the DDSs are incorrectly conducting CDRs with the presumption that a beneficiary has a disability rather than with a "neutral" perspective. Other challenges associated with the standard include inadequate documentation of evidence as well as the judgmental nature of medical improvement determinations. All these factors have implications for the consistency of CDR decisions. However, due to data limitations, GAO was unable to determine the extent to which these problems affect decisions to continue or discontinue benefits.
The Department of Veterans Affairs (VA) operates one of the nation's largest health care systems, including
• a health benefits program for over 26 million eligible veterans and
• a health care delivery program consisting of 173 hospitals, 376 outpatient clinics, 136 nursing homes, and 39 domiciliaries in fiscal year 1996.
The two programs are closely intertwined. For example, VA outpatient clinics are not allowed to use available resources to provide services to many veterans because (1) the services, such as prosthetics, are not covered under a particular veteran's health care benefits and (2) the clinics are not permitted under the law to sell noncovered services to veterans. In administering the veterans' health benefits program authorized under title 38 of the U.S. Code, some of VA's responsibilities are similar to those of the Health Care Financing Administration (HCFA) in administering Medicare benefits and to those of private insurance companies in administering health insurance policies. For example, VA is responsible for determining under the statute (1) which benefits veterans are eligible to receive, (2) whether and how much veterans must contribute toward the cost of their care, and (3) where veterans can obtain covered services (in other words, whether they must use VA-operated facilities or can obtain needed services from other providers at VA expense). Similarly, VA, like HCFA and private insurers, is responsible for ensuring that the health benefits provided to its beneficiaries—veterans—are (1) medically necessary and (2) provided in the most appropriate care setting (such as a hospital, nursing home, or outpatient clinic). In operating a health care delivery program, VA's role is similar to that of the major private sector health care delivery networks such as those operated by Columbia/HCA and Kaiser Permanente. For example, VA strives to ensure that its facilities (1) provide high quality care, (2) are used to optimum capacity, (3) are located where they are accessible to their target population, (4) provide good customer service, (5) offer potential patients services and amenities comparable to competing facilities, and (6) operate effective billing and collection systems. For fiscal year 1996, VA received an appropriation of about $16.6 billion to maintain and operate its facilities, which are expected to provide inpatient hospital care to 930,000 patients, nursing home care to 35,000 patients, and domiciliary care to 18,700 patients. In addition, VA outpatient clinics are expected to handle 25.3 million outpatient visits. Any person who served on active duty in the uniformed services for the minimum amount of time specified by law and who was discharged, released, or retired under other than dishonorable conditions is eligible for some VA health care benefits. The amount of required active duty service varies depending on when the person entered the military, and an eligible veteran's health care benefits depend on factors such as the presence and extent of a service-connected disability, income, and period or conditions of military service. Persons enlisting in one of the armed forces after September 7, 1980, and officers commissioned after October 16, 1981, must have completed 2 years of active duty or the full period of their initial service obligation to be eligible for benefits.
Veterans discharged at any time because of service-connected disabilities and those discharged because of personal hardship near the end of their service obligation are not held to this requirement. Also eligible are members of the armed forces' reserve components who were called to active duty and served the length of time for which they were activated. Although all veterans meeting the basic requirements are "eligible" for hospital, nursing home, and at least some outpatient care, the VA law establishes a complex priority system—based on such factors as the presence and extent of any service-connected disability, the incomes of veterans with nonservice-connected disabilities, and the type and purpose of care needed—to determine which services are covered and which veterans receive care within available resources. Generally, veterans can obtain health services only in VA-operated health care facilities. There are three primary exceptions:
• VA-operated nursing home and domiciliary care is augmented by contracts with community nursing homes and by per diem payments for veterans in state-operated veterans' homes.
• VA pays private sector physicians and other health care providers to extend care to certain veterans when the services needed are unavailable within the VA system or when the veterans live too far from a VA facility (commonly referred to as fee-basis care). VA has limited the use of fee-basis physicians primarily to veterans with service-connected disabilities.
• Veterans can obtain emergency hospitalization from any hospital and then be transferred to a VA hospital when their conditions stabilize.
In addition, veterans being treated in VA facilities can be provided specific scarce medical resources from other public and private providers through sharing agreements and contracts between VA and non-VA providers. All veterans' health care benefits include medically necessary hospital and nursing home care, but certain veterans, referred to as Category A, or mandatory care category, veterans, have the highest priority for receiving care. More specifically, VA must provide hospital care, and, if space and resources are available, may provide nursing home care to veterans who
• have service-connected disabilities,
• were discharged from the military for disabilities that were incurred or aggravated in the line of duty,
• are former prisoners of war,
• were exposed to certain toxic substances or ionizing radiation,
• served during the Mexican Border Period or World War I,
• receive disability compensation,
• receive nonservice-connected disability pension benefits, or
• have incomes below the means test threshold (as of January 1996, $21,001 for a single veteran or $25,204 for a veteran with one dependent, plus $1,404 for each additional dependent; the threshold arithmetic is illustrated in the sketch following the outpatient care list below).
For higher-income veterans who do not qualify under these conditions, VA may provide hospital and nursing home care if space and resources are available. These veterans, known as Category C, or discretionary care category, veterans, must pay a part of the cost of the care they receive. VA provides three basic levels of outpatient care benefits:
• comprehensive care, which includes all services needed to treat any medical condition;
• service-connected care, which is limited to treating conditions related to a service-connected disability; and
• hospital-related care, which provides only the outpatient services needed to (1) prepare for a hospital admission, (2) obviate the need for a hospital admission, or (3) complete treatment begun during a hospital stay.
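As a quick illustration of the means test figures quoted in the mandatory care list above, the sketch below computes the January 1996 income threshold for a given number of dependents. The function name and structure are illustrative only and are not drawn from VA guidance.

    # Illustrative only: January 1996 means test threshold figures quoted above.
    def means_test_threshold(dependents: int) -> int:
        if dependents == 0:
            return 21_001  # single veteran
        # $25,204 with one dependent, plus $1,404 for each additional dependent.
        return 25_204 + 1_404 * (dependents - 1)

    # Example: a veteran with three dependents -> 25,204 + 2 * 1,404 = 28,012.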
Separate mandatory and discretionary care categories apply to outpatient care. Only veterans who have service-connected disabilities rated at 50 percent or more (about 465,000 veterans) are in the mandatory care category for comprehensive outpatient care. VA may provide comprehensive outpatient care to veterans who (1) are former prisoners of war, (2) served during the Mexican Border Period or World War I, or (3) are housebound or in need of aid and attendance. In other words, all medically necessary outpatient care is covered for these groups of veterans, subject to the availability of space and resources. All veterans with service-connected disabilities are in the mandatory care category for treatment related to their disabilities. Veterans seeking outpatient services needed to treat medical conditions related to injuries suffered as a result of VA hospitalization or while participating in a VA rehabilitation program are also in the mandatory care category for such services. Other medically necessary care is noncovered unless the veteran also qualifies for comprehensive care or meets the conditions for hospital-related care. Veterans (1) with service-connected disabilities rated at 30 or 40 percent and (2) whose annual incomes do not exceed VA's pension rate for veterans in need of regular aid and attendance are in the mandatory care category for hospital-related outpatient care. VA may, to the extent resources permit, furnish limited hospital-related outpatient care to veterans not otherwise eligible for outpatient care, provided they agree to pay a part of the cost of care. For veterans qualifying for outpatient care only under the hospital-related care provisions, all other medically necessary outpatient care is noncovered. Figure 1.1 summarizes VA eligibility provisions. The distinction between "covered" and "noncovered" services in discussing veterans' health benefits is important because VA facilities are generally restricted to providing covered services to veterans. In addition, VA can sell health care services in only a few situations. Specifically, statutes authorize VA hospitals and outpatient clinics to enter into agreements to sell
• health care services to Department of Defense (DOD) and other federal agencies and
• specialized medical resources to federal and nonfederal hospitals, clinics, and medical schools.
VA cannot, however, sell health care services directly to veterans or others. To allow VA's resources to be more effectively used and avoid unnecessary duplication and overlap of activities, VA has been authorized for over 60 years to sell or share its resources with other federal agencies. For example, all VA medical centers within 50 miles of a DOD hospital currently have sharing agreements to provide one or more services to DOD beneficiaries. In 1989, the Congress enacted legislation specifically authorizing the use of Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) funds to reimburse VA for care provided to CHAMPUS beneficiaries under sharing agreements. As of April 1996, three VA medical centers were providing services to CHAMPUS beneficiaries. Finally, in June 1995, VA and DOD completed work on an agreement that will allow VA facilities to compete with private sector facilities to serve as providers under DOD's new TRICARE program. Since 1966, VA facilities have also had limited authority to share health care resources with federal and nonfederal hospitals, clinics, and medical schools.
This authority, however, is limited to sharing of "specialized medical resources," medical techniques, and education. Such resources include equipment, space, or personnel, which, because of their cost, limited availability, or unusual nature, are either unique in the medical community or can be fully used only through mutual use. VA facilities cannot provide routine patient care services to veterans' dependents or other nonveterans, even if they have the capacity to do so and the patients are willing to pay for the services. Similarly, VA facilities cannot sell noncovered services to veterans. This restriction primarily affects outpatient care because hospital care is a covered service for all veterans. However, routine outpatient care is not a covered service for most veterans, and VA cannot sell routine outpatient care to most veterans even if they are willing to pay for the care. In July 1995 and March 1996, respectively, we testified before the House and Senate Committees on Veterans' Affairs on major issues affecting reform of VA health care eligibility. At the request of the Chairman, Senate Committee on Veterans' Affairs, this report expands on the information presented at those hearings. Specifically, it discusses
• the evolution of the VA health care system and VA eligibility;
• the problems that VA's current eligibility and health care contracting provisions create for veterans and providers;
• the extent to which VA provides veterans with health care services for which they are not eligible;
• legislative proposals to reform VA eligibility and contracting rules and their potential effect on the ease of administration, equity to veterans, costs to VA, and clarity of eligibility for veterans' health benefits; and
• approaches that could be used to limit the budgetary effects of eligibility reforms.
In addressing these objectives, we relied primarily on the results of reviews that we conducted over the last 5 years that detailed problems in administering VA's outpatient eligibility provisions, compared VA benefits and eligibility with those of other public and private health benefits programs and with the veterans' health benefits programs in other countries, and assessed VA's role in a changing health care marketplace. A list of related GAO products is at the end of this report. In addition, in developing information on the evolution of the VA health care system and VA eligibility, we relied on the legislative history of the veterans' health care provisions of title 38 of the U.S. Code and articles and reports prepared by or for the Brookings Institution (1934), the House Committee on Veterans' Affairs (1967), the National Academy of Sciences (1977), VA's Commission on the Future Structure of Veterans Health Care, the Congressional Research Service, the Twentieth Century Fund (1974), and VA. In assessing the extent to which VA hospitals and clinics provide inappropriate and noncovered services, we relied primarily on studies prepared by VA researchers and VA's Office of Inspector General (OIG). In reviewing these studies, we paid particular attention to the underlying causes for the problems identified to determine the extent to which the problems were attributed to VA eligibility provisions. In evaluating eligibility reform proposals, we focused on those proposed by members of the Senate or House Veterans' Affairs Committees, VA, and the major veterans service organizations (VSO).
We focused on the extent to which the proposals would (1) change VA health care funding from discretionary to mandatory, (2) expand eligibility for VA health care services, (3) create a uniform benefit package(s), (4) guarantee availability of covered services, and (5) provide new sources of funding for expanded benefits. On the basis of this work and discussions with officials from VA and the major VSOs, we identified a series of issues that could be considered in future debate on eligibility reform. We did our work between March 1995 and June 1996 in accordance with generally accepted government auditing standards. The United States has a long tradition of providing benefits to those injured in military service, but the role of the federal government in providing for the health care needs of other veterans has evolved and expanded over time. The federal role, initially limited to a program of financial assistance for those injured in combat, has expanded to include a combination of financial assistance and direct provision of health care services to a wide range of combat and noncombat veterans. Just as VA's role in meeting veterans' health care needs has broadened over time, the role of public and private health insurance in meeting the health care needs of veterans (and other Americans) has also grown. About 90 percent of veterans now have public or private health insurance or both in addition to their VA health care benefits. As a result, many veterans now have multiple options for paying for basic hospital and physician services. Changes in the veteran population have also contributed to the evolution of VA from a system focused on treatment of war injuries to a system increasingly focused on treatment of veterans with no service-connected disabilities and on treatment of disabilities associated with aging. For example, the number of veterans is declining, a smaller share of the veteran population served during wartime, and a growing proportion of veterans are over age 65. Our work identified many difficult questions facing the Congress as it considers future changes in the mission of the veterans' health care system. For example, what do veterans perceive as the nation's obligation to meet their health care needs, and how does that perception differ from the commitment made by the Congress and the administration? Similarly, with the growth of public and private health insurance, are changes needed in VA's role as a safety net provider? Finally, with an aging veteran population, are changes needed in VA's role in meeting the long-term care needs of veterans? In the nation's early years, the federal role was limited to direct financial payments to veterans injured during combat; direct medical and hospital care was provided by the individual colonies, states, and communities. The first colonial law establishing veterans' benefits, enacted by the Pilgrims of Plymouth Colony in 1636, provided that any soldier injured in the war with the Pequot Indians would be maintained by the colony for the rest of his life. Other colonies enacted similar provisions. The Continental Congress, seeking to encourage enlistments during the Revolutionary War, provided federal compensation for veterans injured during the war and their dependents. Similarly, the first U.S. Congress passed a veterans' compensation law. The federal role began to expand in 1833 with the opening of the first domiciliary and medical facility for veterans—the U.S. Naval Home.
A second federal home for disabled and invalid soldiers—the Old Soldiers and Sailors Home—was authorized in 1851 and is still in operation in Washington, D.C. Although the federal role was no longer limited to financial support for war-disabled veterans, medical care was only an incidental part of the homes, which were primarily residential facilities. The federal role in veterans’ health care significantly expanded during and following the Civil War. During the war, the government operated temporary hospitals and domiciliaries in various parts of the country for disabled soldiers until they were physically able to return to their homes. Following the war, the number of disabled veterans, and veterans unable to cope with the economic struggle of civilian life, became so great that the government built a number of “homes” to provide domiciliary care. Incidental medical and hospital care was provided to residents for all diseases and injuries, whether or not they were service related. In addition to indigent and disabled veterans of the Civil War, eligibility for admission to the homes was subsequently extended to veterans of the Indian Wars, Spanish-American War, Mexican Border Period, and discharged regular members of the armed forces. The modern era of the veterans’ health care system began with the onset of World War I. During World War I a series of new veterans’ benefits was added: voluntary life insurance, allotments to take care of the family during service, reeducation of those disabled, disability compensation, and medical and hospital care for those suffering from wounds or diseases incurred in the service. Throughout the 1800s, the federal role had been limited to the provision of (1) compensation to war-disabled veterans and (2) domiciliary care and incidental medical care to veterans with injuries incurred during wartime service or to veterans who were incapable of earning a living because of a permanent disability, tuberculosis, or neuropsychiatric disability suffered after their wartime service. During World War I, however, Public Health Service (PHS) hospitals treated returning veterans and at the end of the war, several military hospitals were transferred to PHS to enable it to continue serving the growing veteran population. In 1921, those PHS hospitals primarily serving veterans were transferred to the newly established Veterans’ Bureau. Casualties returning from World War I soon overwhelmed the capacity of veterans’ hospitals to treat injured soldiers. The Congress responded by increasing the number of veterans’ hospitals with an emphasis on treatment of veterans’ disabling conditions. In 1921, eligibility for hospital care was expanded to include treatment for all service-connected conditions. After most of the immediate, postwar, service-connected medical problems of veterans were met, VA hospitals began to experience excess capacity instead of a shortage of beds. Proposals were made to close underutilized hospitals. The VSOs lobbied for free hospital care for medically indigent veterans without service-connected disabilities. The Congress, in 1924, gave wartime veterans with nonservice-connected conditions access to Veterans’ Bureau hospitals, provided space was available and the veterans signed an oath indicating they were unable to pay for their care. During the 1920s, three federal agencies—the Veterans Bureau, the Bureau of Pensions in the Interior Department, and the National Home for Disabled Volunteer Soldiers—administered various benefits for the nation’s veterans.
With the establishment of the Veterans Administration (VA) in 1930, previously fragmented care for veterans was consolidated under one agency. During the Great Depression, demand for VA hospital care was unprecedented. As part of efforts to curtail federal spending, President Roosevelt, in 1933, issued regulations making veterans ineligible for hospital treatment of nonservice-connected conditions. The following year, however, the Congress restored eligibility for treatment of nonservice-connected conditions. Subsequently, in 1937, President Roosevelt authorized construction of additional VA hospital beds to (1) meet the increased demand for neuropsychiatric care and treatment of tuberculosis and other respiratory illnesses and (2) provide more equitable geographic access to care. Rapidly rising demand for hospital care brought on by the onset of U.S. involvement in World War II led to construction and expansion of VA hospitals. Because of the heavy demand for care, World War II veterans were initially eligible only for treatment of service-connected disabilities. In 1943, however, new eligibility requirements were established for World War II veterans identical to those for World War I veterans. Demand for care was so great, however, that in March 1946 VA had a waiting list of over 26,000 veterans seeking care for nonservice-connected conditions. As had occurred following the end of World War I, the initial high demand for medical services for returning casualties soon declined and VA once again had excess hospital capacity. In 1947, the Congress created a presumption that a diagnosis of a chronic psychiatric condition within 2 years of discharge would be regarded as service-connected. The next significant expansion of hospital eligibility occurred in 1962, when legislation was enacted that defined as a service-connected disability any condition traceable to a period of military service, regardless of the cause or circumstances of its occurrence. Before that time, care for service-connected conditions was not assured unless they were incurred or aggravated during wartime service. In 1973, eligibility for hospital care was extended to treatment of nonservice-connected disabilities of peacetime veterans unable to defray the cost of care. Previously, treatment of nonservice-connected disabilities was limited to wartime veterans. Finally, in 1986, the Congress extended eligibility to higher-income veterans with no service-connected disabilities. Previously, only those veterans with nonservice-connected disabilities who signed a poverty oath were eligible for VA hospital care. To be eligible for VA hospital care, higher-income veterans must agree to contribute toward the cost of their care. Eligibility for outpatient care evolved separately. In 1960, the Congress first authorized outpatient care for veterans with nonservice-connected conditions when such care was needed in preparation for or as a follow-up to hospital care. Concerns were raised at the time that the expansion might not stop there:
“The possible adverse effects of the proposed legislation should also, I believe, be considered. This bill would for the first time mean that non-service-connected veterans would be receiving outpatient treatment even though we have endeavored to make revisions which would relate this only to hospital care. The outpatient treatment of the non-service-connected might be an opening wedge to a further extension of this type of medical treatment.”
Thirteen years later, the Veterans Health Care Expansion Act of 1973 (P.L. 93-82) further expanded eligibility for outpatient care.
The act (1) made veterans with service-connected disabilities rated at 80 percent or higher eligible for free comprehensive outpatient care and (2) authorized outpatient treatment for any nonservice-connected disability to “obviate the need of hospital admission.” Three years later, in 1976, the mandatory care category for free comprehensive outpatient services was extended to include veterans with service-connected disabilities rated at 50 percent or higher. In 1986, the Congress expanded eligibility for outpatient care to include higher-income veterans agreeing to contribute toward the cost of their care. Previously, only those veterans with nonservice-connected disabilities who signed a poverty oath were eligible for outpatient care. The last major expansion of outpatient eligibility occurred in 1988 when veterans with (1) service-connected disabilities rated at 30 or 40 percent or (2) incomes below the maximum pension rate were placed in the mandatory care category for outpatient treatment for prehospital and posthospital care and for care that would obviate the need for hospital care. When the VA health care system was established, there was no public or private health insurance program to assist veterans in paying for needed health care services. Private health insurance, which typically pays for services provided by physicians and health care facilities on a fee-for-service basis, began to emerge in the 1930s with the establishment of Blue Cross and Blue Shield and commercial plans. The industry expanded rapidly during the 1950s, and in 1959, the Federal Employees Health Benefits Act authorized the federal government to provide health care benefits to millions of federal employees and retirees and their dependents through private health insurance. By 1993, over 182 million Americans were covered by private health insurance. In 1965, the Congress enacted legislation establishing the two largest public health insurance programs—Medicare, serving elderly and disabled Americans, and Medicaid, a jointly funded federal-state program serving low-income Americans. The following year, the Congress established the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) to enable military retirees and the dependents of active duty and retired military personnel to obtain health care in the private sector when services are not available or not accessible in DOD facilities. Although each of the major public and private programs has a different target population, overlaps between target populations result in many veterans having coverage under multiple programs. Table 2.1 describes potential overlaps in populations served by the VA health care system and other health care programs. With the growth of public and private health insurance, more than 9 out of 10 veterans now have alternate health insurance coverage, decreasing the importance of VA’s safety net mission. (See fig. 2.1.) Veterans with higher incomes, alternate health insurance coverage, and no service-connected disabilities are significantly less likely to seek care from VA health care facilities than are veterans with service-connected disabilities, low incomes, and no health insurance. The following data illustrate:
• Over 82 percent of veterans with health insurance had never used VA, compared with about 56 percent of veterans with no health insurance.
• Over 88 percent of veterans with incomes of $40,000 or more had never used VA, compared with over 63 percent of veterans with incomes under $10,000.
• Over 70 percent of veterans with no service-connected disabilities had never used VA health care services, compared with 30 percent of those with service-connected disabilities.
Changes in the size and composition of the veteran population have also contributed to the evolution of the VA health care system from one primarily treating war-related injuries to one increasingly focused on veterans with no service-connected disabilities. As the nation’s large World War II and Korean War veteran populations age, their needs for nursing home and other long-term care services are increasing. The veteran population, which totaled about 26.4 million in 1995, is both declining and aging. The number of veterans has steadily declined since 1980 and is expected to decline at an accelerated rate through 2010. Between 1990 and 2010, VA projects the veteran population will decline 26 percent. (See fig. 2.2.) As the veteran population continues to age, the decrease will not be evenly distributed among age groups. The decline will be most notable among veterans under 65 years of age—from about 20.0 million to 11.5 million (42 percent). The number of veterans aged 65 to 84 will increase from 7.0 million to 8.9 million in the year 2000, then will drop to about 7.2 million by 2010. In contrast, the number of veterans aged 85 and older will increase more than eight-fold, from 154,000 to 1.3 million by 2010. At that time, veterans aged 85 and older will constitute about 6.3 percent of the veteran population. (See fig. 2.3.) Old age is often accompanied by the development of chronic health problems, such as heart disease, arthritis, and other ailments. These problems, important causes of disability among the elderly population, often result in the need for nursing home care or other long-term care services. With the veteran population continuing to age rapidly, VA faces a significant challenge in trying to meet increasing demand for nursing home care. Over 50 percent of veterans over 85 years old are expected to need nursing home care, compared with 13 percent of those 65 to 69 years old. Coinciding with the overall decline in the number of veterans is a decline in the percentage of the veteran population that served during wartime. Because of the higher death rate of veterans who served in World War II (they currently account for almost three of every four veteran deaths), the population of veterans who served during wartime will decrease faster than the total veteran population—35 percent versus 26 percent. VA projects the total number of wartime veterans will decline from 21.0 million in 1990 to 13.6 million in 2010. (See fig. 2.4.) Even more dramatic is the shift in the number of wartime veterans by period of service. In 1990, the largest group of wartime veterans was World War II veterans, followed by Vietnam and Korean War veterans, respectively. By 1995, however, deaths of World War II veterans had reached the point where Vietnam-era veterans outnumbered surviving World War II veterans by about 826,000. By 2010, Persian Gulf War veterans are expected to outnumber both Korean War and World War II veterans. (See fig. 2.5.) Most veterans who served during wartime were never exposed to combat; overall, only about 35 percent of U.S. veterans were actually exposed to combat. (See fig. 2.6.) About 8.3 percent of veterans have compensable service-connected disabilities.
Veterans who served during peacetime are almost twice as likely to have service-connected disabilities as veterans of the Korean War and only slightly less likely to have service-connected disabilities than Vietnam-era and Persian Gulf War veterans. Most likely to have service-connected disabilities are World War II veterans. (See fig. 2.7.) Of the over 2.2 million veterans with compensable service-connected disabilities, over half have disability ratings of 10 or 20 percent. Of the remaining veterans with service-connected disabilities, about 464,000 had disabilities rated at 50 percent or higher and 488,000 had disabilities rated at 30 or 40 percent. (See fig. 2.8.) Many of the health care benefits for which veterans are now eligible were added after they were discharged from the military. For example, most World War II and Korean War veterans were discharged before nursing home benefits were added to the VA system in 1964. Similarly, higher-income veterans were not eligible for VA health care until 1986, when the means test was added. More importantly, outpatient benefits, other than for treatment of service-connected disabilities, were not available even for pre- and posthospital care until 1960. And broader outpatient benefits to cover services needed to obviate the need for hospital care were not added until after the Vietnam War. In other words, not one of the three largest groups of veterans—World War II, Korean War, or Vietnam War—was discharged with a promise of comprehensive health care for both service-connected and nonservice-connected conditions. Although many of the health benefits for which veterans are now eligible were not covered at the time they were discharged, were servicemembers led to believe, either as an inducement to enlist or as a promise upon discharge, that the government would provide for their health care needs for the remainder of their lives? The first, and perhaps most important, issue to be addressed in considering changes in veterans’ health care eligibility is the nation’s commitment to its veterans. But what is and what should that commitment be? Since colonial times, there has been little doubt that servicemembers injured in combat are entitled to compensation for their injuries. There is less agreement, however, on the role and responsibility of the federal government in meeting the other health care needs of veterans. Decisions made with regard to what the nation’s commitment is to its veterans will largely drive decisions on whether eligibility distinctions should continue to be based on factors such as degree of service-connected disability and income. If a decision is made that all veterans should be eligible for the same comprehensive health benefits, then eligibility distinctions will, in the future, be used only to determine veterans’ relative priorities for care. If, however, a decision is made that certain veterans should be given more extensive benefits than others, then such distinctions will continue to be used to define the differences in benefits. For example, certain categories of veterans might be eligible for a broader range of services or lower cost sharing. The question then would become whether to keep the same distinctions as in the current law or base the distinctions on other factors. In three other countries that operated direct delivery systems for veterans (United Kingdom, Australia, and Canada), declining use of veterans’ hospitals prompted actions to open them to nonveterans. 
It was hoped that caring for community patients would allow the hospitals and staff to maintain their medical expertise and expand services. Should our veterans’ health care system similarly be opened to nonveterans? Among the options that could be considered would be extending veterans’ benefits to more dependents. If a veteran is uninsured and lacks health care options, his or her family is also likely to be uninsured and without adequate health care. Once a benefit has been established, it can be difficult to change the cost-sharing requirements. As new benefits are added, however, an opportunity exists to determine to what extent the government and the veteran will share the cost of the added benefits. Because of the limitations on coverage of routine outpatient services, VA’s health care safety net is structured more like a catastrophic health insurance plan than comprehensive health insurance. Most veterans are responsible for paying for routine health care services not needed to obviate the need for hospital care. For veterans with other public or private insurance, this limitation likely has a minimal effect on their use of health care services. But low-income veterans without public or private insurance must either use their own funds to obtain routine health care services or forgo needed care. An important issue, then, in considering eligibility reform is whether changes need to be made in VA’s safety net mission. Veterans frequently have unmet needs for nursing home and other long-term care services. Medicare and most private health insurance cover only short-term, post-acute nursing home, and home health care. Although private long-term care insurance is a growing market, the high cost of policies places such coverage out of the reach of many veterans. As a result, most veterans must pay for long-term nursing home and home care services out of pocket until they spend down most of their income and assets on health care and qualify for Medicaid. Although VA has a nursing home benefit, it is a discretionary benefit for all veterans. Should changes be made in the nursing home benefit to enable VA to meet the long-term care needs of more veterans? Because of the overlapping populations, changes in one health care program can have a significant effect on demand for care under other programs. For example, expanded availability of private health insurance would likely decrease demand for VA health care. Similarly, changes in the Medicare program, such as those proposed by some in the Congress, could affect future demand for VA health care services, although it is unclear whether they would increase or decrease demand for VA care. To what extent should changes in other health care programs affect the design of VA eligibility reforms? These issues are discussed in more detail in appendix I. Unlike public and private health insurance, the VA health benefits program does not (1) have a well-defined benefit package or (2) entitle veterans to services or guarantee that services are covered. Similarly, as a health care provider, VA, unlike private sector providers, is severely limited in its ability to both buy health care services from and sell health care services to individuals and other providers. These differences help make VA’s eligibility provisions a source of frustration for veterans, VA physicians, and VA’s administrative staff. 
The problems created by these provisions include the following:
• Veterans are often uncertain about which services they are eligible to receive and what right they have to demand that VA provide them.
• Physicians and administrative staff find the eligibility provisions hard to administer.
• Veterans have uneven access to care because the availability of covered services is not guaranteed.
• Physicians are put in the difficult position of having to deny needed, but noncovered, health care services to veterans.
Because of these problems, veterans may be unable to consistently obtain needed health care services from VA facilities. Designing solutions to these problems will require both administrative and legislative actions. VA and the Congress will face many difficult choices. For example, in designing legislative solutions, decisions will need to be made on whether the availability of services should be guaranteed for one or more groups of veterans and whether a defined benefit package should be developed. Because public and private insurance policies generally have a defined benefit package, both policyholders and providers generally know in advance which services are covered and what limitations apply to the availability of services. Defined benefit packages also preserve insurers’ flexibility by permitting them to trade benefits against program costs. For example, by eliminating certain benefits (such as dental care or prescription drugs), an insurer can restrain the growth in premiums. An insurer can also offer multiple policies with varying benefits, but individuals with the same policy have the same benefits. Like private insurance, VA essentially offers multiple health benefits “policies” with varying benefits. Unlike private insurance, however, veterans with the same “policy” will not necessarily receive the same services. Only those veterans whose “policy” covers all medically necessary care—primarily veterans with service-connected disabilities rated at 50 percent or higher—have clearly defined, uniform benefits. Because coverage of outpatient services for most veterans varies on the basis of their medical conditions, a veteran may be eligible to receive different services at different times. For example, if a veteran with no service-connected disabilities is scheduled for admission to a VA hospital for elective surgery, he or she is eligible to receive any outpatient service needed to prepare for the hospital admission, including a physical examination with X rays and blood tests. However, if the same veteran sought a routine physical examination from a VA outpatient clinic, he or she would not be eligible because there is no apparent need for hospital-related care. The benefit packages under public and private insurance programs frequently cover preventive health services, such as routine physical examinations and immunizations. In contrast, VA health benefits are focused on the provision of medical services needed for treatment of a “disability.” For example, a woman veteran may obtain treatment for the complications of pregnancy, but may not obtain prenatal care or delivery services for a routine pregnancy through the VA health care system. Because of the lack of a well-defined benefit package, veterans are often confused by VA’s complex eligibility provisions. The services they can get from VA depend on such factors as the presence and extent of any service-connected disability, income, period of service, and the seriousness of the condition.
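To illustrate the kind of branching these factors produce, the following is a deliberately simplified sketch in Python. The thresholds, categories, and return values are illustrative assumptions only; they are not VA’s statutory criteria and omit many factors discussed in this report (period of service, exposure-related care, enrollment priorities, and the availability of space and resources).

```python
# Simplified, illustrative sketch of the factors described above.
# NOT VA's statutory criteria: thresholds and categories are assumptions
# chosen only to show how several determinations interact.

def outpatient_coverage(sc_rating: int, low_income: bool,
                        hospital_related: bool, obviates_admission: bool) -> str:
    """Return a rough description of outpatient coverage for one visit."""
    if sc_rating >= 50:
        # Highest-rated service-connected veterans: comprehensive outpatient care.
        return "comprehensive outpatient care"
    if sc_rating >= 30 or low_income:
        # Mandatory care category in the text: pre-/posthospital care and care
        # that would obviate the need for hospital care.
        if hospital_related or obviates_admission:
            return "covered (hospital-related or obviates admission)"
        return "not covered (routine care)"
    # Remaining veterans: discretionary category, subject to space and resources.
    if hospital_related or obviates_admission:
        return "possibly covered, subject to available space and resources"
    return "not covered"

# The elective-surgery example from the text: a veteran with no
# service-connected disability seeking a presurgical examination.
print(outpatient_coverage(sc_rating=0, low_income=False,
                          hospital_related=True, obviates_admission=False))
```

Even in this stripped-down form, the outcome turns on several interacting determinations, which is the complexity that veterans and VA staff describe.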
The VA system limits veterans’ access to covered services (that is, it rations care to certain veterans), rather than narrowing the scope of services offered to all veterans in the same coverage group. Further adding to veterans’ confusion about which health care services they are eligible to receive from VA, title 38 of the U.S. Code specifies only the types of medical services that cannot be provided on an outpatient basis. Except for service-connected disabilities, VA outpatient clinics generally cannot provide, for example,
• prosthetic devices, such as wheelchairs, crutches, eyeglasses, and hearing aids, to veterans not eligible for comprehensive outpatient services;
• dental care to most veterans unless they were examined and had their treatment started while in a VA hospital; and
• routine prenatal care and delivery services.
Veterans are not the only ones confused by VA eligibility provisions. Those tasked with applying and enforcing the provisions on a daily basis—VA physicians and administrative staff—express similar frustration in attempting to interpret the provisions. Although the criterion limiting outpatient services to those needed to obviate the need for hospitalization is most often cited as the primary source of frustration, VA administrative staff must also enforce a series of other requirements, which add administrative costs not typically incurred under other public or private insurance programs. Under this criterion, the decision to furnish outpatient care
“shall be based on the physician’s judgment that the medical services to be provided are necessary to evaluate or treat a disability that would normally require hospital admission, or which, if untreated would reasonably be expected to require hospital care in the immediate future. . . .”
To assess medical centers’ implementation of this criterion, we used medical profiles of six veterans developed from actual medical records and presented them to 19 medical centers for eligibility determinations. At these 19 centers, interpretations of the criterion ranged from permissive (care for any medical condition) to restrictive (care only for certain medical conditions). In other words, from the veteran’s perspective, access to VA care depends greatly on which medical center he or she visits. For example, if one veteran we profiled had visited all 19 medical centers, he would have been determined eligible by 10 centers but ineligible by 9 others. As one VA official observed, the criterion
“. . . is so vaguely worded that every doctor can come up with one or more interpretations that will suit any situation. . . . Having no clear policy, we have no uniformity. The same patient with the same condition may be denied care by one physician, only to walk out of the clinic the next day with a handful of prescriptions supplied by the doctor in the next office.”
With thousands of VA physicians making eligibility decisions each working day, the number of potential interpretations is large. In addition to interpreting the obviate-the-need criterion, VA physicians or administrative staff must evaluate a series of other eligibility requirements before deciding whether individual veterans are eligible for the health care services they seek.
For example, they must
• determine whether the disability for which care is being sought is service-connected or aggravating a service-connected disability, because different eligibility rules apply to care for service-connected and nonservice-connected disabilities;
• determine the disability rating for veterans with service-connected disabilities because the outpatient services they are eligible for and their priority for care depend on their rating;
• determine the income and assets of veterans with no service-connected disabilities because their eligibility for (and priority for receiving) care depends on a determination of their ability to pay for care; and
• determine whether the veteran’s medical condition may have been related to exposure to toxic substances or environmental hazards during service in Desert Storm or Vietnam, in which case care may be provided without regard to other eligibility provisions.
Under private health insurance, Medicare, and Medicaid, the coverage of services is guaranteed. For example, all beneficiaries who meet the basic eligibility requirements for Medicare are entitled to receive all medically necessary care covered under the Medicare part A benefit package. Similarly, those Medicare beneficiaries who enroll for part B benefits are entitled to receive all medically necessary care covered under the part B benefit package. Medicare is authorized to spend as much as necessary to pay for covered services, creating guaranteed access to covered services. Under private health insurance, policyholders are essentially guaranteed coverage of medically necessary services under their benefit package. In other words, under both Medicare and private insurance, the insurer—either the government in the case of Medicare or an insurance company in the case of private health insurance—assumes the financial risk for paying for covered services. Under the VA health care system, however, the government does not assume the same degree of financial risk for providing covered services. Being in the mandatory care category for VA health care services does not entitle veterans to, or guarantee coverage of, needed services. The VA health care system is funded by a fixed annual appropriation; once appropriated funds have been expended, the VA health care system is not allowed to provide additional health care services—even to veterans in the mandatory care category. Although title 38 of the U.S. Code contains frequent references to services that “shall” or “must” be provided to mandatory care group veterans, in practical application the terms mean that services “shall” or “must” be provided up to the amount the Congress has authorized to be spent. Being in the mandatory care category essentially gives veterans a higher priority for treatment than veterans in the discretionary care category. In effect, veterans, rather than the government, assume a significant portion of the financial risk in the VA health care system because there is no guarantee that sufficient funds will be appropriated to enable the government to provide services to all veterans seeking care. Historically, however, sufficient funds have been appropriated to meet the health care needs of all veterans in the mandatory care category as well as most of those in the discretionary care categories. Rationing of health care has occurred when individual facilities or programs have run short of funds because of unanticipated demand, inefficient operations, or inequitable resource allocation.
Because the provision of VA outpatient services is conditioned on the availability of space and resources, veterans cannot be assured that health care services are available when they need them. Even veterans in the mandatory care category are theoretically limited to health care services that can be provided with available space and resources. If demand for VA care exceeds the capacity of the system or of an individual facility to provide care, then health care services are rationed. The Congress established general priorities for VA to use in rationing outpatient care when resources are not available to care for all veterans. VA delegated rationing decisions to its medical centers; that is, each must independently make choices about when and how to ration care. Using a questionnaire, we obtained information from VA’s 158 medical centers on their rationing practices. In fiscal year 1991, 118 centers reported that they rationed outpatient care for nonservice-connected conditions and 40 reported no rationing. Rationing generally occurred because resources did not always match veterans’ demands for care. When the 118 centers rationed care, they also used differing methods. Some rationed care according to economic status, others by medical service, and still others by medical condition. The method used can greatly affect who is turned away. For example, rationing by economic status will help ensure that veterans of similar financial means are treated similarly. On the other hand, rationing by medical service or medical condition helps ensure that veterans with similar medical needs are treated the same way. The 118 medical centers’ varying rationing practices resulted in significant inconsistencies in veterans’ access to care both among and within centers. For example, higher-income veterans frequently received care at many medical centers, while lower-income veterans or those who also had service-connected disabilities were turned away at other centers. Some centers that rationed care by either medical service or medical condition sometimes turned away lower-income veterans who needed certain types of services while caring for higher-income veterans who needed other types of services. A recent VA survey of its medical centers found that 6 of 162 facilities had either turned away or provided only a single limited treatment to category A (mandatory care) veterans who needed hospital care. The survey also found that 22 VA outpatient clinics had denied treatment or provided only a single treatment to category A veterans. One major source of frustration for VA facilities is their inability to provide needed health care services to veterans when those services are not covered under their veterans’ benefits. Unlike private sector physicians, who can generally provide any available outpatient service to patients willing to pay, VA facilities and physicians are generally unable to provide noncovered services to veterans. In the private sector, physicians and clinics can sell their services to any person regardless of whether the service is covered by insurance. Essentially, the patient assumes the financial responsibility for any services not covered under his or her health insurance. Although VA health care facilities are in general restricted to use by veterans, VA actually has greater authority to sell health care services to, for example, medical school hospitals serving nonveterans through sharing agreements than it does to sell the same services directly to veterans. 
Specifically, VA hospitals and clinics cannot, under current law, sell veterans those services not covered under their veterans’ health care benefits even if the veterans (1) have public or private health insurance that would pay for the care or (2) agree to pay for the services out of their own funds. By contrast, VA hospitals and clinics can share or sell any available health care service to (1) other federal health care facilities and (2) CHAMPUS beneficiaries. VA facilities can also share with federal and nonfederal hospitals, clinics, and medical schools, but such sharing is limited primarily to sharing of specialized medical resources. VA has no authority to sell these or other health care services directly to nonveterans. VA’s inability to sell noncovered health care services to veterans makes eligibility decisions more difficult. For private sector providers, a determination of eligibility under public or private health insurance is essentially a determination of the source of payment; if the service is not covered under the patient’s insurance, the physician can still provide the service and bill the patient. But for VA physicians, a determination that a service is not covered under a veteran’s health benefits means that the patient must be denied care. Even if the patient has private health insurance that would pay for the care or is willing to purchase the service, VA physicians are not allowed to provide noncovered services. This puts the physician in the difficult position of examining veterans to identify their need for health care but then turning them away without providing needed health care services if the service is not one the veteran is eligible to receive from VA. In a 1993 review, we examined veterans’ efforts to obtain care from alternative sources when VA medical centers did not provide it. Through discussions with 198 veterans turned away at six medical centers, we learned that 85 percent obtained needed care after VA medical centers turned them away. Most obtained care outside the VA system, but some veterans returned to VA for care, either at the same center that turned them away or at another center. The 198 veterans turned away needed varying levels of medical care. Some had requested medications for chronic medical conditions, such as diabetes or hypertension. Others presented new conditions that were as yet undiagnosed. In some cases, the conditions, if left untreated, could be ultimately life-threatening, such as high blood pressure or cancer. In other cases, the conditions were potentially less serious, such as psoriasis. Developing solutions to the problems discussed in this chapter will require both administrative and legislative actions. Several approaches could be used to improve veterans’ equity of access to VA health care services without legislation. First, VA could better define the conditions under which the provision of outpatient care would obviate the need for hospitalization. Such action would help promote consistent application of eligibility restrictions, but VA physicians would still be placed in the difficult position of having to deny needed health care services to veterans when treatment of their conditions would not obviate the need for hospitalization. This part of the problem can be addressed only through legislative action to (1) make veterans eligible for the full range of outpatient services or (2) authorize VA to sell noncovered services to veterans. 
A second approach VA could take to reduce inconsistencies in veterans’ access to care would be to better match the resources of veterans integrated service networks (VISN) and individual medical centers with the volume and demographic makeup of eligible veterans requesting services at each center. A third approach to improving equity of access would be to place greater emphasis on use of the fee-basis program to equalize access for those veterans with service-connected disabilities who do not live near a VA facility or who live near a facility offering limited services. Solutions to some of the eligibility-related problems would, however, require changes in law. For example, legislation would be needed before VA could (1) sell noncovered services to veterans, (2) provide prostheses and equipment to most veterans on an outpatient basis, (3) admit veterans with no service-connected disabilities directly to community nursing homes, (4) develop uniform benefit packages, or (5) provide routine prenatal and maternity care. An important part of the decision about the nation’s commitment to its veterans is the extent to which VA health care benefits are “earned” benefits, which the government should have a legal obligation to provide. Currently, the provision of VA health care services, even for treatment of service-connected disabilities, is discretionary. Guaranteed benefits would have important advantages for veterans. For example, veterans with guaranteed benefits would no longer face uncertainty about whether health care services will be available when they need them. Guaranteed funding, however, could significantly increase government spending unless limits are placed on the number of veterans covered by the entitlement. One way to control the increase in workload likely to be generated by eligibility expansions is to develop a defined benefit package patterned after public and private health insurance. This could be used to trade off services veterans obtain from VA against the level of funding available. VA could adjust the benefit package periodically on the basis of the availability of resources. The significance of VA eligibility restrictions could be lessened if legislation were enacted authorizing VA to sell to veterans those health care services not covered under their veterans’ health benefits. With enactment of such legislation, VA physicians would no longer be placed in the difficult position of having to deny needed health care services to veterans when those services are not covered under their health benefits package. Instead, physicians, or administrative staff, would decide whether the veteran would be expected to pay for the service. Eligibility reform would address some, but not most, of veterans’ unmet health care needs. This is because many of the problems veterans face in obtaining health care services appear to relate to distance from a VA facility or the availability of the specialized services they need rather than to their eligibility to receive those services from VA. Legislation to expand VA’s authority to purchase care from private sector providers would be needed to address unmet needs created by geographic inaccessibility. These issues, including advantages and disadvantages of alternate approaches where appropriate, are addressed in more detail in appendix II. VA may be spending billions of dollars providing health care services to veterans who are not eligible for the services provided.
VA officials estimate that 20 percent of the patients treated in their hospitals do not need hospital care but would not be eligible to receive the services they are provided on an outpatient basis. In addition, VA’s OIG estimated that from $321 million to $831 million of the money VA spent on outpatient care in fiscal year 1992 was used to provide veterans outpatient services that they were not eligible to receive. VA cites a series of studies to support its view that 20 percent of VA hospital patients were admitted to circumvent restrictions on their eligibility to receive needed health care services on an outpatient basis. Our review of the studies, however, revealed that they do not contain the types of data needed to link nonacute admissions (meaning the patients did not need to be admitted to the hospital) to eligibility restrictions. The studies, and reviews conducted by the OIG, suggest that most of the nonacute admissions were the result of inefficiencies in VA facilities and conservative physician practice patterns. If most nonacute admissions are caused by inefficiencies rather than ineligible treatments, then changes in the law to expand eligibility would probably not significantly reduce nonacute admissions to VA hospitals. VA’s announced plans to implement a preadmission certification program, if the program is effectively implemented, could essentially eliminate nonacute admissions with or without eligibility reform. As a result, it has important implications for veterans. If 20 percent of VA’s hospital patients would not be eligible to receive needed health care services on an outpatient basis, then a preadmission certification program that denies admission of patients not needing a hospital level of care could result in significant unmet health care needs. On the other hand, if treatment of most of the patients on an outpatient basis would obviate the need for hospital care, then the certification program would reduce costs without creating unmet needs. VA studies issued in 1991 and 1993 found that over 40 percent of the admissions to VA acute care hospitals could have been avoided if the patients had been treated on an outpatient basis. VA officials contend that these studies show that remaining restrictions on veterans’ eligibility for outpatient care are causing inappropriate hospitalizations. In addition, VA officials cite anecdotes to suggest that its hospitals are admitting patients who do not need hospital care in order to give them crutches and eyeglasses they are not eligible to receive on an outpatient basis. They estimate that 20 percent of all VA hospitalizations could be avoided if eligibility were expanded to give all veterans coverage of comprehensive outpatient care. Our review, however, found little basis for linking most inappropriate hospitalizations to VA eligibility provisions. A 1991 VA-funded study of admissions to VA acute medical and surgical bed sections estimated that 43 percent (+/- 3 percent) of admissions were nonacute. Nonacute admissions in the 50 randomly selected VA hospitals ranged from 25 to 72 percent. A 1993 study by VA researchers reported similar findings. At the 24 VA hospitals studied, 47 percent of admissions and 45 percent of days of care in acute medical wards were nonacute; 64 percent of admissions and 34 percent of days of care in surgical wards were nonacute. 
VA officials believe that 20 percent of veterans admitted to VA hospitals are admitted to provide them services that they are not eligible to receive on an outpatient basis. In addition, they believe that veterans admitted to VA hospitals to circumvent outpatient eligibility restrictions are kept in the hospital an average of 7 days. In other words, VA estimates that it is spending over $750 million a year to provide noncovered outpatient services to veterans on an inpatient basis. We believe that VA overestimates the extent to which it provides noncovered services to veterans on an inpatient basis to circumvent the law. Linking the problems identified in the studies to eligibility restrictions is problematic because the studies did not contain the types of data needed to make such a link. Specifically, the studies did not ascertain the eligibility category of the veterans. For example, the studies did not determine whether the patients inappropriately admitted to VA hospitals had service-connected or nonservice-connected disabilities, the degree of any service-connected disability, whether they were in the mandatory or discretionary care category for outpatient care, or whether they would have been eligible to receive the services they needed on an outpatient basis. Had such information been included in the studies, it would have been possible to determine whether a higher incidence of nonacute admissions occurred for veterans eligible for only hospital-related outpatient services than for those eligible for comprehensive outpatient services. The studies point more toward inefficiency, conservative physician practice patterns, and the slow development of ambulatory care alternatives as the primary causes of nonacute admissions. Our evaluation of the studies and VA’s efforts to link their findings to the need for eligibility reform are discussed in more detail in appendix V. Similarly, while the anecdotes VA cites, such as one about a veteran admitted to a VA hospital in order to get a pair of crutches, represent real limitations in VA eligibility provisions that need to be addressed, VA lacks data to show how many inappropriate hospital admissions resulted from the limitations. For example, how many of the approximately 7,000 patients admitted to VA hospitals in fiscal year 1994 for fractures of the arms and legs were treated on an outpatient basis and then admitted for the purpose of providing crutches? Only 765 of the 7,000 admissions were for 1 day, the most likely length of stay for patients admitted to enable VA to give them a pair of crutches or other routine outpatient care. In a May 10, 1996, letter to the Ranking Minority Member of the Senate Committee on Veterans’ Affairs, the Veterans Health Administration (VHA) said that not all nonacute admissions are the result of eligibility limitations but that such limitations have been the precursor explanation influencing many of the more specific clinical reasons documented in the medical records. VHA said that it has very conservatively estimated that less than half of the totally nonacute admissions can be attributed to the need for eligibility reforms and thus could be shifted to alternative levels of care.
VHA’s estimate of nonacute admissions attributable to eligibility restrictions is not conservative because VHA applied its 20-percent assumption to all admissions, including (1) admissions to long-term psychiatric and intermediate care units, even though the studies address only acute medical and surgical care, and (2) admissions of veterans already eligible for comprehensive outpatient services (veterans with service-connected disabilities rated at 50 percent or higher, former prisoners of war, World War I veterans, and veterans receiving a pension with aid and attendance). Shifting the number of patients VA assumed would be shifted to outpatient settings, drawing only on acute medical and surgical wards and only on veterans not already eligible for comprehensive outpatient care, would require VA to shift over 30 percent of acute medical and surgical admissions. Studies by the VA OIG show problems in VA’s enforcement of eligibility provisions for outpatient care that have continued for over 12 years. VA has yet to initiate action to strengthen enforcement of its eligibility requirements, stating that rather than enforce current requirements, it would seek eligibility reforms that would make the provision of the services legal. In a 1983 review at nine VA medical centers, the OIG found treatment of ineligible veterans ranging from 7.2 percent to 26.8 percent of outpatient visits. The study evaluated only determinations of whether outpatient care provided to veterans with nonservice-connected disabilities was necessary to obviate the need for hospital care or reasonably necessary to complete hospital care for which the veteran was eligible. Although medical center directors generally agreed with the findings and promised corrective actions, the OIG, in subsequent reviews completed in 1991 through 1992, identified a continued and possibly growing problem. For example, the OIG found the following:
• About 24 percent of the outpatient visits reviewed at the Muskogee, Oklahoma, medical center were provided to veterans not eligible for the care provided. The OIG reviewed a random sample of visits provided to veterans with service-connected disabilities rated at 20 percent or lower and veterans with no service-connected disabilities who were not receiving VA pension benefits.
• About 37 percent of the outpatient visits reviewed at the Fort Lyon, Colorado, medical center involved veterans determined to be ineligible for the outpatient services provided. The OIG found that the medical center did not have an effective system to ensure that eligibility certifications were complete and current.
• About 38 percent of the outpatient visits reviewed at the Denver medical center were for treatments for which the veteran was not eligible. The OIG found veterans with nonservice-connected disabilities whose outpatient treatment (1) was not discontinued after their conditions became stable, (2) was for conditions unrelated to the condition for which they were hospitalized, and (3) was not needed to obviate the need for immediate hospitalization.
In a review of the Allen Park, Michigan, medical center, the OIG found that the outpatient clinic was incorrectly reporting discretionary care patients as mandatory care patients. The OIG estimated that about one-half of the patients and one-third of outpatient visits were provided to veterans in the discretionary care category.
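The “over 30 percent” figure cited above reflects a simple re-basing of VHA’s 20-percent assumption onto a narrower pool of admissions. The following is a symbolic sketch of that arithmetic; the shares m and e are illustrative notation and are not values reported in the underlying studies.

```latex
% Illustrative re-basing arithmetic; m and e are notation, not reported values.
\[
0.20\,A \;=\; s \cdot m(1-e)A
\qquad\Longrightarrow\qquad
s \;=\; \frac{0.20}{m(1-e)},
\]
% where A is total admissions, m is the share of admissions to acute medical and
% surgical wards, e is the share of those admissions for veterans already eligible
% for comprehensive outpatient care, and s is the share of the narrower pool that
% would have to be shifted. Whenever m(1-e) < 2/3, the required share s exceeds 30 percent.
```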
Further, the OIG estimated that more than 50 percent of the visits provided to veterans in the discretionary care category were provided for ineligible conditions. The OIG estimated that from $321 million to $831 million of the $1 billion to $1.5 billion VA spent on discretionary outpatient care in fiscal year 1992 may have been for ineligible outpatient treatments. As of April 1996, VHA had not issued guidelines to ensure that outpatient visits are properly reported in accordance with outpatient eligibility criteria. The OIG observed:
“. . . VHA has never requested a legal opinion of the meaning or intent of the language. Also, we are unaware of any attempt by VHA to define the term in its own program guides or other instructions to clinical staff. Instead, VHA’s practice has been to allow each clinician to interpret its meaning and application for each individual patient. In practice, we found the concept is either ignored or perfunctorily applied to every treatment provided to every patient.”
VHA responded:
“The phrase ‘obviate the need for hospital care’ is, however, a very difficult, if not impossible concept to define and to apply at the clinical level. It is one of the major problems clinicians face in attempting to determine eligibility for treatment. Often, conditions which appear stable and chronic, will deteriorate and result in hospitalization if treatment is discontinued. The decision to obviate the need for hospital care is made on individual cases by the clinician caring for the patient . . . .”
The OIG replied:
“We do not believe there is a basis to conclude it is an ‘impossible concept to define,’ rather the absence of a definition creates a significant weakness in controls over VA’s outpatient programs. Without a policy definition or other instructions to clinical staff, inconsistent application of criteria among facilities and clinicians is certain.”
VHA officials said that they have no plans to further define the concept of obviating the need for hospital care. They said that the practice of medicine does not determine whether to treat patients on the basis of whether they would otherwise be hospitalized. VHA is focusing its efforts on legislation to expand outpatient eligibility rules to eliminate the obviate-the-need provisions and permit VA facilities to provide comprehensive health care services to all veterans. VA submitted such a legislative proposal to the Congress in September 1995. In its May 10, 1996, letter, VHA said that VA’s General Counsel found that VHA had defined the concept of obviating the need for hospitalization reasonably well in its guidance. VHA said that what GAO does not recognize, and has not assessed, is that applying the guidance at the clinical level does not automatically result in the type of consistency of application GAO seeks because of the complexities presented by each patient and the decisions of the clinicians providing the care. We do recognize, and have assessed, the inconsistencies that result from application of the VA guidance at the clinical level. As discussed in chapter 3, we asked clinicians at 19 VA medical centers to make eligibility determinations for six veterans based on medical profiles developed from actual medical records. The interpretations ranged from permissive (care for any condition) to restrictive (care only for certain medical conditions). We agree with VHA that because of differences among patients and differences in the way doctors view patients, there will always be inconsistencies in how patients are treated.
Clearer guidance, however, should help reduce the level of inconsistency. VHA also said that while documentation may have been lacking to demonstrate that the care provided was consistent with the guidance, it should not be assumed on the basis of the OIG study that the care was inappropriate or inadvisable or that it was not necessary to obviate the need for hospitalization. The results of the OIG’s study of one facility should not, VHA said, be extrapolated to the system. The OIG report notes, however, that the facility was
“selected as the review site in consultation with VHA program officials because it was considered to be a typical outpatient environment in an urban tertiary care facility.”
In addition, the report found lax enforcement of eligibility provisions at many other medical centers as described previously. One of the recommendations in the report was that VHA conduct reviews of each facility’s outpatient workload in order to identify the proportion of visits properly classified as mandatory, discretionary, and ineligible using the definitions relevant to current law. VHA, however, was apparently unwilling to conduct such reviews, which could have disproved the OIG’s findings or shown the problems to be isolated to a few facilities. As of June 1996, VHA had not conducted the reviews. Many issues need to be addressed in strengthening enforcement of VA eligibility provisions. Strict enforcement of VA eligibility requirements, or VA’s planned implementation of a preadmission certification program, could increase veterans’ unmet health care needs. Enforcement of existing eligibility rules, with VHA’s interpretation of the obviate-the-need criterion, would force many veterans to seek routine outpatient care outside the VA system or forgo needed health care. Similarly, to the extent that VA hospitals admit veterans in order to provide health care services the veterans are not eligible to receive as outpatients, preadmission certification procedures to prevent admission of patients who do not need a hospital level of care could increase unmet needs. The VA health care benefit was not designed to meet all of the health care needs of most veterans. Under current law, VA is intended to provide comprehensive health care services primarily to veterans with service-connected disabilities rated at 50 percent or higher. Other veterans must find health care services from other sources when the needed services exceed the limits of their VA eligibility or if VA lacks the resources to provide covered services. Unlike private sector providers, VA facilities are not financially at risk for inappropriate admissions, unnecessary days of care, or treatment of ineligible beneficiaries. Private sector health care providers are facing increasing pressures both from private health insurers and public health benefits programs such as Medicare and Medicaid to eliminate inappropriate hospitalizations and reduce hospital lengths of stay. For example, private health insurers increasingly use preadmission screening to ensure the medical necessity of hospital admissions and set limits on approved lengths of stay for their policyholders. While private sector hospitals are not prevented from admitting patients without an insurer’s authorization, the hospital and the patient, rather than the insurer, become financially responsible for the care. Significant savings can accrue from shifting a sizable portion of VA’s inpatient workload to other settings if entire wards or facilities are closed.
Current eligibility provisions do not, however, appear to prevent VA from shifting much of its current workload to ambulatory care settings through administrative actions. Twice before, in 1960 and 1973, the Congress expanded VA outpatient eligibility for the express purpose of reducing inappropriate admissions to and unnecessary days of care in VA hospitals. In 1960, the Congress enacted Public Law 86-639 authorizing provision of outpatient care to veterans with nonservice-connected conditions if such care was needed in preparation for or as a follow-up to hospital care. VA hospitals are still not effectively using this authority more than 30 years after the enactment of this law. Among the primary reasons for nonacute days of care identified in the studies discussed in this chapter are premature admission of patients and delayed discharge of patients who could have been treated as outpatients. Issues related to the enforcement of VA eligibility requirements and the potential effects of eligibility expansions on nonacute admissions to VA hospitals are discussed in more detail in appendix III. Each of the eligibility reform proposals developed during the past year would make VA benefits easier to understand and administer. Four of the proposals would retain the discretionary funding of VA health care but would expand the number of veterans eligible for comprehensive VA outpatient services from about 465,000 to over 26 million. Such expansions are likely to generate significant new demand for VA care. If appropriations are not increased to satisfy the increased demand, VA faces the prospect of extensive rationing, including turning away many current users. The fifth proposal, developed by the American Legion, would avert the potential for increased rationing by converting veterans’ health benefits into a true entitlement for about 9 million to 11 million veterans, potentially adding billions of dollars to VA appropriations. Other veterans, and veterans’ dependents, would be allowed to buy into VA managed care plans. Our work identified a number of issues concerning the potential effect of the eligibility reform proposals on demand for VA health care services. For example, to what extent would increased demand for outpatient services result in corresponding increases in demand for hospital and nursing home care? Similarly, would VA efforts to improve customer service and make VA care more accessible to veterans further increase demand? Although each of the five eligibility reform proposals would significantly expand eligibility for VA health care, the House Veterans’ Affairs Committee bill would provide the most modest expansion. Table 5.1 compares the key provisions of the five proposals. Following are other major provisions of eligibility reform proposals: • S. 1345 (VA) (1) expands the definition of covered services to include virtually any necessary inpatient or outpatient care, drugs, supplies, or appliances and (2) allows VA to retain a portion of third-party recoveries. • S. 1563 (VSO) (1) includes nursing home care as mandatory service; (2) provides that the mandatory care category would include catastrophically disabled veterans; (3) allows adult dependents to become eligible for VA care, provided they reimburse VA; and (4) allows VA to bill and retain collections from Medicare. • H.R. 
1385 (Montgomery/Edwards) (1) requires VA to provide veterans similar access regardless of their home state, (2) allows VA to use a system of enrollment and priorities for care, and (3) allows VA to retain a portion of third-party recoveries to expand outpatient care. • H.R. 3118 (House Veterans’ Affairs) (1) requires VA to establish a system of annual enrollment based on priorities for care, and (2) creates a new category of priority for catastrophically disabled veterans. • American Legion proposal (1) funds VA appropriations on a capitated basis; (2) establishes separate benefit packages for basic, supplemental, and specialized services; (3) allows VA to bill and retain payments from Medicare, Medicaid, the Federal Employees’ Health Benefits Program, and private insurers; (4) allows dependents to enroll in VA health plans; (5) exempts VA from federal procurement laws; (6) deems VA to be a qualified provider under federal and state health programs; and (7) allows VA to preempt state and local regulations relating to health insurance or plans. Appendix VI contains a more detailed summary of each proposal. H.R. 3118 would, like the other proposals, expand eligibility for comprehensive outpatient services to all veterans. It contains provisions, however, intended to make it easier for VA and the Congress to ration care. Specifically, the bill does the following: • Expressly states that the availability of health care services for veterans in the mandatory care category is limited by the amounts appropriated in advance by the Congress (S. 1345 also contains this provision). Although services for mandatory care category veterans are currently subject to the availability of resources, such services are frequently viewed as an entitlement. The language of H.R. 3118 and S. 1345 would make it clear that mandatory care category veterans do not have an entitlement to VA care. • Removes about 1.2 million veterans with noncompensable service-connected disabilities from the mandatory care category. H.R. 1385 would also shift such veterans from the mandatory to discretionary care category. By contrast, S. 1345 would move veterans with noncompensable service-connected disabilities to a higher priority within the mandatory care category than most low-income veterans with no service-connected disabilities. • Requires VA to establish an enrollment process as a means for managing demand within available resources. Veterans with disabilities rated at 30 percent or higher would have the highest priority for enrollment. A similar enrollment process would be optional under H.R. 1385. • Allows VA to determine the extent to which eyeglasses and hearing aids would be covered and limits the provision of prosthetics to veterans under VA care. Other than the American Legion proposal, which would require enrollment, the other bills would essentially remove all restrictions on provision of prosthetics on an outpatient basis, allowing veterans to come to VA for the sole purpose of having a prescription for eyeglasses or hearing aids filled. Each of the five proposals would make VA health care benefits easier to administer and understand by eliminating the obviate-the-need criterion for accessing outpatient care. The proposals generally do not, however, address the other provisions in current law that contribute to inappropriate use of VA health care resources and uneven access to health care services. 
Eliminating the obviate-the-need restriction on access to ambulatory care would simplify administration of health care benefits because VA physicians would no longer need to determine whether a patient would likely end up in the hospital if he or she was not treated. Eliminating the restriction would also promote greater equity by reducing the inconsistencies in eligibility decisions. Finally, eliminating the restriction would make benefits more understandable by essentially making veterans eligible for the full continuum of inpatient and outpatient care. Most of the proposals do not address the other major restrictions on VA eligibility and the ability of VA to sell noncovered services to veterans. Specifics follow: • Four of the proposals would retain the discretionary funding of VA health care. The American Legion proposal would create new funding mechanisms resulting in guaranteed benefits. • Under the four bills that would retain the discretionary funding of VA health care services, VA would continue to be unable to provide noncovered services directly to veterans. Because all veterans would become eligible for comprehensive outpatient services, there would, however, be fewer noncovered services. If adequate funds are not appropriated to allow VA facilities to serve all veterans seeking care, veterans turned away could not use their insurance or other resources to buy care from VA. • Current restrictions on provision of dental care would not be changed under any of the proposals. Restrictions on the provision of prenatal and maternity care would be removed only under the American Legion proposal. • S. 1345 and the American Legion proposal would remove the restriction on direct admission of veterans with no service-connected disabilities to community nursing homes. The other bills would not, however, remove this restriction. • Of the four proposals that would retain discretionary funding of VA health care, only H.R. 1385 specifically addresses the uneven availability of VA care. That bill would require VA to expand its capacity to provide outpatient care and allocate resources to its facilities in a way that would give veterans access to care that is reasonably similar regardless of where they live. The other bills do not address the uneven availability of VA health care services caused by resource limitations, VA’s limited provider network, and inconsistent VA rationing policies. These problems could, however, be addressed through the expanded contracting authority VA would be given under S. 1345 and H.R. 3118. The American Legion proposal contains specific provisions intended to make the availability of services more equitable. In addition, the American Legion proposal would force VA to address the uneven availability of services through the use of contracting because benefits would be guaranteed. The American Legion proposal to grant VA exemptions to most federal contracting and personnel laws and regulations and deem VA facilities to be qualified providers under both federal and state health programs could create significant risks. 
Specifically, the American Legion proposal would • deem a VA health plan or facility to be a qualified provider or carrier under a federally administered health care program, including Medicare, Medicaid, CHAMPUS, the Indian Health Service, and the Federal Employees Health Benefits Program; • authorize VA to plan and implement administrative reorganization, consolidation, elimination, or redistribution of offices, facilities, functions, or activities notwithstanding any other provision of law; • allow VA to enter into agreements with non-VA health care plans, insurers, health care providers, health care professionals, health care facilities, medical equipment suppliers, and related entities notwithstanding any law or regulation pertaining to competitive procedures, acquisition procedures or policies, source preferences or priorities, or bid protests; • preempt and supersede any state or local law or regulation that relates to health insurance or health plans to the extent such law or regulation is inconsistent with provisions of the VA law; and • require that a VA plan be considered a qualified provider or carrier under any state health care reform plan, law, or regulation. Reducing contracting requirements heightens the potential for fraud and abuse. VA has a long history of problems in administering contracts and sharing agreements. Because VA medical centers’ senior managers often receive part-time employment incomes from medical schools that receive millions of dollars through VA contracts, conflicts of interest could arise. The expanded contracting envisioned under the American Legion proposal would greatly increase the potential for conflicts of interest. In addition to exemptions from general contracting requirements, VA health plans would be exempt from specific requirements relating to risk contracting, such as those that apply to Medicare health maintenance organizations (HMO). Because VA has little experience in risk contracting, such exemptions might heighten the potential for fraud and abuse and could affect veterans’ access to needed medical services. VA facilities and health plans would also not be accountable to Medicare or other federal, state, or local health plans because of their deemed status. Other programs would have little recourse against VA health plans and facilities if they failed to enforce program safeguards. The five reform proposals would likely generate significant new demand for both outpatient and inpatient care. The increased demand could be heightened by the synergistic effects of other changes in the VA health care system to improve access and customer service and expand contracting. Under the four bills that would retain the discretionary nature of VA funding, over 26 million veterans would become eligible to receive services that currently are available primarily to the approximately 465,000 veterans with service-connected disabilities rated at 50 percent or higher. Similarly, under the American Legion proposal, about 9 million to 11 million veterans with service-connected disabilities would become entitled to free VA health care services. 
The American Legion proposal would make veterans with service-connected disabilities rated at 50 percent or higher entitled to any needed health care service included in the comprehensive and supplemental care packages; other veterans currently in the mandatory care group for hospital care, with the exception of those with noncompensable service-connected disabilities, would be entitled to the basic benefit package for free. Two additional groups of veterans would become entitled to the basic benefit package: veterans with catastrophic illnesses that render them destitute and veterans proven uninsurable in the private market. Increased demand would likely come from both increased use of VA services by current users unable to obtain all of the health care services they need from VA and from veterans seeking VA services for the first time. Even many veterans who rely on other health care coverage for most of their needs are likely to attempt to take advantage of added VA benefits such as prescription drugs, which are not typically covered under other health insurance. Medicare does not cover most outpatient prescription drugs, making VA an attractive alternative. Medicare-eligible veterans already make significant use of VA outpatient prescriptions even with the current eligibility limitations. Removing the restrictions on access to outpatient care would likely significantly increase demand for outpatient prescriptions. Another area where workload would likely increase dramatically is prosthetic devices, such as eyeglasses, contact lenses, and hearing aids. In addressing the restriction in current law on provision of crutches to veterans with broken legs, the five proposals would also eliminate the restriction on provision of other prosthetic devices, such as eyeglasses, contact lenses, and hearing aids. H.R. 3118 would, however, give the Secretary of Veterans Affairs the authority to restrict the provision of eyeglasses, contact lenses, and hearing aids. A 1992 VA eligibility reform task force developed estimates of the changes in demand likely to be generated through several alternative approaches to eligibility reform. VA’s task force estimated that if eligibility was reformed to make all current VA users (defined by the task force as veterans who had used VA in the past 2 years) eligible for the full continuum of VA health care services, then demand for outpatient care would increase by about 8.4 million visits annually. Similarly, expanding eligibility to all veterans would increase demand for outpatient care by about 32.8 million visits annually. The task force further estimated that demand for inpatient care would increase by 1.8 million patients treated, primarily because of demand generated by new users. The methods VA used to develop its projections were reviewed by the Congressional Budget Office (CBO). CBO found VA’s methods reasonable. If concurrent changes are made in the accessibility of VA health care services, in VA customer service, and in the extent to which veterans are allowed to use private providers under contract to VA, the effect of eligibility reforms on demand for VA care will likely be heightened. As it strives to make the transition from a hospital-based system to an ambulatory-care-based system, VA is attempting to bring ambulatory care closer to veterans’ homes. 
Because distance is one of the primary factors affecting veterans’ use of VA health care, actions to give veterans access to outpatient care closer to their homes, either through expansion of VA-operated clinics or through contracts with community providers, will likely increase demand for services. VA’s recent efforts to improve access by establishing separate access point clinics have attracted many new users. As we reported in April 1996, 12 new access points operate in a variety of locations, including three areas that are more than 100 miles from a VA facility; six areas between 50 and 100 miles from a VA facility; and three areas less than 50 miles from a VA facility (including 1 access point located 8 miles from a VA medical center in a large urban area). Four clinics are operated by VA; the remaining eight are operated via contracts with county and private clinics. The clinics have been successful in attracting veterans who have not used VA health care for several years as well as veterans who have never used VA health care. Forty percent of the 5,000 veterans enrolled at the 12 clinics had not received VA care in the past 3 years—1 clinic served only new users. Three proposals, S. 1345, H.R. 3118, and the American Legion proposal, would facilitate the expansion of access points by giving VA broader authority to contract with private sector providers. Such contracting might enable veterans to use the same physicians, clinics, and hospitals they use now but have VA rather than their private insurance or Medicare pay for the care. More importantly, they would no longer be required to meet the cost-sharing requirements of Medicare and private health insurance. Similarly, our reports over the past 5 years have identified continuing problems in VA customer service, including long waiting times, poor staff attitudes, and lack of such amenities as bedside telephones. As part of its response to the National Performance Review, VA has developed detailed plans to improve customer service that include installing bedside telephones, reducing waiting times, and training staff. These efforts are likely to help VA retain current users and will likely attract new users as VA’s reputation for customer service improves. These improvements also heighten the potential for increased demand to be generated through eligibility expansions. Expanding eligibility without providing adequate funds to pay for the expected increase in demand could significantly increase the number of veterans turned away from VA facilities. The four bills that would retain the discretionary funding of VA health care services would, however, provide little or no new revenue to offset the costs of increased demand. Expanding eligibility with a fixed or declining budget could give veterans false expectations of what services they can obtain from VA. In addition, many current users might be shut out of the VA system as veterans with higher priority increase their use of VA services. Both the President and the House of Representatives propose declining VA medical care budgets after fiscal year 1997, although these budgets would increase slightly after the turn of the century. (See table 5.2.) 
Because low-income veterans would be the third or fourth highest priority for care, and the law does not differentiate between low-income veterans with and without other health care coverage, reforms that provide a richer benefit package or increase the number of higher-priority veterans, or a combination of both, could reduce funds available to treat low-income, uninsured veterans. For example, under the new definition of health care in VA’s reform proposal (S. 1345), veterans in the top three priority categories would be in the mandatory care category for virtually any service other than nursing home care offered by VA. Under the VA proposal, about 1.8 million veterans currently eligible for limited outpatient care would be placed in the highest priority group for comprehensive care. The VA proposal would also place veterans with noncompensable service-connected disabilities (estimated to number about 1.2 million) above low-income veterans with no service-connected disabilities in the priority ranking of veterans in the mandatory care category for comprehensive outpatient services. Increased demand for routine health care services generated by these expansions could leave fewer resources available to pay for essential health care services for uninsured veterans. Only after the increased demand for nonservice-connected care generated by the 3 million veterans VA proposes to add to the mandatory care category for free comprehensive outpatient services was met could VA use its resources to provide essential hospital and other services to low-income, uninsured veterans without service-connected disabilities. With steady or declining budgets it could be increasingly difficult for VA to fulfill its safety net mission after meeting the increased demand for care generated through eligibility expansions. Although two bills (H.R. 3118 and H.R. 1385) propose establishing an enrollment process to help VA ration care if adequate funds are not appropriated to meet the increased demand likely to be generated by eligibility expansions, such a process would not protect VA’s safety net mission. Only after veterans in the top three priority categories were enrolled for comprehensive health care services could low-income veterans with no public or private health insurance enroll. One VA official told us that she did not think VA would enroll veterans below the highest priority category under H.R. 3118—veterans with service-connected disabilities rated at 30 percent or higher. As a result, veterans with no health care options might no longer be able to use VA health care services, including the hospital-related services they now receive. The four bills that retain discretionary funding of VA health care contain few new sources of revenues to offset the costs of eligibility expansions. The bills essentially assume that eligibility reform will not require new sources of revenue because the savings generated by shifting patients from inpatient to outpatient care would offset the costs of increased demand for outpatient care. Although we agree that savings can occur by shifting nonacute hospital admissions to outpatient settings, it is not clear that sufficient savings will occur to offset the potential increase in demand, especially if hospital beds emptied by shifts to outpatient care are filled with new users enticed to use VA by the eligibility expansion. As discussed in chapter 3, problems in VA’s methods for allocating resources to its facilities result in unequal access to VA health care services. 
Some facilities have adequate resources to treat veterans in both the mandatory and discretionary care categories while others are forced to ration care to veterans in the discretionary care category. Because most of the reform proposals do not address the uneven availability of VA services, the increased demand for care generated by eligibility expansions could heighten the problems VA already faces in trying to equitably distribute available resources. In the past, VA has been unable to provide the Congress the types of data on VA users that the Congress would need to make informed decisions on appropriate funding levels. The increased demands for care generated by the eligibility expansion proposals would put pressure on the Congress to appropriate the additional funds needed to avoid extensive rationing. A 1992 VA eligibility reform task force estimated that, without resource constraints, expanding eligibility for comprehensive VA care could increase VA spending by about $38 billion per year. Although VA and CBO arrived at strikingly different conclusions about the budgetary effects of the current reform proposals, we find CBO’s arguments about the potential costs of eligibility expansions more compelling because they incorporate the costs of meeting the potential increased demand predicted by VA’s 1992 eligibility reform task force. Historically, the Congress has fully funded both VA’s anticipated mandatory and discretionary workload. VA does not, however, provide the Congress data on the extent to which its resources are used to provide services to veterans in the mandatory and discretionary care categories for hospital and outpatient care in justifying its budget request. Considering the significant portion of VA resources currently used to provide services to veterans in the discretionary care category and the limited data VA provides the Congress on which to base funding decisions, it would be difficult for the Congress to appropriate funds for the care of only a portion of the veterans in the mandatory care category. As a result, the Congress has little basis for determining which portion of VA’s discretionary workload to fund. Our work shows that a significant portion of appropriated funds are used to serve veterans in the discretionary care category. We matched VA’s fiscal year 1990 treatment records against federal income tax records and found that about 15 percent of the veterans with no service-connected disabilities who used VA medical centers had incomes that placed them in the discretionary care category for both inpatient and outpatient care. In a May 10, 1996, letter to the Ranking Minority Member, Senate Committee on Veterans’ Affairs, VHA said that our estimate was either inaccurate or a very old estimate. According to VHA, only 4 percent of all veterans treated in 1994 were in the discretionary care category. Our estimate more accurately reflects the extent to which care is provided to veterans in the discretionary care category. VHA’s estimate is apparently based on unverified data provided by veterans when they apply for care; such data underestimate veterans’ incomes. We developed our estimate through a match of VA treatment records and income tax data. Our match showed that VA may have incorrectly placed as many as 109,230 veterans in the mandatory care category in 1990. Tax records for these veterans showed they had incomes that should have placed them in the discretionary care category. 
We estimated that VA could have billed as much as $27 million for care provided to these veterans. Although data from our study are now 6 years old, data from VA's own tax matches are yielding similar results. VA has now established its own income verification program. Its initial match found that about 18 percent of veterans with no service-connected conditions underreported their income. VA's matching agreement with the Internal Revenue Service indicates that VA expects its match of fiscal year 1996 treatment records against tax data to generate about $30.5 million in copayment collections for care provided to veterans who were incorrectly classified as mandatory care category veterans. Accordingly, our estimate—and VA's own data—show that about 15 percent of veterans with no service-connected disabilities using VA medical centers are in the discretionary care category for both inpatient and outpatient care. VHA recently advised us that it cannot provide the Congress with information on the extent to which VA services are provided to veterans in the mandatory and discretionary care categories for inpatient and outpatient care. VHA advised us that VA does not have accounting systems in place that would allow VA to differentiate between mandatory and discretionary care. VHA said that developing the accounting systems capable of differentiating between the categories would be extremely difficult and may not be cost-effective. Without such information, the Congress could find it difficult to set limits on VA appropriations. For example, it would not know whether the funds appropriated were adequate to meet the health care needs of all veterans with service-connected disabilities likely to seek VA care. In March 1992, the Acting Secretary of Veterans Affairs established a task force to develop alternative proposals for reforming eligibility for VA health care. The task force developed four proposals, which ranged from retaining current eligibility provisions to expanding eligibility to make all veterans eligible for a full continuum of services. Specifically, the four proposals were as follows: • Alternative 1: Limit the system to current users with no eligibility reform. • Alternative 2: Limit the system to current users with no eligibility reform, but implement managed care. • Alternative 3: Limit the system to current users, but expand eligibility to cover the full continuum of services without budgetary constraints. • Alternative 4: Expand eligibility to cover the full continuum of care for all veterans with no resource constraints. The task force also developed cost estimates for each alternative, assuming both no budget offsets and different combinations of veteran cost sharing and third-party recoveries from private insurers, Medicare, and Medicaid. The cost estimates ranged from $11.0 billion (alternative 3 with offsets) to $53.6 billion (alternative 4 with no offsets). (See table 5.3.) The task force noted that the cost increases would result more from the number of new users attracted to the VA health care system than from providing existing users the full continuum of care. Much of the cost increase, the task force noted, is for inpatient and outpatient care for new users. Although its eligibility reform task force had developed detailed estimates of the increased demand and costs of reform options, VA developed a new formula for estimating the effects of eligibility reform as part of its National Performance Review efforts.
Neither the original formula nor the recent revision to it adequately considered the increased demand for outpatient care likely to be generated by the proposed eligibility expansions. In addition, if VA had accurately applied its original formula and assumptions, it would have predicted an increase rather than a decrease in costs resulting from eligibility reform. VA made a number of other questionable assumptions in its calculations. VHA originally developed what appears to be a complex formula for estimating the cost effects of eligibility reform on the basis of the overall assumption that eligibility reform would enable VA to divert 20 percent of its hospital patients to outpatient care. The results from applying VHA's original formula were sensitive to a series of assumptions about such things as how many veterans are inappropriately admitted to VA hospitals because of restrictions on outpatient eligibility; how long, on average, those veterans stay in the hospital; how the average costs of treating patients remaining in VA hospitals after eligibility reform would be affected; and how eligibility reform would affect demand for outpatient care. The original formula could show either a decrease or an increase in costs depending on the assumptions made. VA did not include a key portion of the original formula—a 10-percent increase in the costs of treating those patients remaining in VA hospitals after eligibility reform—in its calculations and, therefore, reported that its analysis showed that eligibility reform would result in savings of about $268 million. Including that portion of the formula in the calculation results in the claimed savings becoming a cost increase of $51 million. VA subsequently revised its formula to delete the adjustment for the costs of treating those patients remaining in the hospital. As a result of this change, whatever assumptions are made about the percentage of care shifted and the average days of hospital care avoided, the formula will result in net savings. Even under the assumption that no inpatients are transferred to outpatient care, the formula shows that expanding eligibility would result in savings of about $39 million. What appeared on the surface to be a formula taking many factors into account is, in its current form, actually a simple calculation—eligibility reform will save 30 percent of the costs of inpatient care shifted to outpatient settings plus 10 percent of the total costs of fee-basis and travel reimbursements. The formula includes no adjustments for increased demand for outpatient care by veterans other than those shifted from inpatient to outpatient care. VA's revised formula for estimating the cost effects of eligibility reform is also independent of the provisions of eligibility reform. In other words, it would yield the same result when applied to any of the five reform proposals or if changes were made in the proposals to increase or reduce the number of veterans in the mandatory care category. Specifically, it would yield the same savings estimate regardless of • which benefits are included, • whether and to what extent veterans are required to contribute toward the costs of the expanded benefits, • the number of veterans placed in the mandatory and discretionary care categories, and • whether veterans' health benefits remain discretionary or are made an entitlement. Our specific concerns about VA's analysis are discussed in the following sections.
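Before turning to those concerns, the structure of the revised calculation can be made concrete with a minimal illustrative sketch in Python. The dollar inputs below are not VA workload data; they are rough magnitudes backed out from the $268 million and $39 million figures cited above, used only to show that the revised formula yields net savings under any assumption about the volume of care shifted and that restoring the omitted 10-percent adjustment reverses the result.

# Illustrative sketch of the cost-effect calculation as described in this report.
# The dollar figures are NOT VA data; they are rough magnitudes backed out from the
# $268 million claimed savings and the $39 million zero-shift result cited above.

def revised_formula(shifted_inpatient_cost, fee_basis_and_travel):
    # Revised formula: 30 percent of the costs of inpatient care shifted to outpatient
    # settings plus 10 percent of total fee-basis and travel costs -- always a net saving.
    return 0.30 * shifted_inpatient_cost + 0.10 * fee_basis_and_travel

def original_formula(shifted_inpatient_cost, fee_basis_and_travel, remaining_inpatient_cost):
    # Original formula: the same two terms, less the 10-percent increase in the cost of
    # treating the sicker patients who remain hospitalized after the shift.
    return revised_formula(shifted_inpatient_cost, fee_basis_and_travel) - 0.10 * remaining_inpatient_cost

# Assumed, illustrative magnitudes in millions of dollars (hypothetical inputs).
FEE_BASIS_AND_TRAVEL = 390    # 10 percent of this is roughly the $39 million zero-shift figure
SHIFTED_INPATIENT = 763       # 30 percent of this, plus $39 million, is roughly $268 million
REMAINING_INPATIENT = 3_190   # a 10-percent adjustment here turns the $268 million in savings into a cost

print(revised_formula(SHIFTED_INPATIENT, FEE_BASIS_AND_TRAVEL))    # about 268 -- claimed savings
print(revised_formula(0, FEE_BASIS_AND_TRAVEL))                    # about 39 -- savings even with nothing shifted
print(original_formula(SHIFTED_INPATIENT, FEE_BASIS_AND_TRAVEL, REMAINING_INPATIENT))  # about -51 -- a net cost increase

Because the first term scales with the volume and assumed length of stay of the admissions shifted to outpatient care, shorter assumed stays shrink the projected savings toward the $39 million floor; only the omitted adjustment for the remaining inpatients can turn the estimate negative.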
The formula assumes that no increase in demand for outpatient care would occur other than the demand generated by veterans shifted from inpatient to outpatient care. VA anticipates limited new demand because, according to headquarters officials, the administration proposal and H.R. 3118 were designed to give VA added flexibility by eliminating the obviate-the-need-for-hospitalization criterion, not to attract new users. VA's 1992 task force, however, estimated that most new demand would be generated through new users. Although headquarters officials anticipate few new users, some medical centers are already aggressively pursuing new users. As discussed earlier, about 40 percent of the veterans using VA access points had not used VA health care within the 3 years preceding their enrollment at the access point. The rationale for the 10-percent adjustment in the original formula was stated as follows: "[B]ecause less sick patients will be shifted to outpatient care, the remaining in-patients will be sicker and will have a 10% higher cost per admission . . . ." VHA, however, did not include the calculation in its savings estimates. VHA officials indicated that they would provide an explanation for why the adjustment was not included in the calculations, but in later discussions, the VHA economist who applied the formula declined to do so. Including this adjustment in the original formula would have turned VHA's projected savings of $268 million into a cost increase of $51 million. In a May 10, 1996, letter to the Ranking Minority Member of the Senate Committee on Veterans' Affairs, VHA said that GAO has consistently failed to recognize that no change takes place in the actual length of stay of the admissions not shifted. The patients with longer lengths of stay would remain as inpatients, but, according to VHA, neither their lengths of stay nor the costs of their care would increase. Research has consistently shown that moving the least costly patients out of hospitals increases the average cost of caring for the patients who remain even though there is no change in an individual patient's length of stay or cost of care. This phenomenon occurs because removing a group of patients with shorter lengths of stay and fewer care needs (none of the patients VA envisions shifting needed hospital-related care) raises a hospital's average length of stay and average cost per discharge. The following example illustrates this. A VA hospital treats two inpatients. Patient A has congestive heart failure and spends 7 days in the hospital. Treatment for this patient costs the hospital $10,000. Patient B is treated on an outpatient basis for a broken leg and then admitted to the hospital and provided a pair of crutches. Patient B stays in the hospital 1 day, and the cost of providing the care is $1,000. The average length of stay for the two patients was 4 days [(7 days + 1 day)/2 patients], and the average cost per day of care provided to the two patients was $1,375 [($10,000 + $1,000)/8 days of care]. If, following eligibility reform, patient B is provided crutches on an outpatient basis rather than being admitted to the hospital, the average length of stay and cost per day for the remaining patient(s) would increase. The hospital's average length of stay for the remaining patient would be 7 days (7 days/1 patient), and the average cost of treating the patient would be $1,429 a day ($10,000/7 days). Our review identified a number of other concerns about the reasonableness of VA's assumptions and calculations.
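The averaging arithmetic in the two-patient example can be checked directly. The short sketch below, in Python, uses only the lengths of stay and costs given in the example above.

# Figures taken from the two-patient example above.
patients = {
    "A": {"days": 7, "cost": 10_000},  # congestive heart failure, treated as an inpatient
    "B": {"days": 1, "cost": 1_000},   # admitted only to be issued a pair of crutches
}

def averages(pats):
    # Returns the hospital's average length of stay and average cost per day of care.
    total_days = sum(p["days"] for p in pats.values())
    total_cost = sum(p["cost"] for p in pats.values())
    return total_days / len(pats), total_cost / total_days

print(averages(patients))               # (4.0, 1375.0)  -- both patients hospitalized
print(averages({"A": patients["A"]}))   # (7.0, about 1429) -- after patient B is shifted to outpatient care

No individual patient's stay or cost changes; removing the short-stay, low-cost admission simply raises both averages for the care that remains in the hospital.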
The following paragraphs illustrate some of these concerns: Eligibility reform would enable VA to eliminate 20 percent of hospital admissions. One argument frequently used to promote the need for eligibility reform is that the obviate-the-need provision prevents VA from providing care in the most cost-effective setting. The presumed savings from removing the restrictions on access to ambulatory care services would then be used to offset the costs of expanded benefits. It is possible to achieve savings by shifting inappropriate inpatient services to other settings. But, as discussed earlier in this report, current eligibility provisions are not a major contributor to inappropriate admissions, nor do those provisions prevent VA from shifting a significant portion of inappropriate inpatient services to ambulatory care settings. Actions such as the preadmission certification program previously discussed could, however, generate savings that could be used to offset some of the costs of eligibility reform. VA applied the assumed 20-percent reduction in hospital admissions across all inpatient care, not just acute medical and surgical admissions. Although the studies VA cites as supporting its assumption that 20 percent of admissions could be shifted to outpatient care addressed only acute medical and surgical admissions, VA applied the 20-percent reduction to all inpatient care, including intermediate care and both acute and long-term psychiatric admissions. Such admissions account for over 25 percent of VA admissions. Applying the 20-percent reduction only to acute medical and surgical admissions would reduce projected savings. To maintain the total number of shifted admissions, VA would have to assume that more than 27 percent of acute medical and surgical admissions would be shifted under eligibility reform. VA assumed a 10-percent savings in fee-basis costs. The fee-basis program is used to pay for outpatient care veterans obtain from private sector providers when VA care is either not available or not convenient. Therefore, shifting veterans from VA hospital beds to outpatient settings should have no effect on current fee-basis use or costs. VA claims the savings in fee-basis costs will result from establishment of access points. As of April 1996, VA operated 12 access points on a pilot basis, and it is too early to tell whether they will affect fee-basis costs. Moreover, because access points are attracting new users, they may increase rather than decrease VA’s fee-basis costs. VA provides no other basis for estimating that eligibility reform will reduce fee-basis costs. VA assumes that travel reimbursements will decline by 10 percent as a result of eligibility reform. VA indicates that travel reimbursements will decline because of the creation of access points. While travel reimbursements might decline for those veterans living near an access point, any such reduction would not result from eligibility reform. Under VA’s assumption that veterans shifted from hospital care to outpatient care will receive an average of 17 additional outpatient visits, beneficiary travel could significantly increase rather than decrease. Rather than receiving travel reimbursement for one trip to the hospital, veterans qualifying for beneficiary travel would, under VA’s assumptions, receive travel reimbursement for 17 outpatient visits. 
Beneficiary travel includes (1) medically necessary ambulance travel; (2) medically necessary travel by wheelchair van, stretcher, or other means of special travel; (3) intrafacility travel; (4) travel for compensation and pension examinations; and (5) all other travel, which includes transportation by common carrier, bus, taxi, or privately owned vehicle. Beneficiary travel is provided at the discretion of the Secretary of Veterans Affairs to certain types of veterans: (1) veterans with service-connected disabilities rated at 30 percent or higher; (2) veterans with service-connected disabilities of 20 percent or less for travel related to treatment of their service-connected disabilities; (3) veterans receiving a VA pension; (4) veterans traveling in connection with an examination for compensation or pension, or both; and (5) veterans whose income is less than or equal to the maximum VA pension rate with aid and attendance. Most of the veterans eligible to receive beneficiary travel are already eligible to receive, on an outpatient basis, the care that qualifies them for travel reimbursement. For example, veterans with service-connected disabilities rated at 20 percent or less are in the mandatory care category for outpatient treatments related to their service-connected disabilities, the only care for which they are eligible to receive travel reimbursement. An average of 7 days of hospital care would be saved for every patient diverted to outpatient care. This assumption may not be sound given VA’s argument that the patients it would be diverting were admitted in order to provide them routine outpatient care. Because the inpatients VA expects to shift to outpatient care are essentially self-care patients with no acute medical need, VA would most likely be drawing from patients with the shortest lengths of stay—such as veterans admitted to provide them crutches or as a prerequisite to placement in a community nursing home. In fiscal year 1994, about 37 percent of VA medical and surgical patients had 1- to 3-day stays. It appears that it would be more reasonable to assume the average length of stay of patients to be diverted to outpatient care to be 1 to 3 days. In providing comments to the Ranking Minority Member, Senate Committee on Veterans’ Affairs, on our March 20, 1996, testimony, VHA said that it has a sound basis for its assumption that the average length of stay for shifted admissions would be 7 days. VHA said that the same research that initiated the estimates of VA nonacute days of hospital stays also provided VA information on the average length of stay of the totally nonacute admissions included in the study. According to VHA, the research showed the average length of stay to be a little longer, not less, than 7 days. VHA said that VA’s estimate of 7 days was also confirmed by preliminary current VA utilization management information. However, the average length of stay for the totally nonacute admissions in the study cited was 5.5 days, not over 7 days. In addition, the average length of VA acute medical/surgical admissions in fiscal year 1986—the year studied—was slightly over 16 days. By fiscal year 1995, however, the average length of stay of VA acute medical/surgical patients had declined to 11.6 days, a 28-percent decline. VA’s progress in reducing its average length of stay should also be considered in its assumptions. 
Finally, VA’s 1992 eligibility reform task force estimated that 1- and 2-day admissions would be shifted to outpatient settings following eligibility reform. Changing the assumption about average length of stay alters VA’s savings estimates. Substituting 3 days for VA’s assumption of a 7-day average length of stay would decrease VA’s projected savings of $268 million from eligibility reform to about $137 million. Last year, CBO estimated that the eligibility reform provisions contained in H.R. 3118 could increase the deficit by $3 billion or more annually if the Congress fully funds the increased demand for outpatient care that the eligibility expansions would likely generate. CBO’s estimates were based in part on tables contained in what at the time was VA’s newly released 1992 National Survey of Veterans. VA claimed that CBO misinterpreted one of the tables in the survey—which VA acknowledged was confusing—and raised concerns about CBO’s methodology and the accuracy of its projections. After reviewing VA’s concerns, CBO determined that any problem in interpreting the survey data did not affect its overall conclusion that the bill would not be budget neutral because the expanded eligibility would generate significant new demand. CBO assumed in conducting budgetary impact analyses that if demand increases under a discretionary program, funds will be appropriated to meet that demand. CBO estimated that the cost of providing outpatient care to the 10.5 million veterans who are currently eligible only for hospital-related outpatient care would far outweigh the savings from shifting inpatients to outpatient care. Further, CBO concluded that VA could incur significant costs under provisions that expand VA’s authority to provide prosthetic devices on an outpatient basis. Finally, CBO noted that the bill could increase costs by billions more if the induced demand for outpatient care resulted in corresponding increases in demand for hospital care. On July 15, 1996, CBO provided the House Veterans’ Affairs Committee a revised cost estimate for H.R. 3118, as reported by the Committee on May 8, 1996. Expanding eligibility for outpatient services would, CBO estimated, ultimately increase the cost of veterans’ medical care by $3 billion a year, assuming appropriation of the necessary amounts. CBO noted that the bill would affect direct spending and is subject to pay-as-you-go procedures under section 252 of the Balanced Budget and Emergency Deficit Control Act of 1985. In its July 18, 1996, report on H.R. 3118, the House Committee on Veterans’ Affairs disagreed with CBO’s cost estimate and estimated that the bill would be budget neutral for annual outlays in fiscal year 1996 and in each of the 5 following fiscal years. Eligibility reforms that would increase the number of veterans eligible for comprehensive outpatient services would likely generate new demand for outpatient care in three primary ways. First, current VA users are likely to seek previously noncovered services, such as preventative health care. Second, veterans who previously had not used VA because of its eligibility restrictions might begin using VA, particularly for those services not covered under their public or private health insurance. Third, some care might be shifted from inpatient to outpatient settings as patients admitted to circumvent eligibility restrictions are treated on an outpatient basis. 
VA’s 1992 Eligibility Reform Task Force conducted the most comprehensive study of the potential effects of eligibility reform, but it was not based on any of the current proposals. The current VA evaluation assesses only one of three ways eligibility reforms are likely to increase demand for outpatient care and is based on questionable assumptions. Among the issues that could be considered in future analyses are the following: • Increased demand could be lower than anticipated if VA facilities are currently circumventing the eligibility restrictions and providing noncovered services. As discussed in chapter 4, studies by VA's OIG found that VA outpatient clinics are providing significant numbers of noncovered services. This suggests that at least some current VA users may already receive comprehensive health care services from VA and, therefore, their use of VA services might not significantly increase under eligibility reforms that essentially make legal what is already happening in practice. • Expanded outpatient eligibility could result in a corresponding increase in demand for hospital care. After removing 1- and 2-day hospital stays (assumed to be shifted to outpatient care), VA's 1992 eligibility reform task force estimated that demand for inpatient care could nearly triple from 987,000 to about 2.8 million patients treated. • Eligibility reform that would authorize direct admission of veterans with nonservice-connected disabilities to contract community nursing homes could increase demand. As VA moves patients from costly inpatient care to less intensive settings, demand for nursing home care is likely to increase. The increased demand for nursing home care could, however, be offset to some degree by greater use of home care and residential care for patients requiring less intensive treatment. • Concurrent changes to make VA health care services more accessible to veterans could increase the potential effect of eligibility reform on outpatient and, indirectly, on inpatient workload. As it strives to make the transition from a hospital-based system to an ambulatory-care-based system, VA is attempting to bring ambulatory care closer to veterans' homes. Because distance is one of the primary factors affecting veterans' use of VA health care, actions to give veterans access to outpatient care closer to their homes, either through expansion of VA-operated clinics or through contracts with community providers, will likely increase demand for services even without eligibility reform. • Giving VA broader authority to contract for health care services with private hospitals and providers might give veterans greater freedom to choose health care providers closer to their homes. If this happens, increased demand for VA-supported health care is likely with or without eligibility reform. In addition to assessing the potential effects of eligibility and other reforms on demand for outpatient care, further assessments appear warranted to determine how reforms would affect the availability of specialized services. Provisions in the major VA eligibility reform proposals could have both positive and negative effects on VA's specialized services. Reforms that increase VA's efficiency could free resources that could be reprogrammed to increase specialty services. Unanticipated new demand for routine outpatient services could, however, outstrip VA's capacity to provide specialized services such as treatment of spinal cord injuries and substance abuse and care for blind veterans.
These issues are discussed in more detail in appendix IV. The cost of eligibility reform depends on a number of factors, including the benefits covered, the number of veterans offered the benefits, and the extent to which veterans are expected to pay for or contribute toward the cost of their health care benefits. The four proposals that would retain the discretionary funding of the VA health care system would essentially make all 26 million veterans eligible for comprehensive inpatient and outpatient care with little or no change in the system’s sources of revenue or in the methods used to establish VA’s appropriation. Our work identified five basic approaches that could be used, individually or in combination, to limit the budgetary impact of eligibility reforms. These are (1) setting limits on covered benefits, (2) limiting the number of veterans eligible for health care benefits, (3) generating increased revenues to pay for expanded benefits, (4) allowing VA to “reinvest” savings achieved through efficiency improvements in expanded benefits, and (5) providing a methodology in the law for setting a limit on VA’s medical care appropriation. The American Legion proposal, which as of July 1, 1996, had not been introduced, combines some of the above approaches that could be used to constrain the growth of the VA budget. It would make significant changes in VA funding streams and would turn VA health benefits into an entitlement for certain veterans. In addition, it would authorize VA to sell health benefit plans to other veterans and veterans’ dependents. The number of veterans to be covered under the entitlement—9 million to 11 million—would likely result in the proposal, in its current form, adding billions of dollars to the budget deficit. One way to control the increase in workload likely to result from eligibility expansions would be to develop one or more defined benefit packages patterned after public and private health insurance. This would narrow the range of services veterans could obtain from VA, allowing workload reductions from the eliminated services to offset the workload from increased demand for other services. Like private health insurers, VA could adjust the benefit package periodically on the basis of the availability of resources. Creating a defined benefit package could result in some veterans receiving a narrower range of services than they receive now, while others would receive additional benefits. This approach would essentially take some benefits away from veterans with the greatest service-connected disabilities and give additional benefits to veterans with lesser service-connected disabilities and to veterans with no service-connected disabilities. One option for addressing the redistribution of benefits issue is to establish separate benefit packages for each type of veterans. For example, veterans with disabilities rated at 50 percent or higher might continue to be eligible for any needed outpatient service, while a narrower package of outpatient benefits—perhaps excluding such items as eyeglasses, hearing aids, and prescription drugs—could be provided to higher-income veterans with no service-connected disabilities. Of the five major reform proposals, only the American Legion proposal would require VA to develop defined benefit packages. The American Legion proposal would require VA to establish both comprehensive and basic packages as well as a supplemental benefit package to cover specialized services. 
Another way to limit the budgetary effects of eligibility reform would be to pay for expanded eligibility for some veterans by restricting or eliminating eligibility for others. Under current law, all veterans are eligible for VA hospital and nursing home care and at least some outpatient care, but there is a complex set of priorities for care based on such factors as presence and degree of service-connected disability, period of military service, and income. In practical application, however, these priorities have little effect on the VA health care system. In the preparation of VA budget justifications, no distinction is made between veterans in the mandatory and discretionary care categories, let alone those in different priority groups within the mandatory and discretionary care categories. Among the approaches that could be used to limit the number of veterans taking advantage of expanded benefits is to limit VA eligibility to those veterans who lack other public or private insurance. Exceptions could be made for treatment of service-connected disabilities and for services not covered under veterans’ public or private insurance. Such an approach might help target available funds toward those veterans most in need. The Congress would face a difficult choice, however, in determining whether VA health care is (1) a benefit of military service that should be available regardless of alternate coverage or (2) a safety net available only to those veterans who lack health care options. Limiting eligibility of veterans with nonservice-connected disabilities to those whose income is below the current, or some new, means test limit would allow VA to retarget some resources currently used to provide services to higher-income veterans. Because about 15 percent of veterans with no service-connected disabilities who use VA health care services have incomes above the means test threshold, eliminating their eligibility would make additional resources available to offset increased demand for outpatient services by veterans in higher-priority categories. Such veterans could be allowed to purchase services from VA facilities on a space-available basis. Another way to limit the number of veterans eligible for expanded VA benefits is to restrict enrollment in VA health care to current VA users. This approach would limit the potential for nonusers to be enticed by improved benefits into becoming users and thereby reduce the costs of eligibility reforms. While current users might increase their use of VA health care in response to expanded benefits, most of these veterans already obtain those services they are unable to get from VA from private sector providers through their public and private insurance. As a result, this approach might enable those higher-income veterans with nonservice-connected disabilities already using VA services to shift all of their care to VA, while veterans who had not previously used VA services, but would like to start using them, would essentially be shut out of the system. This would include veterans with higher priorities for care, such as those with service-connected disabilities and low incomes. Similarly, restricting enrollment to current users might prevent VA from fulfilling its safety net mission by denying care to veterans whose economic circumstances change. The American Legion proposal is the only major proposal that would specifically limit the number of veterans, and the number of services, covered under VA’s medical care appropriation. 
The expanded benefits to be provided for veterans covered under the entitlement would, however, likely result in a significant increase in VA’s medical care appropriation. Several approaches could be used to generate additional revenues to pay for expanded benefits. These include increased cost sharing, authorizing recoveries from Medicare, and allowing VA to retain funds from third-party recoveries. Increased veteran cost sharing could help offset the costs of increased demand. For example, through contracting reform, VA might be authorized to sell veterans any available health care service not covered under their current veterans’ benefits without changing existing eligibility provisions. In other words, veterans could purchase, or use their private health insurance to purchase, additional health care services from VA. Such an approach would not eliminate the problems VA physicians have in interpreting the obviate-the-need provision, but it would lessen the importance of the decision. Physicians would no longer be forced to turn away veterans needing health care services. Instead, obviate-the-need decisions would determine who would pay for needed health care services—the government or the veteran. In addition, VA could issue regulations better interpreting the obviate-the-need provision. Because uninsured veterans may be unable to pay for many additional health care services, an exception could be made to help such veterans. A second approach for offsetting the costs of eligibility expansions through cost sharing could be to impose new cost-sharing requirements for existing services. For example, VA could be authorized to increase cost sharing for nursing home care—a discretionary benefit for all veterans—either through increased copayments or estate recoveries. Resulting funds could be used to help pay for benefit expansions. Similarly, copayments and deductibles for hospital and outpatient care could be adjusted to be more comparable with other public and private sector programs. Cost sharing could also be increased by redefining the mandatory care group. In other words, the income levels for inclusion in the mandatory care category could be lowered or copayments imposed for nonservice-connected care provided to veterans with service-connected disabilities of 0 to 20 percent. Proposals have been made in the past few years to authorize VA recoveries from Medicare either for all Medicare-eligible veterans or for those with higher incomes. For example, S. 1563 would allow VA to bill and retain recoveries from Medicare. Such proposals, though, appear to offer little promise for offsetting the costs of eligibility expansions. First, many of the services, such as hearing aids and prescription drugs, that Medicare-eligible veterans are likely to obtain from VA are not Medicare-covered services. Second, most such proposals would not require VA to offset the recoveries against its appropriation. As a result, they would not affect VA’s budget request and would increase overall federal expenditures for health care. Authorizing VA recoveries from Medicare would, however, further jeopardize the solvency of the Medicare trust fund. Such an action would essentially transfer funds between federal agencies while adding administrative costs. Allowing VA to bill and retain recoveries from Medicare would create incentives for VA facilities to shift their priorities toward providing care to veterans with Medicare coverage. 
VA facilities would essentially receive duplicate payments for care provided to higher-income Medicare beneficiaries unless recoveries were designated to fund services or programs for which VA did not receive an appropriation. For example, if VA was authorized to sell noncovered services to veterans and did not receive an appropriation for such services, then veterans should be allowed to use their Medicare benefits to help pay for the services just as they would use private health insurance to do so. The American Legion proposal would allow VA to recover and retain funds from Medicare. The proposal is not clear, however, on whether recoveries would be limited to those services not covered by VA’s medical care appropriation. American Legion officials agreed that the proposal is unclear, but said that they intended for VA to recover and retain funds from Medicare only for those veterans not covered under VA’s appropriation. Assuming that VA receives payments from Medicare at rates no higher than private sector providers, it would be appropriate for VA to retain recoveries under this scenario. One limitation to this approach, however, is that VA does not have accounting and information systems adequate to keep funds appropriated for patient care separate from funds generated through such third-party recoveries. Another limitation is that the American Legion proposal would deem VA facilities to be Medicare providers without requiring them to meet Medicare quality, utilization, and reporting requirements. Proposals, such as the ones contained in S. 1345 and H.R. 1385, that would allow VA to retain a portion of recoveries from private health insurance beyond what it needs to finance its recovery program would also represent a form of double payment. For the same reasons already discussed related to Medicare, unless recoveries from private insurance were earmarked for some purpose other than to pay for care covered by an appropriation, proposals to allow VA to retain a portion of its third-party recoveries would essentially result in duplicate payments. During the past 5 to 10 years, we, VA’s OIG, VHA, and others have identified numerous opportunities to improve the efficiency of the VA health care system and enhance revenues from sales of services to nonveterans and care provided to veterans. Savings from such initiatives could be “reinvested” in the VA health care system to help pay for eligibility expansions. VA has historically used savings from efficiency improvements to fund new programs. For example, VA is allowing its facilities to reinvest savings achieved by consolidating administrative and clinical management of nearby facilities into providing more clinical programs. Similarly, VA allows medical centers to use savings from efficiency improvements to fund access points. Through establishment of a preadmission certification requirement similar to those used by many private health insurers, VA could reduce nonacute admissions and days of care in VA hospitals and save hundreds of millions of dollars, assuming that facilities that are made excess by this are eliminated. While such inappropriate admissions and days of care to a large extent are unrelated to problems with VA eligibility provisions, savings resulting from administrative actions to address the problem could nonetheless be targeted to pay for expanded benefits. Actions to reinvest savings from efficiency improvements would, however, limit VA’s ability to contribute to deficit reduction. 
One way to control increases in VA appropriations in response to the increased demand likely to be generated through eligibility expansions would be to state in the law which portion of the demand would be funded. For example, the law would state which groups of veterans, such as those with service-connected disabilities rated at 30 percent or higher, would be covered by the appropriation. Other groups that might be included in the appropriation could be veterans already eligible for comprehensive care, such as former prisoners of war and veterans of World War I and the Mexican Border Period. To preserve VA’s safety net mission, funds might also be appropriated to cover veterans with no public or private health insurance who have incomes below the means test threshold or some other level. Such an approach would make it easier to limit appropriation increases, but it would result in significant rationing (see ch. 5) unless revenues from other sources were available to VA. This approach could be combined with other approaches that increase VA revenues to enable VA to provide any available health care service to any veteran. For example, VA might be authorized to sell available health care services to veterans in eligibility categories not covered by the appropriation. (Such an approach would be used under the American Legion’s eligibility reform proposal.) Because VA would have received no appropriation to serve these veterans, VA might be authorized to bill and retain recoveries from private health insurers, Medicare, Medicaid, and CHAMPUS. Veterans’ copayments and deductibles could be administered in accordance with the provisions of their insurance coverage. In effect, care for veterans not covered by the appropriation would be fully funded through insurance recoveries and veterans’ cost sharing. Such an approach would help control budgetary increases without forcing VA to ration care. All veterans would have the opportunity to choose VA as their health care provider. VA would, however, for those veterans not covered by the appropriation, be competing with private sector providers on a more level playing field. By limiting VA’s appropriation to specified categories of veterans, VA would be given an incentive to focus outreach efforts on those veterans with the highest priority and greatest need for VA services in order to maximize its appropriation. In addition, VA facilities would have a stronger incentive to provide cost-effective care because they would be more dependent on recoveries from public and private insurance to offset their operating costs. In becoming more dependent on outside payers, VA would be subject to many of the cost-containment pressures exerted on private sector hospitals over the past decade. For example, VA facilities could no longer count on appropriations to cover the costs of care denied by private insurers as not medically necessary or not requiring hospitalization. H.R. 3118, as passed by the House of Representatives, would set a limit on the growth of VA medical care appropriations. It would authorize medical care appropriations not to exceed $17,250,000,000 for fiscal year 1997 and $17,900,000,000 for fiscal year 1998. If funds are appropriated at the authorized levels, H.R. 3118 would allow essentially no increase in VA medical care spending for fiscal year 1997 over the levels contained in the administration’s 7-year balanced budget plan and the House budget resolution. For fiscal year 1998, H.R.
3118 would limit the increase in budget authority to $1.7 billion over the administration’s budget plan and $1.1 billion over the House budget resolution. The final House bill also contains provisions requiring VA to assess the effects of the bill on demand for VA health care. For example, VA would be required to include in a report to the Veterans’ Affairs committees detailed information on the numbers of and costs of providing care to veterans who had not received care from VA within the preceding 3 fiscal years. The VA health care system was neither designed nor intended to be the primary source of health care services for most veterans. It was initially established to meet the special care needs of veterans injured during wartime and those wartime veterans permanently incapacitated and incapable of earning a living. Although the system has evolved since that time, even today it focuses on meeting the comprehensive health care needs of only about 465,000 of the nation’s 26.4 million veterans. In other words, its primary mission is to meet the comprehensive health care needs of veterans with service-connected disabilities rated at 50 percent or more. For other veterans, the system is primarily intended to provide treatment for their service-connected disabilities and to serve as a safety net to provide health care to veterans with limited access to health care through other public and private programs. Because 9 out of 10 veterans now have other public or private health insurance that meets their basic health care needs, relatively few veterans today need to rely on VA as a safety net. Rather, most of them turn to private sector providers for all or most of their care, using VA either not at all or to supplement their use of private sector health care. Reforms of VA eligibility that would significantly expand veterans’ eligibility for comprehensive care in VA facilities would significantly alter VA’s health care mission and place VA in more direct competition with the private sector. To the extent veterans are given expanded benefits that are either free or have lower cost sharing than other public and private health insurance, the VA system will gain a competitive price advantage over its private sector competitors. Coupling eligibility reform with other changes, such as improved accessibility and customer service, could heighten the increased demand for VA services. Because most veterans currently use private sector providers, any increased demand generated by eligibility expansions would come largely at the expense of those providers. For most veterans, VA eligibility reform might provide an additional option for health care services or additional services not covered under their public or private insurance. For those veterans who do not have public or private health insurance, however, eligibility reform is more important. It could improve their access to comprehensive health care services, including preventive health care services. Historically, VA’s mandatory and discretionary care workload has been fully funded. The four eligibility reform bills that would retain the discretionary nature of funding of veterans’ health benefits could significantly increase demand for VA health care services by expanding all veterans’ benefits to include comprehensive inpatient and outpatient care services. 
This could result in increased VA appropriations to fully fund at least the demand generated by the 9 million to 11 million veterans added to the mandatory care category for comprehensive free outpatient services. If, however, VA’s anticipated increase in workload were not fully funded, VA would be faced with developing rationing policies to ensure that the funds appropriated are directed toward those veterans with the highest priorities for care. This would likely entail turning away many of the veterans currently using VA health care. Depending on the level of funding, those turned away could include low-income uninsured veterans. The funds needed to meet the increased demand for routine health care services could also jeopardize VA’s ability to provide specialized services, such as treatment of spinal cord injuries, not readily available through other providers. If eligibility reforms focus on strengthening VA’s safety net mission while preserving its ability to provide specialized services veterans may be unable to obtain through their public and private insurance, several approaches could be pursued that would also limit the extent to which the government competes with the private sector. These approaches generally involve placing limits on the number of veterans given expanded benefits, narrowing the range of benefits added, or increasing cost sharing to offset the costs of added benefits. The American Legion proposal contains a framework for accomplishing such changes, but is unrealistic in the number of veterans who would be covered under the entitlement it would create. A significant reduction in the number of veterans covered by the entitlement would be needed if the proposal were to be budget neutral. For example, the entitlement for low-income veterans might be restricted to those who lack other public or private insurance coverage, or the income cutoff might be lowered to reduce the number of veterans covered by the new entitlement. VA said that GAO’s report, in presenting a summation of many years of discussion concerning eligibility reform issues, shows how confusing, convoluted, and difficult even debate on the issues can be. VA noted that unanimous passage of H.R. 3118 by the House of Representatives and the recent reporting of a bill by the Senate Committee on Veterans’ Affairs support the need for change. See appendix VII for VA’s comments.
Pursuant to a congressional request, GAO reviewed various proposals that would simplify and expand eligibility for veterans' health care benefits. GAO found that: (1) the VA health care system was neither designed nor intended to be the primary source of health care services for most veterans; (2) as the eligibility requirements for VA health care have evolved over the years, they have become increasingly complex and a source of frustration to veterans who are often uncertain about which services they are eligible to receive and to VA physicians and administrators who find them difficult to administer; (3) unlike private health insurance, VA health care does not have a defined, uniform benefit package and cannot guarantee the availability of covered services, and VA is limited to providing only those services covered by an individual veteran's VA benefits; (4) a VA facility is not permitted to provide a noncovered service even if it has the resources to provide the service and the veteran is willing to pay for it; (5) GAO recognizes the need for eligibility reform, which, for most veterans, might result in additional health care services not covered under their public or private insurance; (6) for veterans who do not have other insurance to meet their health care needs, eligibility reform is more important and could result in access to comprehensive health care services, including preventive care; (7) four legislative proposals would simplify and expand veterans' eligibility for VA care, and a fifth proposal, by the American Legion, has not yet been introduced as a legislative proposal; (8) each of the proposals has significant implications regarding the number of eligible veterans as well as the cost of providing care; (9) four of the proposals, which retain the discretionary funding of VA health care, could more than double demand for VA outpatient services, forcing VA to either ration care or seek larger appropriations; (10) the American Legion proposal, which would create an entitlement, would likely require significantly increased appropriations; (11) other issues in the proposal include provisions to exempt VA from most federal contracting laws and to deem VA as a Medicare provider; (12) GAO's work suggests that eligibility reforms could be developed to both strengthen VA's safety net mission and preserve its ability to provide specialized services; (13) among the approaches that could be pursued are placing limits on the number of veterans given expanded benefits, narrowing the range of benefits added, or increasing cost sharing to offset the costs of added benefits; and (14) the American Legion proposal provides a good starting point for developing future reform proposals, but changes would be needed to reduce the number of veterans covered by the entitlement if significant increases in VA appropriations are to be avoided.
The existing NGA West campus consists of 15 facilities on 27 acres. Some of the buildings’ original construction dates back to the early 1800s, and 22 acres of the site are on the National Register of Historic Places, according to NGA documents. From 2009 through 2010, NGA contracted with an independent firm to assess the condition of the existing NGA West. This assessment gave the facilities an overall condition rating of “poor,” generally because of the insufficiency of the anti-terrorism and force protection measures, the average age of the structures, numerous code and accessibility shortfalls, and lack of seismic protection. As the NGA East headquarters consolidation neared completion in 2011, NGA focused its attention on the need to improve the operational capacity and security of its NGA West facilities and to modernize them. From approximately 2009 through 2012, NGA conducted a series of evaluations to inform its efforts to modernize NGA West. These analyses included a condition assessment of the existing facilities; an economic analysis of alternatives to evaluate the options of building a new facility (“build new”), fully renovating the existing facilities (“modernize”), or remaining in the current facilities with minimum essential repairs (“status quo”); and a qualitative analysis of non-cost considerations for the build new, modernize, and status quo options identified in the economic analysis. In 2012 NGA determined that a new NGA West would best meet the agency’s mission and resource needs. After examining the options of renovating its current facility, leasing, or building a new government-owned facility, NGA determined that building a new, government-owned facility was the preferred option. NGA officials stated that they are in the process of soliciting design-build proposals and that the final design-build contract is planned for award near the end of fiscal year 2018. Construction is expected to begin around the summer of 2019. We identified 22 best practices for an AOA process in October 2015, based on government and private-sector guidance and input from subject-matter experts. Many federal and industry guides have described approaches to analyses of alternatives; however, there was no single set of practices for the AOA process that was broadly recognized by both government and private-sector entities. We developed these best practices by (1) compiling and reviewing commonly mentioned AOA policies and guidance used by different government and private-sector entities and (2) incorporating experts’ comments on a draft set of practices to develop a final set of practices. The 22 best practices are grouped into four characteristics that describe a high-quality, reliable AOA process and can be used to evaluate whether an AOA process meets the characteristics of well-documented, comprehensive, unbiased, and credible. These practices can be applied to AOA processes for a broad range of capability areas, projects, and programs, including military construction projects and decision-making processes, in which an alternative must be selected from a set of possible options. In September 2016, we recommended that DOD develop guidance that requires the use of AOA best practices, including those practices we identified, when conducting AOA processes for certain types of military construction decisions. DOD did not concur with this recommendation and disagreed that these best practices apply to military construction decision-making processes.
We continue to believe that this recommendation is valid and that the principles demonstrated by the best practices—and the practices themselves—draw from related DOD and other practices. Our best practices also parallel those found in DOD and Air Force guidance on military construction and analysis for decision making. For example, according to an Air Force instruction governing the planning and programming of military construction projects, one of the required planning actions is to evaluate alternative solutions. According to a DOD directive pertaining to military construction, DOD must monitor the execution of its military construction program to ensure—among other things—that the program is accomplished in the most cost-effective way. This guidance for cost effectiveness aligns with our AOA best practice Develop Life-cycle Cost Estimates, which focuses on providing decision makers with the information they need to assess the cost-effectiveness of alternatives. Further, DOD Instruction 7041.03, on economic analysis for decision making, contains numerous cost estimating principles and procedures that align with those called for in our AOA best practices. As we reported in 2016, these policy documents and instructions align with the general intent of our best practices, and there are many similarities between our best practices and the department’s guidance. Additionally, in our previous work reviewing AOA processes for other national security facilities, agencies generally concurred with our recommendations to consider including our best practices in future guidance. For example, in 2014 we assessed three National Nuclear Security Administration construction projects and found that each project’s AOA partially met our best practices for conducting an AOA process. The Department of Energy agreed with our recommendation and has begun implementation. NGA launched its search for a new NGA West site in 2012 with a site location study conducted by an outside real estate firm, and it concluded the search with the issuance of a record of decision in June 2016. The site location study included a check for existing federal sites that could accommodate NGA West’s workforce and mission. This search resulted in a total of 186 sites being identified initially as possible options; the list was narrowed to 6 sites in the St. Louis metropolitan area for further study. During preliminary master planning, 4 of the 6 sites identified by the site location studies were determined to be suitable for further analysis to select the agency’s preferred alternative. Three of these sites are in the Missouri cities of Fenton, Mehlville, and St. Louis, and one is in St. Clair County, Illinois, near Scott Air Force Base. See figure 1 for the geographic distribution of the 4 sites. The subsequent site selection process included an environmental impact statement as required by the National Environmental Policy Act of 1969, analysis of NGA and the Corps of Engineers’ compliance with related DOD policies and other federal laws and requirements, preliminary master planning conducted by the Corps of Engineers, and a site evaluation process initiated by the NGA West Program Management Office (PMO), which is responsible for managing the NGA West project. To select the final site from the four alternatives, NGA initiated a site evaluation process in August 2015 that was led by the NGA West PMO.
This process involved various teams of experts analyzing the sites and evaluating them against defined criteria to identify the advantages and disadvantages of each site. Figure 2 provides an overview of the key elements and milestones of NGA’s site selection process, beginning with its earlier decision to build and concluding with its 2016 selection of the new site and issuance of its record of decision. According to NGA officials, there was no NGA or DOD policy or set of practices to comprehensively guide NGA’s site selection and AOA process. As a result, NGA relied on various DOD policies and instructions, other federal guidance, and industry standards. According to NGA and Corps of Engineers officials, NGA incorporated these practices into the site selection process to develop its AOA process and to ensure that it complied with federal requirements and industry practice. Additionally, NGA officials stated that our AOA best practices would have been helpful in planning the site selection process for NGA West, but the process began in 2012, and our 22 best practices were not published until October 2015. At the outset of the site evaluation process in August 2015, the PMO set forth broad sets of criteria to use in analyzing the four alternatives. These broad sets of criteria, referred to as “evaluation factors,” were mission, security, development and sustainability, schedule, cost, and environment. In addition, each site was assessed to ensure that it complied with key laws, regulations, and directives. The PMO divided the analysis of the evaluation factors among NGA and Corps of Engineers teams. The mission, security, and development and sustainability factors were assigned to two NGA evaluation teams of subject-matter experts—the “mission evaluation team” and “security, infrastructure and schedule evaluation team” (referred to here as security evaluation team). Each of these teams used its expertise to develop “sub-factors” to assess the advantages and disadvantages of each site. For example: The mission evaluation team developed 10 mission-related sub-factors based on the PMO guidance, NGA’s mission, and the strategic goals outlined in the 2015 NGA Strategy. The mission-related sub-factors focused largely on elements pertaining to NGA’s workforce and partnerships, such as the sites’ proximity to the existing workforce, their distance from NGA’s Arnold facility, and the likelihood that the sites would attract mission partners to create a “GEOINT Valley.” The security evaluation team developed 13 sub-factors related to security and infrastructure based on PMO guidance, DOD and other federal security and energy requirements, threat analysis, and other subject-matter expertise. Examples of the sub-factors include a 500-foot setback, perimeter security elements, sustainable characteristics, and infrastructure resilience. Separate evaluations of cost, schedule, and environmental considerations were conducted by the Corps of Engineers in its role as construction agent as part of the environmental impact analysis. In addition, NGA and the Corps of Engineers conducted an assessment of relevant laws and regulations. The PMO integrated these analyses and provided an additional layer of review to each of the evaluation factors, in some cases adjusting them. For instance, the PMO reorganized the 10 mission-related sub-factors for its review.
Specifically, while the mission evaluation team focused the sub-factors largely on NGA’s strategic goals related to workforce and partnerships, the PMO’s analysis reorganized those same mission-related sub-factors by how they supported all four of the 2015 strategic goals. The PMO listed under each of three “strategic effects”—“Create GEOINT Valley,” “Enhance Operations,” and “Attract and Sustain the Workforce”—all of the sub-factors related to that strategic effect. The PMO re-analyzed the sites by weighting those strategic effects and sub-factors that were linked to multiple strategic goals higher than those that were linked to fewer such goals. The PMO also adjusted some of the sub-factors used in the evaluation for security and for development and sustainability. The PMO’s additional analysis did not change the overall outcome of the evaluation of the sites; rather, it validated the mission evaluation team’s conclusion and generally supported all but one of the overall findings of the other analyses. At the conclusion of its analysis in December 2015, the PMO concluded that no one site had emerged as a clear preferred alternative. Because the master planning and site evaluation process concluded that all four sites—Fenton, Mehlville, St. Louis City, and St. Clair—could meet the overall requirements and that no single site held substantial advantage over another, the NGA Director requested additional analysis with refined criteria to more clearly differentiate among the final four sites. Consequently, in January 2016 NGA initiated a new site selection team—consisting of NGA and Corps of Engineers personnel who had previously been involved in various stages of the process—to reassess the sites against refined criteria and perspectives in order to determine the agency-preferred alternative. The site selection team carried forward five of the six original evaluation criteria from the start of the site evaluation process, as well as compliance with federal law, policy, and other regulations, to develop its six “refined criteria.” In reviewing these refined criteria, the site selection team determined that cost and schedule accounted for the greatest differences among the sites. The team therefore used the cost and schedule assessments completed as part of the PMO process to narrow the sites, concluding that because the Mehlville and Fenton sites were the most expensive and posed the greatest schedule risk they should be eliminated from final consideration. The site selection team then focused its analysis on the final two sites—St. Clair and St. Louis City—to inform the Director’s selection. The team used the following six refined criteria to evaluate the sites: (1) cost; (2) schedule; (3) security; (4) mission efficiency and expansion; (5) applicability of and compliance with federal policies, executive orders, and federal initiatives; and (6) environmental considerations. The team proposed narrowing the relevant sub-criteria to those that provided the greatest differentiation among the sites, according to officials on the team. For example, the security criterion was narrowed to include 3 of the original 13 security and infrastructure evaluation sub-factors, and the adjusted “mission efficiency and expansion” criterion included one of the mission evaluation team’s 10 original mission sub-factors.
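The evaluations described above and in the paragraph that follows rest on weighted comparisons of qualitative criteria: decision makers assign each criterion a weight reflecting its relative importance, evaluators score each site against each criterion, and the weighted scores are combined to compare the alternatives. The sketch below is a minimal illustration of that structure; the criteria, weights, and scores are hypothetical and do not represent NGA’s actual evaluation factors, weights, or results.

```python
# Hypothetical sketch of a weighted multi-criteria comparison of candidate sites.
# The criteria, weights, and scores are illustrative only, not NGA's actual values.

weights = {            # relative importance assigned by the decision maker
    "mission": 0.30,
    "security": 0.25,
    "cost": 0.20,
    "schedule": 0.15,
    "environment": 0.10,
}

# Each site is scored from 1 (least advantageous) to 5 (most advantageous).
scores = {
    "Site A": {"mission": 4, "security": 3, "cost": 2, "schedule": 3, "environment": 4},
    "Site B": {"mission": 3, "security": 4, "cost": 4, "schedule": 4, "environment": 3},
}

def weighted_total(site_scores):
    """Combine one site's criterion scores into a single weighted total."""
    return sum(weights[criterion] * site_scores[criterion] for criterion in weights)

for site, site_scores in scores.items():
    print(f"{site}: weighted score = {weighted_total(site_scores):.2f}")
```

In NGA’s process, as described below, the weighting of the final criteria was set by the NGA Director and the judgments about which site was more advantageous on each criterion were provided by the site selection team.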
Subsequently, the NGA Director provided additional direction, including adding a review of potential support from Scott Air Force Base, based on the support NGA East receives from being located at Ft. Belvoir, as well as ensuring that the security-related sub-factors carried over from prior analyses were consistently defined. Additionally, the director added 2 sub-criteria to the mission-related criterion to ensure that the site evaluation continued in terms of NGA’s strategic goals of partnership and people: 1. “Team GEOINT,” which refers to NGA’s current and future partnerships with academic, public, and private sector partners, and which parallels the “GEOINT Valley” element evaluated by the mission evaluation team and PMO. 2. “Team NGA,” which refers to the potential effects of workforce recruitment and retention that were also analyzed in the mission evaluation team and PMO analyses. According to NGA officials, while certain sub-factors or criteria were adjusted to provide further layers of analysis, the most important factors were always seen as mission and security. Additionally, NGA and Corps of Engineers officials said that adding these two sub-criteria expanded the analysis of the mission-related criteria to resemble the scope of the PMO’s analysis and incorporated the NGA Director’s mission and vision perspective. Finally, the NGA Director determined the weighting of the final criteria to evaluate the last two sites, the site selection team provided input on which of the sites was more advantageous with respect to each criterion, and in March 2016 this information was used to inform the NGA Director’s selection of the agency-preferred alternative. The weighting and final decisions are shown in table 1. The NGA Director selected the St. Louis City site as the agency-preferred alternative. It was identified in the publication of the final environmental impact statement and finalized with the issuance of the record of decision in June 2016. We compared NGA’s AOA process for selecting a site for the new NGA West campus to our AOA best practices and determined that NGA’s process substantially met three and partially met one characteristic of a high-quality, reliable AOA process. Although NGA’s AOA process substantially met most of the characteristics, we did find areas where the process could have been strengthened if NGA had more fully incorporated the AOA best practices. See table 2 for a summary of our assessment and appendix I for additional details on our scoring of NGA’s alignment with each of the 22 best practices. NGA’s AOA process for selecting a site for the new NGA West substantially met the well-documented characteristic of a high-quality, reliable AOA process, although we did find areas for improvement. For example, NGA’s AOA body of work demonstrated that the assumptions and constraints for each alternative for the site selection process were documented. NGA West’s Prospective Sites Master Plan included a set of overall assumptions that guided the preliminary planning process and provided specific assumptions and constraints for each alternative. Specifically, the plan identified various assumptions and constraints for the four final sites, such as calculations of the site boundaries, the estimated number of parking spaces, the square footage of the buildings and estimates of the building’s height, site utilities, and environmental constraints, among other things. 
In one instance, the plan documented the assumption that if the Mehlville site were to be used, all utilities would need to be removed from within the property line and existing buildings, parking lots, and roads would have to be demolished. In another example, the Corps of Engineers conducted a schedule and negotiation risk assessment and recorded scores for each site and some mitigation strategies for specific issues. The assessment documented risks to meeting the site acquisition schedule with the St. Louis site because, among other reasons, the site needed environmental cleanup that was expected to take six months. The Fenton site had high negotiation risks, in part because the asking price of the site was significantly higher than the appraised value. However, NGA did not provide information on other risks, such as technical feasibility and resource risks, and did not rank the risks or provide over-arching mitigation strategies for each alternative. According to the best practice, not documenting the risks and related mitigation strategies for each alternative prevents decision makers from performing a meaningful trade-off analysis, which is necessary to select an alternative to be recommended. NGA’s AOA process for selecting a site for the new NGA West substantially met the comprehensiveness characteristic of a high-quality AOA process; although the process had strengths, we identified some limitations. NGA’s AOA process considered a diverse range of alternatives to meet the mission need and conducted market surveillance and market research to develop as many alternative solutions as possible. According to our best practices, an AOA process that encompasses numerous and diverse alternatives ensures that the study provides a broad view of the issue and guards against bias in the AOA process. Specifically, NGA’s AOA process included a site location study that provided a summary of the thorough analysis that NGA conducted to identify potential site locations for the new NGA West campus. The study relied on information from local real estate market databases and input from the local real estate community, multiple municipal officials and organizations, and the public to identify an original set of 186 possible sites and narrow that list to a final 6 for further analysis. However, although the NGA body of work provides evidence that the Corps of Engineers developed initial cost estimates that compared each alternative using different cost categories, NGA’s AOA process did not include life-cycle cost estimates for the final 4 sites. NGA officials chose not to analyze total construction and other facility sustainment costs, because they assumed that since the sites were in the same geographic area, construction and operating costs would be similar. However, the estimates did not include sufficient details regarding all of the costs examined—specifically, how the cost estimates were developed for information technology trunk line costs. NGA stated that this best practice had limited application to its AOA process because it had determined that variation in the life-cycle cost estimates based on the location of the four sites—all in the St. Louis metropolitan area—was negligible. NGA officials also stated that the lack of final project design details constrained their ability to develop full life-cycle cost estimates.
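For context, a full life-cycle cost estimate of the kind the best practice calls for combines up-front design and construction costs with recurring operations and sustainment costs over the study period, discounted so that the alternatives can be compared on a common present-value basis. The minimal sketch below illustrates the idea; the dollar amounts, study horizon, and discount rate are assumptions chosen for illustration, not NGA or Corps of Engineers estimates.

```python
# Minimal sketch of a discounted life-cycle cost comparison for two alternatives.
# All figures are hypothetical; they are not NGA or Corps of Engineers estimates.

def life_cycle_cost(construction, annual_om, years, discount_rate):
    """Present value of construction plus annual operations-and-sustainment costs."""
    pv_om = sum(annual_om / (1 + discount_rate) ** t for t in range(1, years + 1))
    return construction + pv_om

alternatives = {
    "Alternative 1": life_cycle_cost(construction=900e6, annual_om=40e6, years=30, discount_rate=0.03),
    "Alternative 2": life_cycle_cost(construction=850e6, annual_om=48e6, years=30, discount_rate=0.03),
}

for name, cost in alternatives.items():
    print(f"{name}: life-cycle cost = ${cost / 1e9:.2f} billion (present value)")
```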
However, without estimates for full life-cycle costs, decision makers may not have a complete picture of the costs for each alternative and may have difficulty comparing the alternatives, because comparisons may not be based on accurate and complete information. NGA and Corps of Engineers officials said that they are in the process of developing full life-cycle cost estimates for the construction and design of the new NGA West campus for the agency-preferred alternative. NGA’s AOA process for selecting a site for the new NGA West substantially met the characteristic of an unbiased AOA process, although we did identify some limitations. NGA’s AOA body of work demonstrated that NGA had developed functional requirements based on the mission need without a predetermined solution and that the requirements were realistic, organized, and clear. For example, NGA’s AOA body of work provided facilities requirements and specifically listed 11 site location and campus requirements that were tied to mission needs, including requiring a facility that will support future changes to mission requirements and allow for continuity of NGA operations. NGA’s AOA body of work also identified physical requirements for the new NGA West campus, for example, that the new facility must have at least 800,000 gross square feet and a 500-foot security buffer, and it must allow for a possible expansion in the future. However, although the NGA AOA body of work demonstrated a thorough comparison of the alternatives throughout the site evaluation process, it did not provide evidence that net present value was used to compare or differentiate among the alternatives, nor did it provide a rationale for why net present value could not be used. NGA officials acknowledged that they did not compare the alternatives using net present value. They stated that they had normalized some of the costs but that it was not necessary to normalize all costs, because the estimates were all done during the same time period. According to our best practice, if net present value is not used to compare the alternatives, then the AOA team should document the reason why and describe the other methods applied. Additionally, comparing items that have been discounted or normalized with net present value allows for time series comparisons, since alternatives may have different life cycles or different costs and benefits. NGA’s AOA process for selecting the site for the new NGA West campus partially met the credible characteristic for an agency’s AOA process. Although NGA’s AOA process had strengths, it also had limitations, such as lacking important information related to cost risks and to sensitivity analyses for both the costs and the benefits identified. NGA’s AOA body of work described the alternatives in sufficient detail to allow for robust analysis. Specifically, it provided descriptions of each of the alternatives at varying levels of detail. For example, the first site location study provided descriptions of the top 6 potential sites, including information on size, the sites’ strengths and weaknesses, and any acquisition or development issues. The NGA AOA body of work also provided evidence that site master planning was conducted to provide additional details on the physical and environmental attributes of each site, as well as constraints and benefits.
For example, the NGA West Prospective Sites Master Plan described the Mehlville site as having landscape features such as mature trees, waterways, areas of steep topography, options for public transportation, bike-friendly streets, and existing utility infrastructure. However, NGA did not fully include key information on either the risk or the uncertainty related to cost estimates or the sensitivity to the costs and benefits identified as part of its AOA process. For example, the NGA body of work did not include a confidence interval or range for the cost estimates for each viable alternative in order to document the level of risk associated with the estimate. NGA’s AOA body of work documented the estimated alternatives’ initial costs and included contingency costs across all four alternatives. Corps of Engineers officials told us that they had developed a 30 percent design and 5 percent construction contingency cost factor across the four alternatives to account for cost risks in those areas. However, the NGA AOA body of work did not provide evidence of a confidence interval or range for the costs provided. NGA acknowledged that while its AOA body of work did not identify the risk associated with specific cost elements for each alternative, it did provide a “level of confidence,” because the methodology behind the cost components in the estimate implied a high level of confidence. Although we agree that NGA did provide a contingency factor for the site development costs and provided cost estimates for all four viable alternatives, NGA did not develop a confidence interval or risk range for those estimates. NGA’s cost estimates were used as a determining factor in the final decision among the four alternatives. However, without understanding the cost risk and the uncertainty of those costs as outlined in the best practice, a decision maker might be unable to make an informed decision. Additionally, the NGA AOA body of work did not demonstrate that NGA had conducted a sensitivity analysis for the cost and benefit and effectiveness estimates for each alternative in order to examine how changes in key assumptions would affect the cost and benefit estimates. The NGA AOA body of work documented that some sensitivity analysis or level of risk was analyzed as part of the schedule analysis, and NGA officials stated that the project considered how different values and variables affect each other during the criteria and evaluation analysis. However, the NGA AOA body of work did not document the sensitivity of cost and benefit estimates to changes in key assumptions for each alternative, and a sensitivity analysis was not applied to the initial cost estimates or benefit assumptions that were used to make the final site selection. NGA officials stated that this best practice has limited application to its AOA process, because the lack of variables between sites constrained their ability to develop full life-cycle cost estimates and complete a sensitivity analysis. NGA officials stated that their sensitivity analysis was limited to those considerations that were measurable and sensitive to change—predominantly schedule risk associated with land acquisition activities. Further, NGA officials explained that because all the site alternatives were located within the St. Louis metropolitan area, any variations in conditions would have equal effect. 
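For context, the two analyses at issue here are standard techniques. A cost-risk analysis expresses the uncertainty around a point estimate as a confidence range, often by simulating the cost elements over their plausible ranges, and a sensitivity analysis is a what-if test of how the totals move when a key assumption changes. The sketch below illustrates both; the cost elements, uncertainty ranges, and results are hypothetical and are not NGA’s figures.

```python
# Illustrative cost-risk and sensitivity sketch using hypothetical inputs.
# The cost elements, ranges, and results below are not NGA's actual figures.

import random

random.seed(0)

# Hypothetical cost elements as (point estimate, low, high), in millions of dollars.
elements = {
    "site acquisition":  (100, 80, 130),
    "construction":      (700, 630, 840),
    "IT infrastructure": (120, 90, 180),
}

def simulate_total():
    """Draw each element from a triangular distribution and sum the draws."""
    return sum(random.triangular(low, high, mode) for mode, low, high in elements.values())

totals = sorted(simulate_total() for _ in range(10_000))
low, high = totals[int(0.10 * len(totals))], totals[int(0.90 * len(totals))]
print(f"80 percent confidence range for total cost: ${low:.0f}M to ${high:.0f}M")

# One-way sensitivity: effect on the point estimate if the IT assumption runs 25 percent high.
base_total = sum(mode for mode, _, _ in elements.values())
what_if = base_total + 0.25 * elements["IT infrastructure"][0]
print(f"Point estimate ${base_total:.0f}M rises to ${what_if:.0f}M if IT costs run 25 percent high")
```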
Although we agree that NGA did conduct a sensitivity analysis for schedule risks, NGA neither documented how the schedule sensitivity affected its cost or benefit estimates nor performed a sensitivity analysis for the various assumptions used to develop the cost or benefit estimates for each alternative. According to the DOD instruction on economic analysis, a sensitivity analysis is a “what-if” exercise that should be performed to test how changes in cost and benefit variables affect the analysis’s conclusions and assumptions, and it should always be performed when the results of the economic analysis do not clearly favor any one alternative. According to our best practice, not conducting a sensitivity analysis to identify the uncertainties associated with different assumptions increases the chances that an AOA team will recommend an alternative without understanding the full effects of costs, which could lead to cost and schedule overruns. Although NGA’s AOA process did not reflect all of the characteristics of a high-quality process, we are not making recommendations in this report, in part because NGA plans to conduct additional cost analysis and in part because we made an applicable recommendation to DOD in 2016. Specifically, although NGA’s AOA process is complete, NGA and Corps of Engineers officials said that they are developing full life-cycle cost estimates for the construction and design of the new NGA West campus and that these estimates will include many elements from our best practices. Further, we continue to believe that our September 2016 recommendation that DOD develop guidance requiring the use of AOA best practices for certain military construction decisions could help ensure that future AOA processes conducted by DOD agencies like NGA are reliable and that agencies identify a preferred alternative that best meets mission needs. While DOD did not concur with our recommendation, as we reported in 2016, our best practices are based on longstanding, fundamental tenets of sound decision making and economic analysis. Additionally, our best practices align with many DOD and military policies, directives, and other guidance pertaining to military construction. Further, during this review NGA officials stated that DOD did not have a set of best practices for conducting an AOA to help NGA make decisions regarding its military construction project, and that our AOA best practices would have been helpful had they been published prior to the start of NGA’s site selection process in 2012. Accordingly, we continue to believe our prior recommendation is relevant and that unless DOD has guidance directing that certain military construction AOA processes be conducted in accordance with identified best practices, it may not be providing Congress with complete information to inform its oversight of DOD’s future military construction decisions. We provided a draft of this report to NGA for review and comment. NGA’s comments are reprinted in their entirety in appendix II. In comments on our report, NGA stated that it valued our assessment of its AOA process, which we judged to have substantially met the characteristics of a well-documented, comprehensive, and unbiased process, and would use our findings to continue to refine and improve its corporate decision making and processes. NGA raised a concern about our assessment that its AOA process used to select the site for its new NGA West project partially met the best practices that demonstrate a credible process.
NGA’s specific concern was that we concluded that the AOA process did not fully include information on risks and sensitivities to cost estimates. In its letter, NGA stated that its analysis demonstrated that cost was a factor but not the most important factor. Moreover, NGA stated that cost elements and details ranged from well-defined costs, such as real estate costs, to estimates based on analogy, such as an information technology trunk line. NGA additionally stated that, due to the conceptual nature of the design of the facility at that time, more detailed cost analysis was judged to provide no discrimination among alternatives and was thus purposely excluded from the initial cost estimates that were used in the AOA process. While NGA may have concluded that the project’s cost was not the most important factor, the agency estimates that construction of the campus will cost about $945 million, and NGA used the cost estimate as a determining factor to select from the four final alternatives. Moreover, our assessment of the credibility characteristic is based only in part on NGA’s initial cost estimates and did not penalize NGA for excluding additional cost estimates. Rather, we assessed that NGA’s AOA body of work did not provide evidence of documenting the sensitivity of the cost-benefit or effectiveness estimates to changes in key assumptions for alternatives, nor was a sensitivity or risk analysis applied to the initial cost estimates used to make the final site selection. NGA also stated in its letter that our AOA best practices are not applicable in all circumstances, and pointed out that DOD did not concur with a recommendation in a prior report to develop AOA guidance requiring departmental components to use AOA practices, including the best practices we identified, for certain future military construction projects. Our prior report suggested that such guidance might only apply to certain military construction projects as determined by DOD. In addition, while DOD’s existing relevant guidance does not require use of our AOA best practices, the guidance does not prohibit it either. Further, as discussed in our report, NGA officials told us the AOA best practices are helpful to such processes, and lacking such DOD guidance NGA had to draw on expertise, practices, and procedures from a variety of sources to conduct its AOA for the new NGA West site. Finally, in its letter NGA proposed that two documents—the environmental impact statement and record of decision—fulfill the best practice to document the AOA process in a single document. Specifically, NGA stated that within the context of the environmental impact statement process, the record of decision is the authoritative capstone document of the process, and that together the two documents include discussions of the decision-making and factors considered by the director in selecting the agency-preferred alternative. These two documents were prepared to fulfill requirements of the National Environmental Policy Act of 1969 in order to determine the environmental impacts of the project, as discussed earlier in our report. While we recognize that the record of decision and the environmental impact statement are significant documents that include summaries of aspects of NGA’s AOA process, they are, as NGA indicated, two documents within an expansive AOA body of work.
Further, many of the elements of NGA’s AOA process are diffused throughout these and several other reports and documents—that were specifically identified by NGA as the key documentation of its AOA process—rather than clearly delineated in a single document as prescribed by the best practice (see appendix I). As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Air Force; the Secretary of the Army; the Under Secretary of Defense for Acquisitions, Technology and Logistics; the Under Secretary of Defense for Intelligence; and the Director, National Geospatial-Intelligence Agency. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. In our earlier discussion of the extent to which NGA’s AOA process met best practices for such processes, we presented our analysis for specific best practices. These 22 best practices and their definitions were originally published and are listed in GAO-16-22. Table 3 summarizes our analysis of NGA’s AOA process for selecting the site for the new NGA West and our ratings of that process against all 22 best practices. In addition to the contact named above, Brian Mazanec, Assistant Director; Jim Ashley; Tracy Barnes; Chris Businsky; George Depaoli; Richard Johnson; Joanne Landesman; Jennifer Leotta; Jamilah Moon; Joseph Thompson; and Sally Williamson made key contributions to this report. Amphibious Combat Vehicle Acquisition: Cost Estimate Meets Best Practices, but Concurrency between Testing and Production Increases Risk. GAO-17-402. Washington, D.C.: Apr. 18, 2017. Joint Intelligence Analysis Complex: DOD Needs to Fully Incorporate Best Practices into Future Cost Estimates. GAO-17-29. Washington, D.C.: Nov. 3, 2016. Joint Intelligence Analysis Complex: DOD Partially Used Best Practices for Analyzing Alternatives and Should Do So Fully for Future Military Construction Decisions. GAO-16-853. Washington, D.C.: Sept. 30, 2016. Patriot Modernization: Oversight Mechanism Needed to Track Progress and Provide Accountability. GAO-16-488. Washington, D.C.: Aug. 25, 2016. Amphibious Combat Vehicle: Some Acquisition Activities Demonstrate Best Practices; Attainment of Amphibious Capability to Be Determined. GAO-16-22. Washington, D.C.: Oct. 28, 2015. DOE and NNSA Project Management: Analysis of Alternatives Could Be Improved by Incorporating Best Practices. GAO-15-37. Washington, D.C.: Dec. 11, 2014. Military Bases: DOD Has Processes to Comply with Statutory Requirements for Closing or Realigning Installations. GAO-13-645. June 27, 2013. Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: Mar. 7, 2013. Military Base Realignments and Closures: The National Geospatial- Intelligence Agency’s Technology Center Construction Project. GAO-12-770R. Washington, D.C.: June 29, 2012. Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but It Is Not Fully Reporting Some Additional Costs. 
GAO-10-725R. Washington, D.C.: June 24, 2010. GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: Mar. 2, 2009.
NGA, a defense agency and element of the Intelligence Community, provides geospatial intelligence to military and intelligence operations to support national security priorities. It currently operates out of two primary facilities—its headquarters in Springfield, Virginia, and its NGA West campus in St. Louis, Missouri. In 2012, NGA determined that a new location for its NGA West facilities was necessary to meet security standards and better support its national security mission. NGA estimates that the construction of the new campus will cost about $945 million.

GAO was asked to evaluate the AOA process that NGA used to select the site for its new campus. This report (1) describes the process NGA used, including the key factors it considered and (2) evaluates the extent to which the AOA process met best practices for such analyses. GAO visited the existing NGA West campus and the final four alternative sites that were considered, analyzed and assessed reports and information that document NGA's AOA process for selecting the site, and interviewed relevant officials about the process. GAO evaluated NGA's process against best practices identified by GAO as characteristics of a high-quality, reliable AOA process.

In 2012, the National Geospatial-Intelligence Agency (NGA) began an analysis of alternatives (AOA) process to evaluate potential sites for its new NGA Campus West (NGA West) using key evaluation factors related to mission, security, development and sustainability, schedule, cost, and environment. NGA's process included levels of analysis and considerations to select the agency-preferred alternative from an original list of 186 potential sites, subsequently narrowed to the final four alternative sites (see figure). The process culminated in the June 2016 selection of the agency-preferred alternative, the St. Louis City site.

NGA's process for selecting a site for the new NGA West campus substantially met three of the four characteristics of a high-quality, reliable AOA process. Specifically, NGA's process substantially met the characteristics that demonstrate a well-documented, comprehensive, and unbiased AOA process. It partially met the credibility characteristic, in part because it did not fully include information on the risks and sensitivities to cost estimates. NGA officials stated that there was no comprehensive DOD guidance to inform its AOA process, and although NGA's AOA process is complete, NGA plans to develop full cost estimates as part of construction, planning, and design.

In September 2016, GAO recommended that DOD develop guidance for the use of AOA best practices for certain types of military construction decisions. While DOD did not concur and the recommendation remains open, GAO continues to believe such guidance would help ensure that future AOA processes are reliable and would result in decisions that best meet mission needs. GAO is not making recommendations to NGA. In commenting on a draft of this report, NGA expressed concerns about GAO's assessment of NGA's estimates of cost risks and sensitivities. GAO continues to believe its assessment accurately reflects NGA's process.
Sacramento, California, was established at the confluence of the American and Sacramento Rivers shortly after gold was discovered upstream at Sutter’s Mill in 1848. Frequent flooding has been a problem in Sacramento since its founding. To help reduce flooding, over time a complex system of levees, dams, and other related facilities was built. Levees line both sides of the American River from where it meets the Sacramento River upstream for a distance of about 17 miles, and the Natomas Basin is completely surrounded by levees. In addition, the Folsom Dam, completed in 1956 and located upstream from Sacramento on the American River, uses a portion of its storage capacity for flood protection. The Sacramento area flood protection system was designed on the basis of records of rainfall during the first half of the 20th century. However, since 1950, the American River watershed has experienced five floods that were larger than any recorded in the pre-1950 period, although downtown Sacramento was not flooded during any of these events. Nonetheless, the Sacramento area has less protection than the designers of the original flood protection system realized. In fact, much of urbanized Sacramento is located in areas where a flood has a 1 percent chance of occurring every year—known as the 100-year floodplain. Because of this limited level of protection, the Corps estimates that a very large flood—one with a 0.25 percent chance of occurring every year—would flood the 400-year floodplain, resulting in residential, commercial, industrial, and public property damage of about $15.5 billion as well as loss of lives. According to the Corps, about 305,000 people live in more than 100,000 residential properties located within the American River floodplain. A major flood also would cause toxic and hazardous waste contamination; disrupt the city’s downtown business and government areas, including the state capitol; and interfere with the transportation system, including two interstate highways. A major flood in 1986, the largest one ever recorded on the American and Sacramento Rivers, severely strained the levee system protecting Sacramento. Although the levees held and downtown Sacramento was not flooded, the event spurred efforts by federal, state, and local entities to identify measures to increase Sacramento’s level of flood protection. In 1987, the Corps began work on a comprehensive study of flood protection alternatives for Sacramento. In its 1991 report, the Corps’ Sacramento district office considered six flood protection options and recommended building a new dam on the American River at Auburn, California, but Congress did not approve the dam’s construction. Subsequently, in response to the Department of Defense Appropriations Act of 1993, the Corps reevaluated three alternatives for increasing flood protection. In its 1996 report, the Corps examined (1) building a new dam near Auburn, California; (2) modifying the existing Folsom Dam; and (3) increasing the amount of water released from Folsom Dam during a flood, coupled with other flood protection measures. The Corps again recommended building a dam at Auburn, but, again, Congress did not approve its construction.
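To make the flood-frequency terms used above concrete, the short sketch below converts an annual chance of flooding into the corresponding return period (the basis for the 100-year and 400-year floodplain labels) and shows the cumulative chance of at least one such flood over a longer horizon. It is illustrative arithmetic only, assumes flood occurrences are independent from year to year, and is not drawn from the Corps’ hydrologic analyses.

```python
# Relationship between an annual chance of flooding and the corresponding
# return period, plus the cumulative chance of at least one such flood
# over a longer horizon. Illustrative arithmetic only.

def return_period(annual_probability: float) -> float:
    """A flood with the given annual chance of occurring has a return
    period equal to the reciprocal of that probability."""
    return 1.0 / annual_probability

def chance_of_at_least_one(annual_probability: float, years: int) -> float:
    """Probability of one or more such floods over the given number of
    years, assuming independence from year to year."""
    return 1.0 - (1.0 - annual_probability) ** years

print(return_period(0.01))    # 100.0 -> the "100-year" flood
print(return_period(0.0025))  # 400.0 -> the "400-year" flood

# Over 30 years, an area subject to the 100-year flood faces roughly a
# 26 percent chance of experiencing at least one flood of that size.
print(round(chance_of_at_least_one(0.01, 30), 2))  # 0.26
```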
Recognizing the magnitude of the opposition to the proposed Auburn Dam, in June 1996, the Corps recommended the Common Features Project, which included improving sections of the American and Sacramento Rivers’ levees, primarily by constructing cut-off walls, to provide small-scale improvements to flood protection for the Sacramento area while the options for more extensive improvements continued to be considered. The Water Resources Development Act (WRDA) of 1996 authorized $57 million for the Common Features Project, which included 24 miles of levee improvements on the American River and 12 miles on the Sacramento River along the western border of the Natomas Basin. Subsequently, the Corps concluded that it could provide the same level of flood protection on the American River by modifying only about 21 miles of levees. Figure 2 shows how a cut-off wall, which is composed primarily of a soil, cement, and clay mixture that forms an impermeable barrier when it hardens, can prevent water from seeping under or through a levee. In January 1997, numerous rivers in northern California flooded, causing extensive damage, although not in the Natomas Basin or downtown Sacramento. This flood, which was nearly as large as the 1986 flood, highlighted the continuing vulnerabilities of the existing flood protection system. In response, the WRDA of 1999 (1) modified the Common Features Project by adding about 3.8 miles of additional levee modifications along the American River and 10 miles on the Natomas Cross Canal, located on the northern border of the Natomas Basin, and (2) increased the project’s authorization from $57 million to $92 million. When Congress approves a flood protection project, it authorizes a specific amount of money for the project, which provides the basis for the maximum project cost. According to section 902 of the Water Resources Development Act of 1986, as amended, the maximum project cost is the sum of (1) the original authorized amount, with the costs of unconstructed project features adjusted for inflation; (2) the costs of modifications that do not materially alter the scope of the project, up to 20 percent of the original authorized amount (without adjustment for inflation); and (3) the cost of additional studies, modifications, and actions authorized by the 1986 Act or any later law. As a result of these provisions, the $92 million that Congress authorized for the Common Features Project in 1999 translates to an allowable maximum project cost of about $120 million in 2003. When Congress authorized the Common Features Project in 1996, federal law required that nonfederal partners pay 25 percent of the cost of flood protection projects. For the Common Features Project, these partners are the State of California Reclamation Board and the Sacramento Area Flood Control Agency. In this report, when we refer to project costs, including the maximum allowable project cost, we are referring to the combined federal and nonfederal expenditures. Estimated costs for the Common Features Project grew from $57 million in 1996, when the project was first authorized, to between $270 million and $370 million in 2002, primarily because the Corps changed the design of the levee improvements. For the American River levee improvements authorized in 1996, estimated costs more than tripled, due largely to changes in the design of the cut-off walls. New work authorized in 1999 added another $15 million to the cost increase.
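The section 902 formula described above reduces to simple arithmetic. In the sketch below, every dollar figure except the $92 million authorization is a hypothetical placeholder chosen only to show how the three components, including the 20 percent cap on minor modifications, combine into a maximum project cost near the roughly $120 million figure cited above; the Corps’ actual computation is performed at the level of individual project features.

```python
# Sketch of the section 902 maximum-project-cost arithmetic described
# above. All inputs other than the $92 million authorization are
# hypothetical placeholders.

def section_902_limit(authorized_amount: float,
                      inflation_adjusted_base: float,
                      minor_modifications: float,
                      separately_authorized: float) -> float:
    """Sum the three statutory components (all figures in millions):
    (1) the authorized amount with the costs of unconstructed features
        adjusted for inflation (passed in here already adjusted),
    (2) modifications that do not materially alter the project's scope,
        capped at 20 percent of the original authorized amount, and
    (3) studies, modifications, and actions authorized by later law."""
    capped_modifications = min(minor_modifications, 0.20 * authorized_amount)
    return inflation_adjusted_base + capped_modifications + separately_authorized

# Hypothetical example: a $92 million authorization whose unconstructed
# features have grown to $100 million with inflation, $25 million in
# proposed minor modifications (capped here at $18.4 million), and
# $2 million in separately authorized work yields a limit of about
# $120 million.
print(f"Allowable maximum project cost: "
      f"${section_902_limit(92.0, 100.0, 25.0, 2.0):.1f} million")
```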
The Corps has completed much of the American River work authorized in 1996, but it has not begun construction on the work authorized in 1999. Regarding the Natomas Basin component, estimated costs increased from $13 million to between $112 million and $212 million. Costs rose primarily because the Corps changed the design of the levee improvements and proposed adding other improvements to this component. The Natomas Basin work is in the early planning stages, and the Corps has not begun construction. As of July 2003, the Corps had spent or made plans to spend nearly all of the money authorized for the Common Features Project. It therefore will not be able to finish constructing the American River work authorized in 1996, begin constructing the American River work authorized in 1999, or complete planning for the Natomas Basin work unless Congress increases the project’s authorized funding. The Corps’ cost estimate for the American River levee improvements authorized in 1996 has more than tripled, from $44 million in 1996 to about $143 million in July 2002, as shown in table 1. As table 1 shows, costs rose primarily because of the increased costs of the cut-off walls. The Corps’ original design called for building cut-off walls to a depth of between 20 and 30 feet to prevent water from seeping through the levees and for allowing gaps in the cut-off walls at bridge and utility crossings. However, after the 1997 flood, the Corps realized it also needed to address the problem of water seeping under levees. It therefore increased the depth of the cut-off walls to between 60 and 80 feet and closed the gaps in the cut-off walls at bridge and utility crossings. For some sections of the levees, the Corps could not close the gaps using its standard approach for cut-off walls because of problems accessing the sites. As a result, the Corps employed a new and more expensive approach—known as jet grouting—to build cut-off walls by drilling and injecting concrete material into areas that were difficult to access. Closing the gaps in the cut-off walls by jet grouting raised estimated costs by $52 million, and increasing their depth raised costs by $24 million, according to the Corps’ July 2002 cost estimate. However, in September 2002, the Corps determined that fewer gaps needed to be closed using jet grouting, which should reduce costs to some extent. As of June 2003, however, the Corps had not incorporated these potential cost reductions into an official project cost update. As table 1 also shows, the Corps’ response to accidents that occurred during construction of the 1996 authorized work added $11 million to project costs. On three occasions, liquid material from the cut-off walls accidentally leaked into either the American River or the backyards of homes that are built against the levees. As a result, the Corps incurred costs cleaning up these spills and responding to new work requirements mandated by the Environmental Protection Agency to help prevent future leaks. Lastly, in addition to the cost increases related to the 1996 authorized work, new flood protection measures authorized in 1999 added about $15 million in costs to the American River component of the project. These measures include raising levee banks at two locations, installing gates and pumps at an existing drain, and installing cut-off walls in two additional levee segments.
Of the American River work authorized in 1996, the Corps has completed about 90 percent and must still close gaps in the cut-off walls at some remaining bridge and utility crossings to complete this work. For the levee improvements authorized in 1999, the Corps has done some planning but has not begun any construction. However, as of July 2003, the Corps had spent or had plans to spend $116 million of the $120 million authorized for the entire Common Features Project. The Corps could not give an exact accounting of how much of the $116 million it had spent on the 1996 American River work. However, on the basis of the information that the Corps provided, we estimate the Corps has spent, or made plans to spend, at least $103 million for planning and constructing the 1996 American River work. Because the Corps has spent or made plans to spend most of the project’s authorized funds, it will not be able to complete the 1996 and 1999 work on the American River unless Congress increases the project’s authorized funding. The Corps’ preliminary cost estimates for the Natomas Basin component of the project increased from $13 million in 1996 to between $112 million and $212 million in 2002, as shown in table 2. As table 2 shows, the Corps estimates that the costs for the original levee improvements will increase by between $47 million and $88 million due to design changes to add cut-off walls or provide other methods of flood protection to control seepage under levees. The Corps proposed new work in 2002 that will increase costs by between $37 million and $84 million. This work is located in an area of the levee where the Corps previously had constructed a cut-off wall to stop water from seeping through the levee. However, the Corps later determined that the cut-off wall was not deep enough to prevent water from seeping under the levee, and the proposed new work will address this problem. Finally, the Corps estimates that the additional work authorized in 1999 to modify levees along the Natomas Cross Canal, which empties into the Sacramento River at the north end of the Natomas Basin, will add between $14 million and $26 million to the cost of this component of the project. The Natomas Basin work—authorized in 1996 and 1999 and the additional work the Corps identified—is in the planning stages and no construction has yet begun. The Corps has been updating information on the extent of the levee problems and the costs of the improvements identified in the original plan and intends to submit a more precise cost estimate to Congress when it completes its planning. However, the Corps halted its Natomas Basin planning work in June 2003 because it had spent or made plans to spend nearly all of the money authorized for the entire Common Features Project. Given that the Natomas Basin levee improvements will cost significantly more than originally estimated and no construction has yet begun, identifying and evaluating alternative flood protection measures could result in cost savings. For example, one possible alternative method for flood protection identified by the Sacramento Area Flood Control Agency, as well as the Corps, involves lowering the water level in the Sacramento River during floods by diverting water through the Fremont Weir and into the Yolo Bypass, which is located at a point just before where the Sacramento River flows past the Natomas Basin. The Fremont Weir is a low dam that controls the movement of large volumes of floodwater from the Sacramento River by diverting it into the Yolo Bypass. 
The Yolo Bypass is a continuous, 40-mile open space corridor that is protected from urban development pressure by flood easements. (See fig. 3.) Lowering the water level in the Sacramento River as it passes the Natomas Basin could, among other things, improve the reliability of the Natomas Basin levees and may provide more cost-effective flood protection than the current Natomas Basin levee improvement plan. However, as of June 2003, the Corps had not yet analyzed the costs and benefits of modifying the weir and the bypass or any other alternative method for Natomas Basin flood protection. After the 1997 storm demonstrated vulnerabilities in the American River levees, the Corps significantly changed the design of the levee improvements but did not analyze the likelihood of cost increases for the Common Features Project. The Corps then began constructing the American River levee improvements without informing Congress that the changes could greatly increase the overall costs of the project. By the time that the Corps reported the significant cost increases in 2002, it had already spent or made plans to spend more than double its original estimate for the American River levee improvements authorized in 1996. Furthermore, as previously discussed, the Corps estimates that it will spend more than three times its original estimate by the time it completes this work. The Corps has been able to pay for these levee improvements by spending funds originally planned for the Natomas Basin and the additional American River improvements authorized in 1999. The Corps did not analyze the risk of cost increases after changing the design of the American River levee improvements in 1997 and, therefore, did not provide Congress with information on the project’s exposure to significant cost increases. A storm in January 1997 demonstrated that the American River levees were vulnerable to floodwaters seeping under them, which could cause them to fail. On the basis of this information, the Corps significantly changed the design of the levee improvements but did not conduct a cost risk analysis, or any other type of analysis, to determine the extent to which these changes would increase the costs for the Common Features Project. According to the Corps’ policy, project management teams should consider conducting a cost risk analysis when developing cost estimates for projects with considerable uncertainties. A cost risk analysis identifies the areas of a project that are subject to significant uncertainty about costs and provides decision makers with a range of potential costs for a project and the probability that these costs will be exceeded. For example, a cost risk analysis might determine that there is a 50 percent chance that costs for a particular project will exceed $5 million but only a 20 percent chance that costs will exceed $8 million. According to a report from the Corps’ Institute for Water Resources, this type of estimate is more accurate than a single point cost estimate and provides decision makers with better and more complete information. However, the Corps did not analyze the risk of cost increases after changing the design of the American River levee improvements even though it had identified several factors that could lead to significant cost increases. 
For example, by July 1997 the Corps recognized that it had to close the gaps in the cut-off walls at bridges and other areas and extend the depth of some walls from about 20 to about 60 feet, although the Corps had not developed a final design for these improvements. By identifying a project element with significant cost uncertainty—the design and depth of the cut-off walls—the Corps essentially performed the first step of cost risk analysis. However, the Corps did not follow through by quantifying this uncertainty and determining a range of potential costs for the cut-off walls or the likelihood that the potential costs within that range would be exceeded—the second and third steps of the cost risk analysis. Given that the Corps’ original cost estimate for the American River work was nearly equal to its estimates of the benefits, if the Corps had conducted a cost risk analysis, it would have shown whether there was a significant likelihood that project costs would be greater than the economic benefits. Furthermore, despite experiencing significant cost increases for the 1996 work, the Corps did not conduct a cost risk analysis to determine its exposure to potentially significant cost increases for the 1999 work. In addition, the Corps is not planning to conduct a cost risk analysis for the Natomas Basin improvements. According to Sacramento district officials, the Corps did not conduct a cost risk analysis because it did not believe such an analysis was necessary to account for uncertainties in the project. The Corps’ planning guidance generally directs the Corps to seek new spending authority from Congress if it determines that a project’s estimated costs exceed the maximum project cost before it has awarded a project’s initial contract. However, after making significant changes to the project’s design in 1997, the Corps did not reevaluate its cost estimate to determine if it could still implement the project without exceeding the maximum project cost. For example, the Corps did not estimate the potential for cost increases due to tripling the depth of some cut-off walls, which eventually added $24 million in estimated costs to the project. In addition, the Corps did not estimate the potential for cost increases due to closing the gaps in the cut-off walls at bridges and other areas. This expense was not considered in the Corps’ original 1996 cost estimate and potentially involved the use of jet grouting—a technology the Corps had not previously used to construct cut-off walls. Closing the gaps in the cut-off walls eventually added $52 million in estimated costs to the project. In spite of significantly changing the project’s design, the Corps awarded the project’s first contract without updating its cost estimate to determine whether it would need additional spending authority to complete the project. In June 1998, the Corps issued the first Common Features Project solicitation for bids to construct about 1.6 miles of the redesigned cut-off wall on the north bank of the American River. These levee improvements represented only about 8 percent of the total miles of planned American River levee improvements, but the bid that the Corps selected amounted to 24 percent of the estimated cost for all of the American River levee work. We believe that this difference should have (1) alerted the Corps to the possibility that costs were likely to be much higher than it had originally estimated and (2) warranted an update of the Corps’ cost estimate before it awarded the initial contract. 
According to a headquarters official, the Corps issued the first contract without updating its total project cost estimate because it would have been impractical to delay the project while the agency revisited cost estimates. Furthermore, according to the Corps, the first contract was expected to be more costly than future contracts because, among other reasons, it involved work on only a small stretch of the levee, which limited possible cost efficiencies. However, because the Corps did not analyze the potential for cost increases for the remainder of the American River levee improvements, it did not determine the likelihood that it would need additional spending authority to complete the project before it awarded the first contract. The Corps has paid for the significantly increased costs of the American River levee improvements by using funds planned for the Natomas Basin and for the additional American River work authorized in 1999. Although the Common Features Project has two separate components, and Congress approved parts of the project in 2 different years, the project is subject to a single maximum project cost. The Corps has the flexibility to spend Common Features Project funds as it sees fit and is not required to allocate funds in proportion to its original cost estimates for each component. Following project authorization in 1996, the Corps began to construct the American River levee improvements before the Natomas Basin improvements. Although the Corps exhausted the funds it had originally estimated that it would need to construct the American River levee improvements, it was able to continue implementing the American River work by spending funds it had originally planned to use for the Natomas Basin work. With the authorization of additional work in 1999, effectively raising the project’s maximum cost to about $120 million, the Corps also was able to use funds planned for this work to pay for the increased costs of the American River work authorized in 1996. After it awarded the first Common Features Project contract, the Corps was not required to inform Congress of project cost increases until it could not contract for additional work without exceeding the maximum project cost. According to the Corps, in March 2001 it briefed a number of Members of Congress on its intention to prepare a report that would evaluate the potential for the cost of the Common Features Project to exceed the project’s maximum cost. However, it was not until February 2002, more than 4 years after it significantly modified the design of the American River levee improvements, that the Corps reported to Congress for the first time that due to significant cost increases, it could not complete the project without exceeding the maximum project cost. By this time, the Corps had spent or awarded contracts for more than twice the amount it originally planned to spend on the American River levee improvements authorized in 1996 and had completed about 90 percent of the work. Furthermore, the Corps estimates that it will spend more than three times its original estimate by the time it completes this work. Because the Corps did not update its cost estimate or report the significant cost increases to Congress until most of the 1996 American River work was complete, Congress did not have the opportunity to determine whether the significantly more expensive levee improvements were still the most appropriate means of providing flood protection for Sacramento. 
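The cost risk analysis described earlier in this section, identifying the project elements with significant cost uncertainty, quantifying that uncertainty, and reporting the probability that given cost levels would be exceeded, can be illustrated with a small Monte Carlo simulation. The distributions and dollar figures below are hypothetical and are meant only to show the kind of exceedance-probability output such an analysis produces; they do not represent the Corps’ data or methods.

```python
# Minimal Monte Carlo sketch of a cost risk analysis. The cost model and
# its parameters are hypothetical, not Corps data.
import math
import random

random.seed(1)

def simulate_total_cost() -> float:
    """Draw one possible total project cost (in millions): a
    well-understood portion plus a highly uncertain cut-off-wall element."""
    well_understood = 2.0
    # Lognormal chosen so the uncertain element has a median of about
    # $3 million and an 80th percentile of about $6 million.
    uncertain_wall = random.lognormvariate(math.log(3.0), 0.82)
    return well_understood + uncertain_wall

draws = [simulate_total_cost() for _ in range(50_000)]

def chance_cost_exceeds(threshold: float) -> float:
    """Share of simulated outcomes in which total cost exceeds the threshold."""
    return sum(cost > threshold for cost in draws) / len(draws)

# With these assumptions the output echoes the example in the text:
# roughly a 50 percent chance that costs exceed $5 million and roughly a
# 20 percent chance that they exceed $8 million.
print(f"P(cost > $5 million) = {chance_cost_exceeds(5.0):.0%}")
print(f"P(cost > $8 million) = {chance_cost_exceeds(8.0):.0%}")
```

The same exceedance-probability format applies equally to reporting a range of project benefits, which is relevant to the discussion of benefit estimates later in this report.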
The Corps made mistakes in estimating the benefits for the American River levee improvements because it incorrectly counted and valued the properties that the levee improvements would protect and used an inappropriate methodology to determine the amount of flood damages they would prevent. Seven years after Congress authorized the project, the Corps has not yet prepared an accurate assessment of the benefits of the American River levee improvements. In addition, contrary to its guidance, the benefit estimate the Corps prepared in 2002 did not describe the range of possible benefits and the likelihood that the values in this range would be realized. This additional information, describing the uncertainty of the benefit estimate, would have provided decision makers with information on the likelihood that the project’s benefits would be greater than its costs. Furthermore, the Corps’ three-tiered quality control process did not identify the mistakes that we found during the course of our review. In its original 1996 analysis of the benefits and costs of the American River levee improvements, the Corps incorrectly counted the residential properties that the proposed levee improvements would protect. As a result, the Corps incorrectly calculated the benefits that these improvements would provide. According to the Corps, the methodology it used to count the number of residential properties in 1996 was “accepted practice and consistent with Corps guidance and technology applicable at the time.” In 2002, the Corps used a different methodology that incorporated new technologies and provided a more precise estimate of the number of properties protected. Using this new approach, the Corps determined that the actual number of residential properties protected by the levee improvements is about 20 percent less than its original estimate. The Corps did not calculate the amount that benefits would decrease due to this change. However, given the small difference between the original estimated annual benefits ($5.6 million) and the annual costs ($5.5 million) of the American River levee improvements, if the Corps had incorporated a more accurate estimate of the property inventory in its 1996 analysis, the benefits of these improvements may have been less than the costs. For flood protection projects, such as the Common Features Project, the Corps calculates benefits as the dollar value of the physical damages to residential, commercial, industrial, and public properties and infrastructure that the levee improvements prevent. To calculate the reduction in flood damage to properties, the Corps counts the number of properties located in the potential flood area—known as the floodplain— and then assesses the monetary value of the structures and their contents. The Corps uses this information to determine the property damage that would result from floods of various depths and to estimate the impact that the levee improvements would have in preventing this damage. It is important to remember that, in addition to the economic benefits from preventing property damage, levee improvements may reduce the risk of loss of human lives, which is a benefit that is not included in the Corps’ calculations. According to the Corps, about 305,000 people live within the American River floodplain and the number of lives lost because of levee failure would depend on a variety of factors, such as the size of the flood, warning time, time of day, and availability of evacuation routes. 
Because of the many factors involved and the lack of historical data, the Corps was not able to estimate the number of lives that would be lost as a result of levee failure and flooding in the Sacramento area. Although the Corps updated its benefit estimate in 2002 to incorporate the benefits from the new levee improvements authorized in 1999, a Sacramento district official acknowledged that the Corps again made mistakes in estimating the number of properties the levee improvements would protect. For the American River levee improvements authorized in 1999, the Corps identified an area that was larger than the area the levee improvements would actually protect. As a result, the Corps overestimated the number of properties protected and the benefits provided by the work authorized in 1999. According to a Sacramento district official, the Corps currently does not have the information it needs to determine the correct area the levee improvements would protect and therefore is unable, at this time, to provide a reliable estimate of the benefits from the 1999 work. In addition, the Corps made mistakes in its 2002 analysis in estimating the value of the residential properties the American River levee improvements would protect. The Corps’ policy calls for calculating a property’s value as the cost of replacing the structure less any depreciation, which accounts for a reduction in a structure’s value due to deterioration prior to flooding. Because the Corps had more than 100,000 residential properties to assess and a limited amount of time and resources, it determined depreciated replacement values for a small sample of 365 properties and then used the results to estimate the depreciated replacement values for all properties. However, the Corps did not correctly select the sample of properties. According to members of both the Appraisal Institute and The Appraisal Foundation, to accurately appraise a large number of properties by sampling requires a separate sample for each residential property type, such as single-family homes, condominiums, and apartment buildings. Instead of conducting a separate sample for each type of property, the Corps sampled all property types together and calculated an average depreciated replacement value for all property types. As a result, it is unclear whether the Corps accurately calculated depreciation, which in turn raises questions about its estimates of the value of the residential properties the American River levee improvements would protect. Moreover, the Corps did not use a consistent, objective appraisal methodology to calculate depreciation for the properties in the sample. Instead, the Corps subjectively determined depreciation. For example, if the Corps determined a structure was in “very good” condition it was assigned a zero percent, 5 percent, or 10 percent level of depreciation. However, the Corps could not provide us with its criteria for assigning the level of depreciation. Furthermore, the Corps’ economists who made these subjective decisions did not consult with the professional appraisers in the Corps’ Sacramento district office to identify alternative appraisal methodologies that may have been more appropriate. 
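The sampling concern raised above, that pooling all residential property types into a single sample can distort the average depreciated replacement value, can be illustrated with a small sketch. The property mix, counts, and dollar values below are hypothetical; the point is only to contrast a pooled average from a non-representative sample with per-type averages weighted by how common each type is in the floodplain (a stratified approach).

```python
# Sketch of why sampling each residential property type separately
# (stratified sampling) matters when estimating depreciated replacement
# values. All counts and dollar values are hypothetical.
import random

random.seed(7)

# Hypothetical floodplain inventory: share of each property type and the
# true mean depreciated replacement value (in dollars) for that type.
FLOODPLAIN_MIX = {"single_family": 0.80, "condominium": 0.15,
                  "apartment_building": 0.05}
TRUE_MEAN_VALUE = {"single_family": 150_000, "condominium": 90_000,
                   "apartment_building": 600_000}

def draw_sample(n: int, mix: dict) -> list:
    """Draw n appraised properties whose type frequencies follow `mix`."""
    types = random.choices(list(mix), weights=list(mix.values()), k=n)
    return [(t, random.gauss(TRUE_MEAN_VALUE[t], 0.15 * TRUE_MEAN_VALUE[t]))
            for t in types]

# Suppose the appraisal sample's mix of property types differs from the
# floodplain's mix (here it over-represents apartment buildings).
SAMPLE_MIX = {"single_family": 0.60, "condominium": 0.15,
              "apartment_building": 0.25}
sample = draw_sample(365, SAMPLE_MIX)

# Pooled approach: one average over all sampled properties.
pooled_mean = sum(value for _, value in sample) / len(sample)

# Stratified approach: a separate mean per property type, weighted by the
# floodplain's actual mix of types.
def type_mean(property_type: str) -> float:
    values = [v for t, v in sample if t == property_type]
    return sum(values) / len(values)

stratified_mean = sum(share * type_mean(t) for t, share in FLOODPLAIN_MIX.items())
true_mean = sum(share * TRUE_MEAN_VALUE[t] for t, share in FLOODPLAIN_MIX.items())

print(f"True floodplain mean value:     ${true_mean:,.0f}")
print(f"Pooled sample mean (biased):    ${pooled_mean:,.0f}")
print(f"Stratified estimate (weighted): ${stratified_mean:,.0f}")
```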
According to the Corps, the methods it used to determine depreciation are “standard practice at the Corps and are consistent with prior and existing guidance.” Nonetheless, we believe that the shortcomings identified above raise questions about the accuracy of the Corps’ property value estimates and, in turn, the project benefit estimates that are, in part, based on them. The Corps said it recognizes the need to strengthen its methodologies and is currently developing a new tool to estimate property values. Finally, the Corps’ 2002 analysis did not use the methodology described in Corps guidance to determine the number of properties that are located in the 100-year floodplain and the damages they would sustain in a 100-year flood. The 100-year floodplain is the land area that may be affected during a flood that has a 1 percent chance of occurring every year. Instead of following Corps guidance by directly counting the properties located in the 100-year floodplain and calculating the damages they would sustain in a 100-year flood, the Corps estimated the damages using a methodology that relied on the results from its incorrect 1996 count of properties. The Corps’ use of this alternative methodology further calls into question the accuracy of its benefit estimate for the American River levee improvements authorized in both 1996 and 1999, which is based in part on this flood damage assessment. The Corps told us that it could have directly counted the properties in the 100-year floodplain but the necessary information was not available in a “user friendly” format, and that the additional effort needed to collect more accurate information was not expected to change the results. As a result, the Corps did not believe this was an effective use of resources. However, the Corps did not provide us with any evidence to support the validity of calculating the 100-year flood damages as it did or to validate its contention that the results would not change if it had used the methodology prescribed in its guidance. The Corps has not followed its policy to provide Congress with an estimate of the range of possible benefits from the American River and Natomas Basin levee improvements and the likelihood that these benefits will actually be realized. In 1996, the Corps established a policy calling for benefit estimates and benefit-cost comparisons for flood protection projects to be reported with their associated probabilities. For example, rather than reporting that the benefits for a particular project are exactly $1.5 million, the Corps could report that it is 80 percent confident that project benefits will be at least $1 million but it is only 30 percent confident that benefits will reach $2 million. The Corps recognizes that this information can assist Congress in understanding the uncertainty involved in achieving various levels of benefits and in determining whether those risks justify funding the project. According to the Corps, it did not estimate a range of benefits for the Common Features Project in 1996 because the computer software used to assess the project’s benefits and costs was developed prior to the 1996 guidance and did not have the capability to calculate a range of values. 
However, in its 2002 reanalysis of project benefits, when a new version of the software capable of calculating benefit ranges and probabilities was available and costs for the American River work had significantly increased, the Corps chose not to calculate a range of benefits and instead continued to report a single estimate. Because the Corps’ 2002 estimates of benefits and costs for the American River work were so close in value (1.1 to 1), an analysis of the potential range of benefits would have revealed whether there was a significant probability that project benefits could be lower than the single estimate the Corps reported and perhaps lower than project costs. According to a Sacramento district official, the Corps did not use the new version of its software, which could have calculated the range of benefits, because it wanted to maintain consistency with information on flood protection it had previously released to the public. For example, the Corps has reported to the public that the American River levees have about a 1 percent chance of being breached by floodwaters in any given year. This estimate of flood protection could be different if calculated using the newer version of the software. The Corps was concerned that using the newer software would require it to report a different, and perhaps slightly lower, level of flood protection, which would confuse the public. However, by taking this approach, the Corps did not provide Congress with important information about the uncertainty surrounding the amount of benefits the project would provide. Three organizational levels within the Corps—district, division, and headquarters—reviewed and approved the 1996 and 2002 benefit analyses for the American River component of the Common Features Project, but these reviews did not identify the mistakes that we found. This issue raises questions about the adequacy and effectiveness of the Corps’ review process. We raised similar concerns about the Corps’ review process in our report on the Delaware River Deepening Project, in which we found significant miscalculations and invalid assumptions in the project’s economic analysis that the Corps did not find during its reviews. For the Common Features Project, the Corps’ Sacramento district office conducted the 1996 study that analyzed the technical and economic aspects of the proposed project and the 2002 report updating that information. The Corps’ Los Angeles district office reviewed the 2002 economic analysis for technical accuracy. Next, the Corps’ South Pacific division reviewed the analysis, although, following the Corps’ policy, it did not review the district’s work for technical accuracy or verify the underlying analysis. Rather, the division checked that the district’s reports had undergone a technical review, and that the district had issued a quality control certification report with the necessary district office-level approvals. The division then forwarded the project to headquarters. Corps headquarters also did not conduct a technical review of the analysis. Rather, headquarters checked that the district’s report adhered to Corps policies for conducting a benefit-cost analysis and addressed any concerns headquarters had raised. These review processes, however, were ineffective in detecting and correcting the mistakes in the benefit analyses we identified.
For example, for the 2002 study, we found no indication that the mistakes made in calculating the number and the value of residential properties or the mistake made in calculating flood damages were detected during the Corps’ review process. For the 2002 analysis of the American River levee improvements, a Corps economist from another district independently reviewed the benefits analysis. However, the review was not comprehensive enough to identify methodological problems. The review primarily focused on process-oriented issues, such as assessing whether the Sacramento district conducted certain analyses, rather than examining the technical aspects of how the analyses should have been and were conducted. It is critical that decision making and priority setting be informed by accurate information and credible analysis. Reliable information from the Corps about the costs and benefits for the American River component of the Common Features Project has not been available to this point. The analysis on which Congress has relied contained significant mistakes. Of most relevance today, the analyses for the remaining work do not provide a reliable economic basis upon which to make decisions concerning the American River levee improvements authorized in the WRDA of 1999. To provide a reliable economic basis for determining whether these improvements are a sound investment, the Corps’ analysis needs to adequately account for the risk that project costs could increase substantially, correctly count and value the properties the project would protect, and include information on the range of potential project costs and benefits. Moreover, because the Corps has not made some critical decisions regarding the Natomas Basin work, it is not yet known whether the Corps will be able to identify cost-effective flood protection options for this area. Specifically, the Corps has not determined whether it will (1) conduct a cost risk analysis of its current plan to identify its exposure to potentially significant cost increases or (2) evaluate the costs and benefits of alternatives to the current levee improvement plan to identify the most cost-effective flood protection option. In addition, identifying cost-effective flood protection involves reporting the range of potential project benefits and the probability of achieving them, which the Corps has not done for the Natomas Basin work. If the Corps begins implementing the authorized Natomas Basin work before it completes a comprehensive, accurate cost-benefit analysis, significant unanticipated cost increases could materialize, as they did with the American River work. Finally, for Congress to have confidence that the Corps’ economic analyses have been prepared accurately, the Corps’ quality control process would need to be sufficiently independent and detailed to identify the types of mistakes that our review revealed. For the American River levee improvements authorized in 1999 and for the planned Natomas Basin work, we recommend that the Secretary of the Army direct the Corps of Engineers to determine whether it is appropriate to conduct risk analyses of project costs and document the basis for that decision in its project files; report information to Congress on the range of potential project benefits and the probability of achieving those benefits, as called for in the Corps’ guidance, in future benefit-cost analyses; and arrange for a credible, independent review of the completeness and accuracy of the revised benefit-cost analyses.
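The second recommendation above calls for reporting the range of potential project benefits together with the probability of achieving them, rather than a single point estimate. The sketch below shows what such a statement could look like when computed from simulated benefit outcomes. The distribution is hypothetical, and its parameters were chosen only so the printed confidence levels land near the illustrative figures discussed earlier (about 80 percent confidence that benefits reach $1 million and about 30 percent that they reach $2 million).

```python
# Sketch of reporting benefit estimates with associated probabilities.
# The simulated benefit outcomes below are hypothetical and do not
# reflect the Corps' analyses of the Common Features Project.
import math
import random

random.seed(3)

# Assumed distribution of annual benefits, in millions of dollars.
benefit_draws = [random.lognormvariate(math.log(1.53), 0.51)
                 for _ in range(50_000)]

def confidence_benefits_at_least(threshold: float) -> float:
    """Share of simulated outcomes in which annual benefits meet the threshold."""
    return sum(benefit >= threshold for benefit in benefit_draws) / len(benefit_draws)

# Instead of a single number, decision makers see statements such as
# "we are X percent confident that annual benefits will be at least $Y million."
for threshold in (1.0, 1.5, 2.0):
    print(f"Confidence that annual benefits reach ${threshold:.1f} million: "
          f"{confidence_benefits_at_least(threshold):.0%}")
```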
For the American River project component, we also recommend that the Secretary of the Army direct the Corps of Engineers to reanalyze the benefits of the improvements authorized in the WRDA of 1999, correcting for the mistakes made in counting and valuing properties and the inappropriate methodology used to calculate flood damages. Additionally, for the Natomas Basin project component, we recommend that the Secretary of the Army direct the Corps of Engineers to analyze the costs and benefits of alternatives to the current levee improvement plan and identify the flood protection plan that provides the greatest net benefits; submit a report to Congress that includes a cost estimate for all of the planned Natomas Basin work; and wait until Congress authorizes funding that is based on the report before beginning construction of any Natomas Basin levee improvements. We provided a draft of this report to the Secretary of the Army for review and comment. In commenting on the draft report, the Army concurred with all of our recommendations. Perhaps most significantly, the Army acknowledged that on the basis of the Corps’ experience in constructing the American River levee improvements, there is a potential for substantial cost increases for the Natomas Basin levee improvements, and therefore the Corps needs to investigate a wider array of alternatives for providing flood protection for the Natomas Basin. In addition, although the Army concurred with our recommendation to reanalyze the benefits of the improvements added to the American River component of the project in 1999, it contended that the Corps has already completed the reanalysis. We disagree. In 2002, the Corps prepared an analysis of the economic benefits for the work added to the project in 1999. However, our review found several mistakes in this analysis, including mistakes in counting and valuing properties and using an inappropriate methodology to calculate flood damages. We continue to believe that before the Corps begins construction of the work added to the American River project component in 1999, it should reanalyze this work to ensure it is cost beneficial. The Army stated that the report does not recognize the significant role Congress played in 1999 by adding additional work to the project and providing funds for construction before the Corps had developed reliable cost estimates, which created the situation of which our report is critical. By focusing its comment on the relatively small amount of work added in 1999, the Army avoided the main issues regarding the American River levee improvements discussed in our report. Specifically, (1) the costs for the American River component of the project approved in 1996 are more than triple the original estimate; (2) the Corps had information, before construction began, that should have alerted it that costs would likely increase greatly; and (3) the Corps should have communicated this information to Congress at that time, but it did not. Furthermore, the additional funding provided by Congress for the work authorized in 1999 has not been used for that purpose, but rather has been used to fund the cost overruns for the work authorized in 1996. The full text of the Army’s comments, and our responses to them, are presented in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter.
At that time, we will send copies of this report to the appropriate congressional committees, other interested Members of Congress, and the Secretary of the Army. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix IV. To determine the reasons for the cost increases for the Common Features Project, we obtained the key cost estimation documents prepared by the U.S. Army Corps of Engineers’ (the Corps) Sacramento district office. Specifically, we obtained the Corps’ 1996 Supplemental Information Report, American River Watershed Project; the 1997 Addendum to the Supplemental Information Report; the 2002 Second Addendum to the Supplemental Information Report; and other related documents. We reviewed the Supplemental Information Report, which examined a number of different flood protection alternatives, because it provided the foundation, including cost estimates, for the project elements that the Corps later grouped together as the Common Features Project. The Addendum to this report documented the Corps’ first cost estimate that specifically and exclusively addressed the Common Features Project and included separate costs for both the American River component and the Natomas Basin component of the project. We reviewed the Second Addendum, the Corps’ most current official cost estimate, to establish the amount of and the reasons for the increased costs. We also analyzed construction contracts to determine the cost of responding to accidents that occurred during construction of the levee improvements authorized in 1996. We calculated the extent of inflation for both components of the project, using the Corps’ Civil Works Construction Cost Index System and a cost index from the Office of Management and Budget. Finally, we discussed the reasons for the cost increases with economists, cost estimators, project managers, engineers, and other staff from the engineering, construction operations, and planning divisions of the Corps’ Sacramento district office. To determine whether the Corps analyzed the likelihood of significant cost increases for the project and reported them to Congress in a timely manner, we reviewed the Corps’ (1) policy regarding the use of cost risk analysis in estimating costs for civil works projects (Engineer Regulation 1110-2-1302) and (2) requirements for updating project cost estimates and informing Congress of cost increases (Engineer Regulation 1105-2-100). We also reviewed a document from the Corps’ Institute for Water Resources on incorporating risk and uncertainty into cost estimation. We examined the American River levee improvement construction contracts to determine when the Corps became aware of cost increases for this component of the project. In addition, we reviewed the Corps’ annual budget documents related to the Common Features Project, which contained information on the project’s status and any changes or cost increases. We examined the Corps’ cost estimates from 1996, 1997, and 2002 for compliance with relevant Corps cost estimating guidance and to determine if the Corps provided Congress with accurate information about significant expected cost increases.
Finally, we discussed the Corps’ cost estimating procedures and awareness of likely cost increases with cost estimators, project managers, and other staff from Corps headquarters and the Sacramento district office. To determine whether the Corps correctly estimated the economic benefits of the American River levee improvements, we reviewed the extent to which the Corps followed accepted economic practices and whether the major assumptions used in the analysis were reasonable and well supported. We obtained the Corps’ 1996, 1997, and 2002 economic analyses for the Common Features Project and discussed the sources of these data and the conduct of the analyses with the Corps economists responsible for preparing them. We also discussed the basis for the hydrologic and engineering assumptions used in the economic analysis with the Corps specialists who provided this information. In addition, we obtained the Corps’ guidance (Engineer Regulations 1105-2-100 and 1105-2-101 and Engineer Manual 1110-2-1619) on the accepted economic and engineering methodologies for incorporating risk and uncertainty into benefit estimation. To verify and supplement the information we received from officials in the Corps’ Sacramento district office, we spoke with, among others, Corps officials at the Hydrologic Engineering Center and the Institute for Water Resources and experts in real estate appraisal from The Appraisal Foundation and the Appraisal Institute. Where we identified problems that affected the accuracy of the benefit analysis, we discussed them with the responsible Corps staff and considered any new data or revisions that they provided. Finally, we identified the roles and responsibilities of the Sacramento district office, South Pacific division, and headquarters in the Corps’ internal quality control process for the Common Features Project. We also obtained copies of the quality control reviews and the reviewers’ comments on the economic analysis and discussed the comments and their resolution with Corps officials. We conducted our review from September 2002 through September 2003 in accordance with generally accepted government auditing standards. In this report, unless otherwise noted, we present costs in the dollar values for the years in which they were estimated, not in constant dollars. For example, the Corps estimated the original cost of the project as $57 million in 1996, and that is how we present it in this report. We did not adjust the costs to constant dollars to account for inflation, in order to maintain consistency with the figures in published Corps reports on the Common Features Project. However, table 3 shows the Corps’ 1996 cost estimates for key components of the Common Features Project and also shows the same estimates adjusted to 2002 constant dollars to account for inflation. The following are GAO’s comments on the Department of the Army’s letter dated September 22, 2003. 1. Although the Army asserted that we made some factual errors, its subsequent comments failed to identify any specific factual errors. 2. The Army believes that the report does not recognize the significant role Congress played in 1999 when it added additional work to the project and authorized funds for construction before the Corps had developed reliable cost estimates. While the Congress did add work to the Common Features Project in the Water Resources Development Act (WRDA) of 1999 without a Corps report, the cost of this work is relatively small in comparison to the work authorized in 1996.
We believe the Army’s comment is not relevant to the main focus of our report, which is the significant cost increases for the work the Corps recommended and the Congress authorized in 1996. For example, the costs for the work the Corps recommended on the American River more than tripled from $44 million in 1996 to $143 million in 2002. In contrast, the estimated cost for the work on the American River levees the Congress added in 1999 is about $15 million. We believe our report accurately reflects the limited impact the addition of work in 1999 had on the American River component of the project’s overall cost. Furthermore, the additional funding provided by Congress for the work authorized in 1999 has not been used for that purpose, but rather has been used to fund the cost overruns for the work authorized in 1996. 3. The Army stated that the consistent provision of funds to the Corps by Congress, at or exceeding the Corps’ budget request, created the situation of which our report is critical. We do not agree. Two of the main issues in our report are that the costs of the American River component of the project nearly tripled due to design changes, and that the Corps began construction of the American River levee improvements without analyzing the likelihood of these cost increases or reporting the potential cost increases to Congress. The fact that Congress provided funding for the project does not absolve the Corps of its responsibility to communicate project cost increases in a timely manner. 4. The Army implied that Congress was informed of potential cost increases for the Common Features Project during the yearly appropriations process. This is not the case on the basis of our review of all of the Corps’ submissions for the annual appropriations process from 1997 through 2001. As our report states, it was not until February 2002, more than 4 years after it had significantly modified the design of the American River levee improvements, that the Corps informed Congress for the first time of the significant cost increases for the American River component of the project. 5. The Army stated that the levee improvements were not originally designed to withstand the destructive effect of seepage and that this design was not an error. Rather, an unknown condition (i.e., the potential for destructive seepage under the levees) resulted in design changes and increased costs. Our report does not criticize the Corps for not anticipating the need for a levee improvement design that would stop seepage under the levees. We acknowledge that the flood of January 1997 caused the Corps to change the design of its levee improvements. However, as our report notes, the Corps did not develop new cost estimates after making these design changes and did not communicate the resulting significant cost increases to Congress in a timely manner. 6. We do not consider the separable elements of the Common Features Project as separate projects. This report makes clear that there is one Common Features Project comprised of an American River component and a Natomas Basin component. 7. We agree with the Army that, in 1996, the Corps was not aware of any significant areas of cost uncertainty for the proposed American River levee improvements. However, as the Army recognizes, the flood of January 1997 showed that the Corps’ design for the levee improvements should be significantly modified. 
After making these design changes, though, the Corps did not estimate the potential for cost increases due to tripling the depth of some cut-off walls or closing the gaps in cut-off walls at bridges and other areas. These design changes eventually added $76 million to the cost of the project.

8. The Army stated that the Corps believes that its review process results in decision documents that form the basis for sound recommendations. However, in two recent cases, we found that the process did not serve its intended purpose. As this report documents, the Corps’ review process was ineffective in detecting and correcting the mistakes in the benefit analyses we identified. We raised similar concerns about the review process in our June 2002 report on the Delaware River Deepening Project.

9. We did not recommend that the Corps reanalyze the costs and benefits of the work authorized in 1996. We agree that a reanalysis of this work, which is nearly complete, would be of little value. However, we continue to believe that a reanalysis of the economic benefits from the work authorized in 1999 is necessary because the Corps’ initial analysis contained significant mistakes and construction of the work has not yet begun. Before beginning construction of this work, the Corps should verify that the work is in fact cost beneficial. In addition, the Corps should arrange for a credible, independent review of the completeness and accuracy of its reanalysis.

10. The Army contends that the Corps has already completed the reanalysis we recommended of the work added to the American River component of the project in 1999. We disagree. The Corps analyzed the economic benefits for the 1999 work added to the project for the first and only time in 2002. Our review found several problems with the Corps’ 2002 analysis of the benefits from this work. For example, we found that the Corps had made mistakes in how it counted and valued properties and had used an inappropriate methodology to calculate flood damages. As a result, the Corps has not yet prepared an accurate assessment of the benefits resulting from the 1999 work. The Corps has not begun any construction for the work authorized in 1999, and it is not currently known if the benefits provided by this work are greater than the costs. Consequently, we recommend a reanalysis of these benefits in order to correct the mistakes that we identified.

11. The Army stated that the Corps does not conduct individual real estate appraisals to determine the value of each property that could be damaged in a flood. Our report does not suggest that the Corps should conduct such appraisals. Rather, we identified weaknesses in the sample the Corps used to estimate property values and its methodology for calculating depreciation for the properties in the sample. For example, to accurately appraise a large number of properties by sampling requires a separate sample for each residential property type, such as single-family homes and apartment buildings. However, the Corps sampled all property types together. In addition, the Corps did not use a consistent objective appraisal methodology to calculate depreciation for the properties in the sample. These weaknesses raise questions about the accuracy of the Corps’ property value estimates and the project benefit estimates that are, in part, based on them.

12. The Army claims that there are approximately 163,000 residential structures in the 400-year floodplain. This is not correct on the basis of the Corps’ most current analysis.
The estimate of 163,000 residential structures comes from the Corps’ 1996 economic analysis. However, in 2002, the Corps updated its analysis and found that it had overestimated the number of residential structures in 1996. The Corps’ 2002 analysis estimated that there were 115,347 residential structures in the 400-year floodplain.

In addition to the individual above, Jeff Arkin, Chuck Barchok, Judy Hoovler, Richard Johnson, Mark Metcalfe, Ryan Petitte, and Stephen Secrist made key contributions to this report.
In 1996 and 1999, Congress authorized the U.S. Army Corps of Engineers (the Corps) to strengthen sections of the American River and Natomas Basin levees that provide flood protection for Sacramento, California. In 2002, the Corps reported that the cost of this work, known as the Common Features Project, had increased significantly. GAO was asked to determine why costs increased, the extent to which the Corps analyzed and reported the potential cost increases to Congress in a timely manner, and whether the Corps correctly estimated economic benefits. Estimated costs for the Common Features Project rose from $57 million in 1996 to between $270 million and $370 million in 2002--primarily because of design changes. For the American River, costs more than tripled from $44 million to $158 million in 2002, primarily due to changes such as deepening the walls built in the levees (cut-off walls) to prevent seepage and closing gaps in the walls at bridge crossings. Cost estimates for the Natomas Basin--still in planning--increased from $13 million in 1996 to between $112 million and $212 million in 2002. The Corps has yet to analyze alternative flood protection approaches for the Natomas Basin that might be more cost-effective. Furthermore, it has not analyzed its exposure to potentially significant cost increases for the Natomas Basin work. The Corps did not fully analyze, or report to Congress in a timely manner, the potential for significant cost increases for the American River levee improvements authorized in 1996. Specifically, a severe storm in the Sacramento area in January 1997 indicated some cut-off walls would need to be much deeper and therefore would be more costly. Corps guidance generally directs the Corps to seek new spending authority from Congress if it determines, before issuing the first contract, that it cannot complete the project without exceeding its spending limit. However, the Corps began construction in 1998 without analyzing or reporting potential cost increases. By 2003, it had committed most of the funding authorized for the entire Common Features Project to the 1996 American River work, leaving the additional 1999 work and the Natomas Basin improvements without funding. In 1996, the Corps incorrectly estimated the economic benefits for the American River levee improvements by overcounting the residential properties to be protected. In 2002, it incorrectly estimated benefits for the 1999 improvements by, among other things, miscalculating the size of the area that the improvements would protect. The Corps' quality control process was ineffective in identifying and correcting these mistakes.
Delta II has historically been NASA’s preferred medium class launch vehicle for its science missions, launching 36, or nearly 60 percent, of the agency’s science missions since October 1998. Known as the workhorse of the launch industry, the Delta II comprises a group of expendable rockets that can be configured as two- or three-stage vehicles and with three, four, or nine strap-on solid rocket motors depending on mission needs. The largest configuration is referred to as Delta II Heavy. The Commercial Space Act of 1998, the U.S. Space Transportation Policy, and the National Space Policy of the United States require NASA, to the maximum practical extent, to acquire launch vehicles from the U.S. commercial sector. NASA uses the NASA launch services contract to acquire small, medium, and intermediate launch vehicles for NASA’s science, exploration, and operational missions. The launch services contract is a multiple award indefinite delivery indefinite quantity (IDIQ) task order contract. The original launch services contract competition in 2000 resulted in the award of firm-fixed-price IDIQ launch services contracts with not-to-exceed prices to Boeing Launch Services Incorporated (Boeing) and Lockheed Martin Commercial Launch Services Incorporated (Lockheed), which later merged to form United Launch Alliance, for the Delta and Atlas vehicles. In 2005, NASA awarded Orbital an IDIQ launch services contract for the small class launch vehicles Taurus, Taurus XL, and Pegasus XL, and in 2008 NASA awarded SpaceX an IDIQ launch services contract for the small class Falcon 1 and medium class Falcon 9 vehicles. Pursuant to the “on-ramp” clause in the launch services contract, the original solicitation remains open during the life of the contract to allow launch services providers—including contractors who have already been awarded an IDIQ launch services contract as well as other contractors—to introduce launch vehicles or technologies that were not available at the time of the award of the initial contract. See figure 2 for launch vehicles discussed in detail in this report. When NASA needs to acquire launch services for science missions, NASA’s Launch Services Program (LSP), which is responsible for acquiring launch vehicles for NASA’s Science Mission Directorate, issues a request for launch service proposals. All contractors who have been awarded a launch services contract at the time NASA issues the request for launch service proposals are contractually obligated to submit a proposal, unless the contracting officer waives the requirement. NASA considers each proposal according to specified criteria and awards the task order to the contractor who provides the best value in launch services that meet NASA’s requirements. The ordering period under the NASA Launch Services I contract began in 2000 and expired in summer 2010. On September 16, 2010, NASA announced the award of the NASA Launch Services II contract, which, like the NASA Launch Services I contract, is a multiple award IDIQ contract. NASA selected four companies for awards: Lockheed, Orbital, SpaceX, and United Launch Alliance, and each contract has an ordering period through 2020. Orbital did not respond to the contract solicitation for its Taurus II vehicle. According to Orbital officials, the company plans to take advantage of the on-ramp clause of the NASA Launch Services contract in summer 2011. According to LSP officials, competition between the launch service providers is intended to lead the providers to sell NASA launch services at prices less than the negotiated not-to-exceed prices.
This competition is limited in the medium and intermediate classes, however, because of the small number of providers who have been awarded a contract. For example, United Launch Alliance is currently the only provider of intermediate class launch vehicles for Earth orbit escape missions and SpaceX is currently the only provider of a medium class launch vehicle on the Launch Services II contract. While NASA’s LSP is responsible for acquiring launch services for science missions, several NASA offices are involved in the development of the new commercial launch vehicles that NASA plans to use to replace the Delta II. NASA’s LSP is part of NASA’s Space Operations Mission Directorate but also supports, and has formal relationships with, the International Space Station Cargo Crew Services program within the Space Operations Mission Directorate and the Commercial Orbital Transportation Services program within NASA’s Exploration Systems Mission Directorate. See figure 3.

NASA Commercial Orbital Transportation Services (COTS) program: The COTS program, which began in 2006, is intended to facilitate the development and demonstration of end-to-end transportation systems, including the development of launch and space vehicles, ground and mission operations, and berthing with the International Space Station. Under this program, NASA provides funding to SpaceX and Orbital through funded Space Act Agreements to help offset International Space Station-related developmental costs of the Falcon 9 and Taurus II, respectively. Both the SpaceX vehicle, Falcon 9, and the Orbital vehicle, Taurus II, are medium class launch vehicles similar in capability to the Delta II. SpaceX plans three demonstration flights under the COTS agreement, while Orbital plans one such flight. Under these agreements NASA provides progress payments, offsetting a portion of the developer’s costs, when the partners meet established milestones.

NASA’s Cargo Crew Services program: The program is responsible for acquiring commercial cargo resupply services for the International Space Station through the Commercial Resupply Services (CRS) contract with SpaceX and Orbital for flights beginning in calendar year 2011. NASA has ordered 12 resupply missions to the International Space Station from SpaceX, and 8 from Orbital. SpaceX and Orbital will use their respective launch vehicles, Falcon 9 and Taurus II, to provide these services.

NASA’s LSP is taking steps to address risk and ensure the success of the last planned Delta II-launched missions. LSP’s risk mitigation strategy uses established oversight mechanisms to address areas of concern and to assure the success of all remaining Delta II-launched missions. LSP has issued task orders to United Launch Alliance for the final three Delta II missions through the Launch Services I contract. LSP exercises oversight of United Launch Alliance through a combination of specific government approvals and targeted government insight into contractor activities and designs. Specific areas requiring government approval include spacecraft-to-launch vehicle interface control documents, mission-unique hardware and software design, top-level test plans, and requirements and success criteria for integrated vehicle systems. The government also has insight into baseline vehicle design, analyses, models and configuration management, critical flight hardware pedigree, and postflight anomaly and compliance evaluations.
An important element in LSP’s oversight approach is the use of engineering review boards to independently review and validate the competency and adequacy of the contractor’s technical efforts. According to LSP officials, having government systems engineers with the technical expertise to review or repeat the contractors’ engineering analyses is a key factor in high launch success rates. From 1990 through 2009, NASA achieved about a 98 percent launch success rate—compared to about a 69 percent success rate for U.S. commercial launches without significant U.S. government involvement. Likewise, United Launch Alliance officials indicate that their company has never had a mission failure, successfully launching 37 missions in a 36-month period from December 2006 through December 2009. LSP is taking some additional actions to mitigate risk associated with the remaining Delta II flights. Due to the current low flight rate of the vehicle, LSP is conducting targeted field site closeout photo reviews during vehicle processing for each remaining NASA Delta II mission. According to agency officials, a closeout photo review includes photographing system components as assembly and processing steps are completed, and reviewing photographs to ensure assembly and processing steps were conducted as required. NASA conducts similar closeout photo reviews on the Pegasus and Taurus launch vehicle missions for the same reason—low flight rates. LSP has also identified several specific areas of concern with the remaining Delta II flights—including contractor workforce expertise, postproduction subcontractor support, spare parts, and launch pads—that must be mitigated where possible to ensure the success of the remaining missions.

Workforce Expertise: United Launch Alliance is taking steps to mitigate the risk that workforce expertise may be lost. For example, it actively tracks the certifications necessary for assembly, integration, ground operations, processing, and launch of the Delta II. United Launch Alliance also tracks the current certifications of the Delta II workforce and provides training necessary to retain the required certifications. To retain critical skills, United Launch Alliance uses essentially the same workforce for the Delta II and Delta IV, a vehicle that shares significant commonality with the Delta II. LSP officials indicated that the LSP workforce would remain essentially unchanged through the last missions as LSP is responsible not only for Delta II but for all NASA science mission launches.

Postproduction Subcontractor Support: LSP is funding a postproduction support relationship, managed by United Launch Alliance, with key Delta II subcontractors at a cost of approximately $8 million per year. According to agency officials, this will ensure that subcontractors with knowledge and expertise needed to manufacture or repair subcomponents are available if needed. United Launch Alliance has contracted with Alliant Techsystems, Incorporated for solid rocket motors, Pratt & Whitney Rocketdyne for the first stage engine, and Aerojet for the second stage engine through fiscal year 2011.

Spare Parts: United Launch Alliance has implemented a process, which has previously been used on the last flights of other vehicles, to ensure key spare parts are available to support the final Delta II missions. This process identifies irreplaceable or critical hardware whose unavailability, loss, or damage cannot be remedied without serious impact to program cost, schedule, or technical performance.
United Launch Alliance has identified 28 such items for Delta II and will mitigate the risk to spare parts availability by either purchasing additional spares beyond planned needs or implementing quality assurance activities to minimize risk. In addition, LSP personnel have been assigned to assess and monitor Delta II launch vehicle spare parts during the retirement of the Delta II. United Launch Alliance also indicated that the five currently unsold Delta II vehicles in the heavy configuration could be cannibalized for parts, if needed, for the remaining NASA Delta II missions.

Launch Pads: NASA has assumed responsibility for the operation and maintenance of the Delta II launch pads—Space Launch Complexes 17A and 17B at Cape Canaveral Air Force Station and Space Launch Complex 2 at Vandenberg Air Force Base—from the Air Force. NASA will perform continuing periodic maintenance through the final planned NASA Delta II flights from Space Launch Complex 17B in September 2011 and Space Launch Complex 2 in June and October 2011. The cost of ongoing operation and maintenance is included in the launch services contracts between LSP and United Launch Alliance. In some instances, however, efforts beyond continuing maintenance are necessary. For instance, NASA is recertifying the fuel storage and water deluge systems at Space Launch Complex 17B. See figure 4.

NASA plans to leverage ongoing investments in the COTS and CRS vehicles—Falcon 9 and Taurus II—to acquire a new medium launch capability for science missions in the relative cost and performance range of the Delta II. LSP has been coordinating with NASA and contractor officials responsible for these efforts. Further, NASA revised its policy directive on launch vehicle certification to allow the providers to certify their vehicles more quickly than would have been possible under the previous policy. Due to an active small class launch vehicle market and NASA’s relatively low need for vehicles in this class, the agency has no immediate plans to develop additional small class launch vehicles. Rather, the agency will acquire small class launch services using the NASA Launch Services II contract. NASA’s plan to transition from Delta II to other medium class launch providers is to eventually certify the vehicles being developed for space station resupply for use by NASA science missions. This plan originated from a series of studies, beginning in 2006, that examined launch market conditions and assessed whether the agency should continue to fly Delta II beyond the then-current Delta II manifest. These studies found that NASA should phase out Delta II, begin working with alternative launch providers to acquire a new medium class launch vehicle, and use vehicles—such as Atlas V or Delta IV—as an interim solution until alternative launch providers are ready. These studies culminated in an August 2009 report to Congress which laid out NASA’s plans for transitioning to future small and medium class launch vehicles and discussed contingencies, each of which could involve additional time or funding, should the preferred solution not come to fruition as planned. For example, NASA could:

Continue indefinitely to launch medium class science missions on the Atlas V, which is capable of launching payloads with more size and mass than Falcon 9 or Taurus II but is about twice as expensive.
Launch multiple missions simultaneously on larger launch vehicles, which is a viable option in some instances, but according to NASA is difficult to coordinate due to specific factors such as orbit, destination, and development and launch schedule.

Use the five remaining Delta II heavy configuration vehicles. Considering the additional infrastructure and postproduction support costs that Delta II would require, however, its costs could exceed those of the Atlas V, and, further, it cannot easily be used for most earth science missions because of launch facility constraints.

Use foreign launch vehicles or decommissioned excess Department of Defense (DOD) intercontinental ballistic missiles, such as Minotaur, as space transportation vehicles. The use of such vehicles, however, is governed by law and policy and would require time to be approved.

NASA believes that its preferred approach would leverage ongoing NASA investments in Falcon 9 and Taurus II made by the COTS and CRS programs and allow it to negotiate discounted prices for increased quantities of a common launch vehicle. LSP’s involvement in the COTS and CRS efforts is intended, in part, to smooth NASA’s transition to future medium class launch vehicles for science missions by giving LSP detailed, firsthand technical knowledge of the candidate vehicles. NASA’s LSP has been in coordination with Orbital, SpaceX, and NASA’s COTS and CRS programs for several years. For example, in addition to the funded Space Act Agreements under the COTS program, LSP entered into a nonreimbursable Space Act Agreement with Orbital for technical insight into the development and design of the Taurus II in 2008. According to LSP officials, this partnership is expected to result in the agency gaining a better understanding of the launch vehicle, which will assist LSP when it begins the certification process for science missions and will allow Orbital access to NASA expertise for review of launch vehicle development documentation and independent assessments of various Taurus II systems and performance. This relationship has already provided benefits. For example, through this relationship, LSP persuaded Orbital to include additional engine testing in the Taurus II test strategy that will ultimately contribute to the certification effort for science missions. LSP does not have such an agreement in place with SpaceX; however, because SpaceX was awarded NASA Launch Services contracts in 2008 (Launch Services I) and 2010 (Launch Services II), LSP may gain insight into SpaceX’s design for Falcon 9 that should provide similar benefits. NASA has not awarded SpaceX any task orders under those contracts; if it had, LSP’s technical insight into Falcon 9 would be greater. In 2007, LSP entered into a Memorandum of Understanding with the Commercial Crew and Cargo Program Office, which manages the COTS demonstration missions. Although LSP is not responsible for mission success, under this agreement it serves in a consulting role. For example, LSP is a member of the COTS advisory team and provides technical guidance, mentoring, and lessons learned relating to launch system development. LSP also attends technical meetings, such as preliminary design reviews, as requested. LSP also has a Memorandum of Agreement in place with the International Space Station program to support the CRS missions.
Under the terms of this agreement, LSP will perform nonrecurring and limited recurring technical assessments and make recommendations for specific launch vehicle hardware, software, and analyses. While LSP is not responsible for mission success, it will perform launch vehicle mission and fleet risk assessments, focusing on systems that have been historical causes of mission failure. The assessments that LSP will conduct include a postflight data review for each flight; a mission-unique design review for the first flight of each launch vehicle configuration; a “test like you fly” hardware qualification assessment for launch vehicle propulsion, flight controls, and separation systems; and an assessment of the launch vehicles’ guidance, navigation, and control design and an assessment of flight software and recurring software development practices. Some of these assessments, such as the “test like you fly” hardware qualification assessment, could be applicable to the eventual certification process for science missions and LSP technical oversight of new launch providers, as long as the same launch vehicle configuration is used. This could shorten the length of time required to certify the vehicles for science missions. The formal certification process for each launch vehicle will commence after LSP awards a task order to the contractor for a science mission. Under the Launch Services II contract, a vehicle cannot be considered for a launch service task order for a science mission until it has had a successful first flight. Falcon 9 had a successful first flight in June 2010, but has not been awarded a science mission. The Taurus II’s first flight will be no sooner than September 2011. According to NASA, on average it takes about 3 years once a task order is awarded to complete certification. Therefore, if Falcon 9 is awarded one of the first science missions under the Launch Services II contract, assuming only limited technical challenges and only minor changes are needed for certification, NASA could certify Falcon 9 to category 2 by mid-2013 and to category 3 by late 2013 or early 2014. According to NASA, if resources are available, LSP may proactively begin the formal certification process for Falcon 9 or Taurus II prior to award of a task order for a science mission under the Launch Services II contract. See figure 5 for a time line for certifying Falcon 9 based on a potential task order award in early 2011. NASA revised its launch policy to enable more certification opportunities for emerging launch vehicle providers, and according to LSP officials, these changes could also speed up the certification process. LSP officials indicate that the former policy could have required 10 or more years to certify a new vehicle to category 3, the highest level of vehicle certification, and given the imminent retirement of the Delta II, NASA considered this gap too large. NASA eventually plans to certify the Falcon 9 and Taurus II vehicles to category 3. However, NASA may initially certify the vehicles to category 2, the next highest certification, depending on the payload risk classification of the initial mission or missions to use the new vehicle. The Science Mission Directorate assigns payload risk classifications, A through D, with A being least tolerant to risk. See table 1. The risk posture then becomes a requirement in securing a launch vehicle through the Launch Services contract.
Under the revised policy, there are three alternative approaches to certification to category 3, as shown in table 2. When a category 3 certification is required of one of the new vehicles, NASA plans to use the certification alternative that requires 3 successful flights (2 of which must be consecutive) of the same vehicle configuration, a flight margin verification, and a full vehicle root cause analysis, among other analyses, to certify the vehicles. If the first NASA mission using one of the new vehicles requires only a category 2 certified vehicle, then NASA will use one of the category 2 alternatives as appropriate. Currently, Orbital has 8 Taurus II CRS missions under contract with NASA, and SpaceX has 12 Falcon 9 CRS missions under contract with NASA, as well as commercial contracts. These flights, if successful, may be applied to NASA’s certification requirements, as long as at least 3 successful flights are based upon the same vehicle configuration. Changes to a vehicle’s configuration—the distinct combination of core propulsive stages and hardware—will reset the number of required successful flights. NASA’s near-term plan for small class launch vehicles is to rely on small class providers through the NASA Launch Services II contract because the number of small class launch vehicles currently available is sufficient to meet NASA’s needs. The small class launch services market currently has five U.S. launch vehicles—SpaceX’s Falcon 1; Orbital’s Taurus and Pegasus; Lockheed’s Athena; and DOD’s Minotaur—although Minotaur is not readily available to NASA. NASA’s strategy is to seek competition without encouraging oversupply, which will allow the market to stabilize over the next several years. According to agency officials, fostering the small class launch vehicle market is important because new launch service providers have tended to start with smaller vehicles before moving on to develop larger ones. However, NASA forecasts only about one science mission in the small class per year. Because DOD has typically used Minotaur launch vehicles in the small class, NASA asserts that its needs, along with the needs of the commercial market, can provide only enough business to support about one to two providers in the small class. NASA has a reasonable plan for addressing the medium launch capability gap, but its approach has inherent risks that need to be mitigated. First, NASA has not developed detailed estimates of the time and money required to resolve technical issues likely to arise during the launch vehicle certification process. Second, both Taurus II and Falcon 9 have already experienced delays and history indicates more delays are likely as launch vehicle development is an inherently risky endeavor. Finally, neither potential provider currently has the proper facilities, such as a West Coast launch site, needed to launch the majority of NASA earth science missions requiring a medium capability. NASA has not prepared a detailed estimate of the potential costs to resolve technical issues and implement modifications and upgrades required for NASA’s specific science mission needs that are likely to arise during the certification process for Falcon 9 and Taurus II. Based on the historical costs of certifying launch vehicles such as Atlas V, LSP estimates about $15 million could be required for each vehicle.
LSP officials noted that if serious problems or shortfalls are discovered during the certification process, or extensive changes need to be made to the basic launch vehicle design to accommodate science mission needs, these costs could be higher. For example, if the certification process uncovers inadequacies with the contractors’ qualification test program or the flight margin verifications uncover significant differences between predicted and actual system performance in flight, NASA or the contractor may be faced with significant cost increases or delays. Ancillary changes to components such as connectors and payload adapters needed to accommodate the science mission spacecraft are unlikely to increase estimated costs. According to NASA officials, relative immaturity of a vehicle and inexperience of a provider could contribute to higher costs and additional time needed for certification. Further, any additional work needed may not be achievable within the expected 3-year time frame of the certification process. Based on anticipated labor rates, LSP estimates that the total cost to conduct the assessments necessary to certify each vehicle will be about $10 million. These costs are in addition to the approximately $15 million NASA anticipates will be required to resolve technical issues and implement required modifications and upgrades resulting from the certification assessment. According to program officials, these costs would be passed on to the customer, the Science Mission Directorate, which would determine how to budget for these costs. For example, the directorate could assign these costs to the first mission to use a new launch vehicle, or amortize the cost over the first several missions. However, it is currently undetermined who would pay the costs for fixes needed to meet NASA’s specific science mission requirements. In the case of the Atlas V, such costs were shared by NASA, DOD, and the contractor. The responsibility for these costs will have to be negotiated as needed between LSP, the Science Mission Directorate, and the contractors. As additional costs are currently unknown, according to Science Mission Directorate officials, NASA has yet to budget for them. GAO’s Cost Estimating Guide, however, indicates that assumptions should be made about the costs of unknowns and that contingency funding should be reserved to cover potential costs. Both SpaceX and Orbital have experienced delays in the development and testing of Falcon 9 and Taurus II, respectively. We reported in June 2009 that both companies were working under aggressive schedules and their vehicles were experiencing schedule delays—at the time, the first flight of the Falcon 9 was scheduled for June 2009 but slipped to June 2010, whereas the first flight of the Taurus II was scheduled for December 2010 and has now slipped to no earlier than September 2011. Further, our past work and NASA’s experience indicate that more delays are likely, given that developing launch vehicles is an inherently complex and risky endeavor. For example, we reported in 2005 that the Air Force’s Delta IV Heavy Lift Vehicle’s first operational flight was delayed 6 months, due in part to design problems discovered in testing. Likewise, according to NASA, vehicle histories from SpaceX, Orbital, and United Launch Alliance indicate that the average delay in the third successful launch of a new vehicle is 31 months from the manifested date of launch. 
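The certification budgeting arithmetic discussed earlier in this section lends itself to a simple illustration: roughly $10 million in assessment costs plus roughly $15 million in potential technical fixes per vehicle, charged to the first mission or amortized over the first several missions, with a contingency reserve held for unknowns as GAO’s Cost Estimating Guide suggests. The short sketch below shows only that arithmetic; the amortization choices and the 30 percent contingency rate are illustrative assumptions, not NASA or GAO figures.

    # Hypothetical sketch of per-vehicle certification cost exposure and a
    # contingency reserve; amortization choices and the contingency rate are
    # illustrative assumptions, not NASA budget figures.
    ASSESSMENT_COST = 10_000_000     # roughly $10 million to conduct certification assessments
    TECHNICAL_FIX_COST = 15_000_000  # roughly $15 million to resolve technical issues and upgrades
    CONTINGENCY_RATE = 0.30          # assumed reserve for unknowns

    def per_mission_charge(missions_amortized_over: int) -> float:
        """Certification cost charged to each of the first N missions, plus reserve."""
        base = (ASSESSMENT_COST + TECHNICAL_FIX_COST) / missions_amortized_over
        return base * (1 + CONTINGENCY_RATE)

    # Example: charge the full cost to the first mission, or spread it over three.
    for n in (1, 3):
        print(f"Amortized over {n} mission(s): ${per_mission_charge(n):,.0f} per mission")

Either way, the placeholder figures above would need to be replaced with the detailed, vehicle-specific estimates the report recommends before a mission budget is committed.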
The contractors for Falcon 9 and Taurus II have not yet been awarded any task orders for science missions; therefore, the formal certification process for each has not begun. Consequently, the schedule and budget of any science mission that is assigned to one of these vehicles could be negatively impacted if delays occur in the certification process. While NASA expects these vehicles will eventually become a viable option for medium class science missions, it is uncertain how long the process might take. Neither SpaceX nor Orbital currently has a high-inclination launch site option for its medium class vehicle, yet the majority of NASA’s Earth science missions require such a site due to the high inclination required to achieve a polar orbit. Launches from the East Coast of the United States are suitable only for low-inclination orbits because major population centers underlie the trajectory required for high-inclination launches. High-inclination launches are accomplished from the West Coast because the flight trajectory avoids populated areas. Orbital is conducting a site selection survey and its West Coast options include Kodiak, Alaska; Space Launch Complex 2 at Vandenberg Air Force Base, California; and Space Launch Complex 8, also at Vandenberg Air Force Base, California, which Orbital currently uses to launch the Minotaur. According to Orbital officials, the site selection decision is expected in 2011, with the site ready for operations as early as 2014. According to SpaceX officials, the company’s plans are under way to secure a Falcon 9 launch site at Vandenberg Air Force Base for high-inclination launches. This capability is planned to be ready for operation by late 2012. However, if the launch sites are not available when needed, NASA’s planned science mission manifest could be negatively impacted, as 12 of the 14 medium class science missions planned through 2020 that do not yet have assigned launch vehicles require a high-inclination launch. NASA science missions requiring a medium class launch vehicle that are approaching their preliminary design review face uncertainties related to committing to as-yet uncertified and unproven launch vehicles. The preliminary design review marks the point at which it is demonstrated that the preliminary design meets system requirements with acceptable risk and within cost and schedule constraints, and establishes the basis for proceeding with detailed design. Shortly after the preliminary design review, a project establishes its commitment baseline, which documents the project’s estimated cost and schedule. From this point on, almost all changes to baselines are expected to represent successive refinements, not fundamental changes. NASA program managers indicated that the launch vehicle of a science mission should be assigned by the preliminary design review to allow the science mission design team to optimize the spacecraft based on the operational characteristics of the launch vehicle. A number of NASA science missions are approaching the preliminary design review; therefore, decisions need to be made about the launch vehicle for these missions. However, as indicated by figure 6, some decisions will have to be made before either the Falcon 9 or Taurus II is certified for science missions.
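In practical terms, the uncertainty described above is a gap check: a mission nearing its preliminary design review must commit to a vehicle whose projected certification and launch-site readiness may trail the mission’s planned launch date. The sketch below illustrates that comparison; the dates are rough approximations loosely keyed to the report’s discussion and should be read as hypothetical, not as NASA planning data.

    # Hypothetical gap check between a mission's planned launch date and a
    # candidate vehicle's projected availability (certification plus launch site).
    # All dates are illustrative approximations, not NASA planning figures.
    from datetime import date

    candidate_vehicles = {
        # vehicle: (projected category 3 certification, projected high-inclination site readiness)
        "Falcon 9": (date(2014, 1, 1), date(2012, 12, 31)),
        "Taurus II": (date(2015, 1, 1), date(2014, 6, 30)),
    }

    def availability(vehicle: str) -> date:
        """A vehicle supports a high-inclination science mission only after both
        certification and launch-site readiness are projected to be complete."""
        certified, site_ready = candidate_vehicles[vehicle]
        return max(certified, site_ready)

    def schedule_risk_days(planned_launch: date, vehicle: str) -> int:
        """Days by which projected availability trails the planned launch date
        (positive values indicate schedule risk)."""
        return (availability(vehicle) - planned_launch).days

    # Example: a hypothetical mission planned for launch in late 2014.
    print(schedule_risk_days(date(2014, 10, 1), "Falcon 9"))   # negative: projected available first
    print(schedule_risk_days(date(2014, 10, 1), "Taurus II"))  # positive: projected availability lags

A mission team could run this kind of comparison for each candidate vehicle at the preliminary design review and then decide whether to carry contingency funding, design to multiple vehicles, or accept the schedule risk.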
The Soil Moisture Active and Passive (SMAP), Joint Polar Satellite System (JPSS-1), and Ice, Cloud, and land Elevation Satellite (ICESat-2) missions are approaching their preliminary design reviews and are the first three missions requiring a medium capability for which a Falcon 9 could potentially be selected for launch services. Falcon 9 had a successful first flight in June 2010 and could potentially be certified as a category 3 vehicle by late 2013 or early 2014. NASA is planning for the imminent release of a request for launch service proposals for the SMAP mission and tentatively plans to issue requests for proposals for the JPSS and ICESat-2 missions in spring 2011. If Falcon 9, the only medium class launch vehicle currently available under the Launch Services II contract, is selected for any of these missions, the mission launch date will be tied to a successful certification of the Falcon 9 launch vehicle. Because the preliminary design review establishes the basis for proceeding with detailed design, according to NASA officials, any changes to accommodate a new launch vehicle after the preliminary design review are fundamental changes and rarely, if ever, occur. Therefore, NASA’s intention is to select a launch vehicle and accept any delays and residual cost increases to the science mission associated with delays in the certification process. According to NASA officials, changing the planned launch vehicle of a science mission after its preliminary design review is a fundamental change to the mission design and would lead to significant cost growth and schedule delays. As figure 6 illustrates, several NASA missions require a launch vehicle decision prior to the certification of Falcon 9. While NASA expects that Falcon 9 could be certified to a category 3 prior to the planned launch dates of these missions, given the relative immaturity of the launch vehicle and the likelihood of further delays, the schedule for these missions could be at risk if the Falcon 9, or any other unproven launch vehicle, is selected. NASA officials indicated that science missions within the next few years might be asked to design to accommodate multiple launch vehicle possibilities if the availability of future vehicles is delayed or until the task order is issued for the particular mission. Science Mission Directorate officials indicated that while designing to accommodate multiple launch vehicles is possible, the practice is cumbersome, especially when continued beyond the preliminary design review. Under this type of design scenario, every decision is constrained to the worst case performance characteristic of the competing vehicles. Consequently, overall mission effectiveness is reduced, because benefits associated with a particular vehicle are traded away to design to the lesser set of capabilities of another vehicle. Thus, if the less constrained vehicle is chosen, that capability is left unused. Ultimately, the scientific benefit of the planned mission is reduced, because the science payload may have to be adjusted to accommodate reduced launch capability. NASA is taking an appropriate approach to help ensure the success of the remaining Delta II missions by adequately addressing workforce, support, and launch infrastructure risks. Nevertheless, an affordable and reliable medium launch capability is critical to NASA meeting its scientific goals. 
NASA has a plan in place for obtaining this capability through Orbital and SpaceX’s vehicles, but past experience with other development programs and recent history with both vehicles indicate that maturing and certifying these vehicles for use by science missions is likely to prove more difficult and costly than currently anticipated. If the companies are not successful in delivering, in a timely manner, reliable and cost-effective upgraded launch vehicles that can be used for NASA science missions, NASA will lack an affordable domestic launch capability in the medium performance vehicle class and could be forced to use more costly or time-consuming options. Further, costs associated with addressing any issues discovered during the certification process and resulting from the need to delay missions or use other alternatives will require trade-offs to be made that will likely impact the number of science missions the agency can afford. Given the likelihood of delays and additional costs associated with developing and fielding a medium class launch vehicle fully certified for science missions and the implications for funding available to support science missions, we recommend that, as LSP gains a more complete understanding of the detailed designs and actual performance of the Falcon 9 and Taurus II, the NASA Administrator require NASA’s Science Mission Directorate—in conjunction with NASA’s Space Operations Mission Directorate—to perform a detailed cost estimate to determine the likely costs of certification and the trade-offs required to fund these costs. This estimate should at a minimum examine the need for funds to resolve technical issues with the Falcon 9 and Taurus II launch vehicles discovered through the certification process. The estimate should also examine the costs associated with delaying science missions, if necessary, until launch vehicles are available, or with contingencies such as selecting more costly or time-consuming launch options. Given that NASA’s Science Mission Directorate could have to fund additional significant costs for certification and the use of contingencies, we recommend that the NASA Administrator require that the costs identified through developing the detailed cost estimate be adequately budgeted for and identified by the Science Mission Directorate. Until these costs are better understood, however, we recommend that the NASA Administrator require the Science Mission Directorate to identify and budget for additional contingency funding for projects that require a medium launch capability vehicle, are approaching their preliminary design review prior to certification of Falcon 9 and Taurus II, and could be impacted by additional costs associated with certification of these vehicles, including the need to address technical issues and absorb delays in the certification process. In written comments on a draft of this report (see app. II), NASA concurred with our recommendations. NASA acknowledged the risks associated with its transition strategy for medium class launch vehicles and recognized the importance of developing detailed cost estimates, budgeting for known costs, and identifying and budgeting additional contingency funding for unknown costs. NASA stated that the Space Operations Mission Directorate will develop detailed estimates of the costs to certify the new vehicles as well as to resolve technical issues during certification, and the Science Mission Directorate will estimate the costs for its missions if certification is delayed.
Based on these estimates, the Science Mission Directorate will appropriately budget for certification costs and potential contingencies in future budget cycles. Separately, NASA provided technical comments, which have been addressed in the report, as appropriate. We will send copies of the report to NASA’s Administrator and interested congressional committees. The report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or at ChaplainC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

To examine the National Aeronautics and Space Administration’s (NASA) and United Launch Alliance’s steps to ensure resources (budget, workforce, and facilities) are available to support safe Delta II operations through the last planned NASA flight, we interviewed NASA Launch Services Program (LSP) officials and United Launch Alliance program officials and reviewed their launch vehicle transition plans. We obtained contract documents, launch manifests, risk information sheets, and engineering review board documentation from LSP to examine NASA’s planned contracting and technical approach for managing NASA’s remaining Delta II missions. We also compared NASA’s transition strategy to NASA and national space policies. We reviewed United Launch Alliance’s processes for certifying its workforce for processing and manufacturing, launch manifests, market projections, cost estimates, workforce estimates, and launch infrastructure maintenance needs through the last planned NASA Delta II flight in October 2011. We also visited Space Launch Complex 17B at Cape Canaveral Air Force Station, Florida, and visually inspected ongoing efforts to maintain Delta II launch capability through the last planned Delta II flight from this facility in 2011 and interviewed relevant NASA and contractor personnel at the launch complex regarding their maintenance efforts. To examine NASA’s plans and contingencies for ensuring a smooth transition from current small and medium class launch vehicles to other launch vehicles for future science missions, we interviewed relevant program officials and obtained and reviewed agency documents related to their transition plans. We interviewed officials within NASA’s Exploration Systems Mission Directorate, Space Operations Mission Directorate, and Science Mission Directorate regarding these plans. We also discussed these plans with NASA’s Office of Inspector General. We further interviewed officials from Orbital Sciences Corporation and Space Exploration Technologies to discuss their plans for certifying their launch vehicles, which are currently being designed to support the Commercial Resupply Services contract for future medium class science missions. We reviewed the launch providers’ launch vehicle manifests and launch vehicle histories. We compared the agency’s plans for certifying these vehicles to relevant NASA policy directives, risk mitigation strategies, U.S. law, and National Space Policy. We also examined how the agency’s certification requirements have evolved to facilitate transition to future launch services providers.
To examine the risks associated with NASA’s planned approach to fill the medium launch capability gap, we interviewed officials with NASA’s Launch Services Program and identified and analyzed risks and their accompanying mitigation strategies. We interviewed NASA Science Mission Directorate, Space Operations Mission Directorate, and contractor officials responsible for the Falcon 9 and Taurus II development programs; determined where those programs are in the development process; and obtained their estimates of when these vehicles might be ready to launch science missions. We also reviewed prior GAO reports and identified risks common to all spacecraft development efforts. To examine technical and programmatic implications to science missions if NASA commits to new launch vehicles before they are certified and proven, we reviewed NASA’s systems engineering policy and interviewed officials with NASA’s Science Mission Directorate, NASA science mission project managers, and the Launch Services Program and discussed potential cost and schedule effects of committing to unproven launch vehicles. We conducted this performance audit from March 2010 to November 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Shelby S. Oakley, Assistant Director; Dr. Timothy M. Persons, Chief Scientist; Morgan Delaney Ramaker; Laura Greifner; Kristine R. Hassinger; Carrie W. Rogers; Roxanna T. Sun; and John S. Warren Jr. made key contributions to this report.
The National Aeronautics and Space Administration (NASA) has long relied on the Delta II medium class launch vehicle to launch science missions. Delta II, however, is no longer in production, and no other vehicle in the relative cost and performance range is currently certified for NASA use. Thus, NASA faces a potential gap in the availability of medium class launch vehicles that could cause design challenges, delays, or funding issues. GAO was asked to assess (1) NASA's and the Delta II contractor's steps to ensure resources (budget, workforce, and facilities) are available to support safe Delta II operations through the last planned NASA flight in 2011; (2) NASA's plans and contingencies for ensuring a smooth transition from current small and medium class launch vehicles to other launch vehicles for future science missions; (3) the risks associated with NASA's planned approach to fill the medium launch capability gap; and (4) technical and programmatic implications to science missions if NASA commits to new launch vehicles before they are certified and proven. GAO identified and assessed transition plans and mitigation activities and interviewed responsible NASA and government officials. NASA's Launch Services Program (LSP) is taking steps to address risks and ensure the success of the last planned Delta II-launched missions through a combination of specific government approvals and targeted government insight into contractor activities and designs. For example, LSP uses government systems engineers with technical expertise to review or repeat the contractors' engineering analyses. This is a key factor in high launch success rates. From 1990 through 2009, LSP achieved a 98 percent launch success rate. LSP is conducting additional reviews of launch vehicle processing to mitigate risk associated with the remaining Delta II flights. LSP has also identified several specific areas of concern with the remaining Delta II flights--including contractor workforce expertise, postproduction subcontractor support, spare parts, and launch pads--and is taking steps where possible to mitigate risks and ensure the success of the remaining missions. NASA plans to leverage ongoing investments to acquire a new medium launch capability for science missions in the relative cost and performance range of the Delta II. The agency expects to eventually certify the vehicles being developed for space station resupply for use by NASA science missions. NASA has been in coordination with agency and contractor officials responsible for these efforts. Further, the agency revised its policy to allow for faster certification of new providers. Due to an active small class launch vehicle market and NASA's relatively low need for vehicles in this class, the agency has no plans to develop additional small class launch vehicles. Rather, the agency will acquire these services through the NASA Launch Services II contract. NASA's plan has inherent risks that need to be mitigated. NASA has not developed detailed estimates of the time and money required to resolve technical issues likely to arise during the launch vehicle certification process. As these costs are currently unknown, according to Science Mission Directorate officials, NASA has not yet budgeted for them. Further, both space station resupply vehicles have experienced delays and more delays are likely as launch vehicle development is an inherently risky endeavor.
Neither potential provider currently has the facilities needed to launch the majority of NASA earth science missions requiring a medium capability. NASA medium class science missions that are approaching their preliminary design review face uncertainties related to committing to as-yet uncertified and unproven launch vehicles. Launch vehicle decisions for these missions will be made before new vehicles are certified. Because changing the launch vehicle of a science mission after its preliminary design review is likely to lead to significant cost growth and schedule delays, NASA's intention is to select a launch vehicle and accept the impacts that any delays in the certification process could have on the cost and schedule of the science mission. NASA officials also indicated that future science missions might be asked to accommodate multiple launch vehicle possibilities if the availability of future vehicles is delayed. GAO recommends that NASA perform a detailed cost estimate based on knowledge gained during launch vehicle certification and adequately budget for potential additional costs. NASA concurred.
Security at the nation’s 424 federal courthouses is overseen by the Federal Protective Service (FPS), the U.S. Marshals Service, the General Services Administration (GSA), and the Administrative Office of the United States Courts (AOUSC). Federal statutes and interagency agreements define these stakeholders’ roles and responsibilities for the protection and security of federal courthouses and persons within courthouses. FPS is the primary federal agency responsible for patrolling and protecting the perimeter of GSA-controlled facilities, including facilities housing federal court functions, and for enforcing federal laws and regulations in those facilities. Specifically, FPS has the authority to enforce federal laws and regulations aimed at protecting federally owned and leased properties and the persons on such property. FPS conducts its mission by providing security services through two types of activities: (1) physical security activities, such as conducting risk assessments of facilities and recommending risk-based countermeasures aimed at preventing incidents at facilities; and (2) law enforcement activities, such as responding to incidents, conducting criminal investigations, and exercising arrest authority. FPS charges customer agencies, such as the judiciary and Marshals Service, fees for the security services FPS provides. FPS charges federal agencies three fees: (1) a basic security fee, (2) a building-specific administrative fee, and (3) a security work authorization administrative fee. All customer agencies in GSA-controlled properties pay the basic annual security fee. Customer agencies in facilities for which FPS recommends specific countermeasures pay the building-specific administrative fee, along with the cost of the countermeasures. Customer agencies that request additional countermeasures pay the security work authorization administrative fee, along with the cost of the countermeasures. The Marshals Service, by law, has primary responsibility for the security of the federal judiciary, including the safe conduct of court proceedings and the security of federal judges and court personnel, whether in primary courthouses (where judicial and judicial-related space comprises at least 75 percent of the building) or in multitenant facilities. The Marshals Service is divided into 94 districts (with one U.S. Marshal for each district) to correspond with the 94 federal judicial districts. Security of federal courthouses is administered by the Marshals Service’s Judicial Security Division, whose mission is to ensure the safe and secure conduct of judicial proceedings and provide protection for federal judges, U.S. Attorneys, Assistant U.S. Attorneys, jurors, and other members of the federal court family. The Office of Courthouse Management has responsibility for, among other things, physical security and construction of all Marshals Service office, support, and special-purpose space. The Marshals Service receives both direct appropriations and funding transferred from the judiciary for its courthouse security activities. Judicial Services has oversight of programs funded by the AOUSC court security appropriation. This funding provides for the Court Security Officer (CSO) program, security equipment, and systems for space occupied by the judiciary and for Marshals Service employees. As the federal government’s landlord, GSA designs, builds, manages, and safeguards federal buildings, including courthouses. Under the Homeland Security Act of 2002, FPS was transferred to the Department of Homeland Security (DHS) along with FPS’s responsibility to perform law enforcement and related security functions for GSA buildings.
However, GSA retained some responsibilities related to courthouse security. GSA continues to provide proposed plans for new construction and renovation of court space and for the installation of additional security systems and other security measures, such as fencing, lighting, and locks on doors. In response to a recommendation we made in 2005 and to enhance coordination with FPS, GSA established in 2006 the Building Security and Policy Division within the Public Buildings Service, where FPS once resided, to manage its security policy and implementation efforts, including its dealings with FPS. Additionally, GSA’s Center for Courthouse Programs is responsible for nationwide policy formulation and general management of new federal courthouse construction and the modernization of existing courthouses. In the judicial branch, both the Judicial Conference, which is the judiciary’s principal policy-making body concerned with the administration of the U.S. courts, and AOUSC, which is the central administrative support entity for the judicial branch, play a role in courthouse security. The Judicial Conference’s Committee on Judicial Security coordinates security issues involving the federal courts. For example, the committee monitors the protection of court facilities and proceedings, judicial officers, and court staff at federal court facilities and other locations, and makes policy recommendations to the Judicial Conference. Appropriations for security can be provided directly to the courts or transferred to the Marshals Service, which is responsible for administering judicial security consistent with the standards or guidelines agreed to by the Director of AOUSC and the Director of the Marshals Service. This represents a collaborative effort between the federal judiciary and DOJ to assist in securing the judicial process. Additionally, at the district level, federal judges have responsibilities for securing courthouses. For example, the court has the authority to, among other things, issue rules or orders regulating, restricting, or prohibiting items within or near the perimeter of any facility that has a courthouse. We have previously identified key practices both for enhancing collaboration among federal agencies and for facility protection. Regarding collaboration, we have identified a number of factors, such as leadership, trust, and agreement on roles and responsibilities, that are key to facilitating an effective collaborative relationship. We have used these practices in our prior work to evaluate collaboration between FPS and tenants in federal facilities and the Transportation Security Administration’s efforts to secure commercial airports, for example. We have also identified facility protection key practices from the collective practices of federal agencies and the private sector to provide a framework for guiding agencies’ protection efforts and addressing challenges. We have used the key practices to evaluate, for example, the efforts of FPS in protecting federal facilities, the Smithsonian Institution in protecting its museums, and the National Park Service in protecting national icons such as the Statue of Liberty. Courthouses—which house judicial proceedings and which many view as symbols of democracy and openness—have faced increasing security risks. DOJ data on potential threats against court personnel and individuals involved in the judicial process show a steady rise in recent years, heightening concerns about securing federal courthouses. (See figure 1.)
AOUSC recognizes the symbolic nature of courthouses, and has stated that access to the courts is a core value in the American system of government and that courthouses are important symbols of the federal government in communities across the country. The Interagency Security Committee (ISC) ranks U.S. circuit, district, and bankruptcy courthouses as “very high”—the highest security level—because they are prominent symbols of U.S. power or authority. According to ISC, symbolic targets are attractive to foreign terrorists, as well as domestic antigovernment radicals. For example, case-related or antigovernment protests and demonstrations can occur outside courthouses, increasing possible security risks to the courthouses. At one location we visited, a court official stated that, in one instance in March 2003, protesters outside the courthouse who were trying to avoid arrest broke a window and entered the official’s office, potentially putting the official and others at risk. With the increased number of potential threats against courthouses in recent years, the Marshals Service reports that threats from extremist groups exist, particularly at courthouses in certain locations. For example, at one courthouse we visited, FPS officials expressed concern about the presence of antigovernment persons and groups within the district. In part to better protect the courthouse from any person or group that might attack it, an antivehicle barrier—an area of raised terrain followed by an excavated area lined with rock that is not visible from a distance—was constructed between the courthouse and the road. Figure 2 illustrates the antivehicle barriers constructed to protect the courthouse. In addition to the symbolism of courthouses, the wide variety of civil and criminal cases that come before the federal judiciary includes some that can pose increased security risks to federal courthouses, such as those involving domestic and international terrorism, domestic and international organized crime, extremist groups, gangs, and drug trafficking. At courthouses with these types of cases, federal stakeholders have implemented additional security measures. For example, at one courthouse we visited, the Marshals Service stated that it implemented countermeasures to increase security in preparation for a major terrorist trial. These countermeasures included closing streets near the courthouse during the trial, erecting additional barriers, and having a uniformed guard presence 24 hours a day, 7 days a week. The location of courthouses can contribute to security issues. In particular, court officials at one courthouse located near the Southwest border told us that they deal with a large number of immigration cases and cases that involve drug trafficking organizations. According to these officials, they can also have hundreds of defendants who have not previously been involved in U.S. court proceedings, making it difficult to obtain information on them.
One judge at the courthouse noted that the Marshals Service took additional security measures, such as additional training and increased CSO presence as the judge entered and left the courthouse parking garage, to protect him after the Marshals Service received information from an informant that a drug-trafficking organization had threatened violence against the judge. Further, according to the judiciary, criminal cases related to immigration offenses jumped nearly 60 percent from about 17,000 in 2006 to about 27,000 in 2010, and the number of defendants in those cases rose by about 55 percent over the same period to about 28,000 defendants. According to the AOUSC, the growth in immigration cases is mostly from filings addressing improper reentry by aliens and involving fraud and misuse of visa or entry permits in the five federal judicial districts located along the U.S. Southwest border. The Marshals Service noted that defendants can be violent or have extensive criminal histories. According to AOUSC, courthouses can play a significant role in urban redevelopment efforts. Because of this, courthouses can be located in areas with higher crime rates, increasing risks at those buildings. For example, at one courthouse we visited, Marshals Service officials told us they had concerns about the neighborhood in which the courthouse was located because of crime, building disrepair, and suspicious activity occurring on properties near the courthouse. Officials noted that after receiving a report on a suspicious individual who could potentially be a threat to the courthouse, a CSO identified the person of interest moving barrels into a house adjacent to the courthouse. Marshals Service officials were concerned the barrels contained hazardous or explosive materials and coordinated with local law enforcement to investigate. Although the barrels were found to contain harmless materials, Marshals Service officials remained concerned about potential future security risks and worked with local authorities to require the landlord to maintain the house and yard or have the house demolished, which addressed the Marshals Service’s concerns. Security has become an important element considered by federal stakeholders in the design and construction of new courthouses. In addition to the life-safety and health concerns common in all buildings, federal courthouses must adhere to numerous specific design guidelines for aesthetics, security, interior circulation, barrier-free access, and mechanical and electrical systems, among other things. According to courthouse design documents, federal stakeholders should consider security measures from the beginning of the design process for new courthouses by, for example, integrating security considerations with other building system controls, such as for fire safety and air circulation. Specifically, the U.S. Courts Design Guide notes that courthouse security is complex because court operations and movement patterns for different groups of individuals within courthouses—such as prisoners, judges, court personnel, and the public—require varying degrees of security. The guide notes that optimal security is a fine balance between architectural solutions, allocation of security personnel, and installation of security systems and equipment. Although courthouse design has evolved in the last 20 years to address modern security needs, the infrastructure of the nation’s 424 courthouses varies widely. According to GSA, 146 courthouses—about one-third—are historic facilities.
Under the National Historic Preservation Act (NHPA), as amended, federal agencies are to use historic properties to the maximum extent feasible, and when making infrastructure changes or rehabilitating a property, to retain and preserve the historic character of the property. At one courthouse—considered a historic facility under NHPA—Marshals Service officials told us they identified the judge’s parking lot as a potential security vulnerability because the area has one opening for entry or exit, which could allow for an attack on a judge, and that a new guard post should be constructed to help mitigate that vulnerability. The security committee for the courthouse approved the project, but could not begin the project until a consultant from the state Architectural Board approved the design. Marshals Service officials told us that the process took a long time, in part, because the guard post had to be designed to blend with, and not detract from, the historic façade and not interfere with the original gate to the courthouse. According to the officials, the project was scheduled to be completed 4 years after it was initially approved at an additional cost of approximately $20,000. Furthermore, historic or aging buildings may not be able to support, or may make it more difficult to implement, recommended physical security enhancements such as barriers or setbacks from the street. For example, in order to reduce the risk of a car bomb exploding close to the two historic courthouses we visited, the Marshals Service had restricted street parking and only allowed the Marshals Service or other court personnel to park alongside the building. Making security changes to a historic or aging building itself can also be challenging. For example, Marshals Service officials at one historic courthouse stated that judges, prisoners, and the public currently use the same hallways. Marshals Service officials stated that there is a need for a dedicated judges’ elevator and secured prisoner hallways to move prisoners through separate areas. The U.S. Courts Design Guide notes that an essential element of security design is the physical separation of public, restricted, and secure circulation systems and that trial participants should not meet until they are in the courtroom during formal court proceedings. Marshals Service officials stated that the most effective way to address this vulnerability would be to add another elevator to the building solely for transporting prisoners, but doing so would be difficult given the building’s age and historic designation. A Marshals Service official stated that retrofitting the building with more substantial permanent barriers would be difficult and expensive because of the need to comply with NHPA. Of the nation’s 424 federal courthouses, AOUSC reports that 201 share a building with other federal agencies that, together, occupy more than 25 percent of the building. According to AOUSC, the other 223 courthouses are considered primary courthouses, meaning court space comprises at least 75 percent of the building. AOUSC officials stated that they designate facilities as primary courthouses for the purpose of security management. Officials from sites we visited told us it is generally the chief judges who make security-related decisions, and the judiciary pays for enhancements.
The 201 courthouses located in multitenant facilities operate along with other agencies in the building (e.g., the Internal Revenue Service, Department of Health and Human Services, Department of Agriculture) and face additional challenges not encountered by primary courthouses, since they must coordinate their security operations with the other federal tenants. Federal stakeholders, including FPS and the Marshals Service, have taken steps to strengthen their collaboration for securing courthouses. However, stakeholders have faced challenges related to fragmented implementation of roles and responsibilities, limitations in the use of or participation in existing collaboration mechanisms, and lack of clarity regarding GSA’s roles and responsibilities for courthouse security. Further, while federal stakeholders have taken steps to assess risks facing federal courthouses, they have not completed risk assessments as required by their own guidance and consistent with key practices for facility security. Various interagency agreements designate courthouse security roles and responsibilities for federal stakeholders, including FPS, the Marshals Service, GSA, and the judiciary. Our work on effective interagency collaboration has shown that collaborating agencies should work together to define and agree on their respective roles and responsibilities, including how the collaborative effort should be led. This allows agencies to clarify who will do what, organize their joint and individual efforts, and facilitate decision making. One mechanism for doing so is an interagency agreement. The key interagency agreement for courthouse security is a 1997 MOA between the Marshals Service, GSA, and AOUSC. The MOA designates specific roles and responsibilities for each of these entities with regard to protecting federal courthouses and sets forth the federal framework for securing courthouses. This MOA was reaffirmed in 2004 to acknowledge the transfer of FPS from GSA to DHS, with DHS assuming those responsibilities in the MOA that FPS formerly performed under GSA. Table 1 summarizes each federal stakeholder’s primary security responsibilities, as designated in the MOA. In addition to identifying specific roles and responsibilities for each federal stakeholder in protecting federal courthouses, the MOA recognizes areas in which the Marshals Service, FPS, and AOUSC are to coordinate their security efforts. For example, the Marshals Service is to coordinate its activities to control access to space housing judicial personnel with FPS, and is to report to FPS and cooperate in FPS investigations of crimes committed in GSA-controlled facilities housing federal courts. FPS is to coordinate occupant emergency plans with the judiciary and Marshals Service and review any Marshals Service-proposed plans for new construction or renovation projects to determine facility perimeter security needs. Further, AOUSC is to provide the Marshals Service with space acquisition requests for the judiciary to ensure security systems are included in plans. In recent years, federal stakeholders have taken various actions to strengthen their collaborative efforts to secure courthouses. For example, at the headquarters level, FPS established a position in 2007 to liaise between FPS and the Marshals Service on court security issues. This liaison serves as the focal point for FPS and the Marshals Service to raise and resolve issues with each other.
According to Marshals Service and FPS officials, the liaison has helped to strengthen coordination and communication between the two agencies on court security. Also, at the headquarters level, the Judicial Conference’s security committee meets twice a year with the Marshals Service Director and usually several additional times per year, as needed, with Marshals Service executive staff to, among other things, discuss threats to courthouse security. GSA also sponsors events three or four times a year where all federal tenants discuss portfoliowide issues—one of which is security. According to GSA officials, FPS, the Marshals Service, the judiciary, and other stakeholders, such as the U.S. Attorney’s Office, are invited to attend. ISC also holds quarterly meetings, which include representatives from FPS, the Marshals Service, judiciary, and GSA. According to GSA officials, they use these meetings to coordinate and address court security issues. At the courthouse level, federal stakeholders have established committees for coordinating their security activities. These committees include Court Security Committees (CSCs) and Facility Security Committees (FSCs). According to the 1997 MOA, the Marshals Service is to establish a CSC in each judicial district composed of representatives from the Marshals Service, clerk of the court, the U.S. Attorney, chief judge, FPS, and GSA, as appropriate. FSCs typically exist at multitenant facilities, where the courts are one of various federal tenants. FSCs consist of a representative from each of the tenant agencies in the facility, and are responsible for addressing security issues at their respective facility and approving the implementation of security countermeasures. Depending on the district or individual courthouse, there can be either a CSC, an FSC, or both. In addition to these committees, coordination occurs at individual courthouses, as stakeholders implement their security roles and responsibilities. For example, officials told us that they typically coordinate with each other on an as-needed basis on cases or demonstrations that draw large crowds. Although federal stakeholders have defined their courthouse security roles and responsibilities and taken steps to strengthen their coordination, various challenges have affected stakeholders’ efforts to secure courthouses. Specifically, we identified three main challenges federal stakeholders face in securing courthouses: (1) fragmentation in stakeholders’ efforts to implement their security roles and responsibilities; (2) limitations in the use of or participation in existing collaboration mechanisms; and (3) lack of clarity on GSA’s roles and responsibilities for courthouse security. Our prior work on interagency collaboration has shown that when multiple agencies are working to address aspects of the same problem, there is a risk that overlap or fragmentation among programs can waste scarce funds, confuse and frustrate program customers or stakeholders, and limit overall program effectiveness. Federal stakeholders’ efforts to implement their security roles and responsibilities at courthouses have been subject to fragmentation, as shown by stakeholders’ dissatisfaction with the dual approach to security and, at select courthouses, duplication in security efforts or stakeholders’ performance of security roles inconsistent with their responsibilities identified under the MOA.
First, according to AOUSC and other stakeholders, the federal government’s approach to courthouse security in which the Marshals Service and FPS both provide security services has resulted in a bifurcated security environment with two lines of authority for implementation and oversight of security services. Additionally, the chair of the Judicial Conference Committee on Judicial Security has stated that the current approach to court security has resulted in two separate lines of authority, or chains of command, which, in his view, diminishes the effective command and control over all components of the security program. Further, a key GSA management official involved with courthouse security told us that having one clear line of authority for courthouse security would have the advantage of reducing coordination challenges between FPS and the Marshals Service. FPS officials also told us that having multiple agencies responsible for courthouse security can be problematic because of overlapping jurisdiction, and that there is a need for clear lines of authority. Second, officials at 5 of the 11 courthouses we visited brought to our attention examples of fragmentation—either duplication of security efforts or stakeholders’ performance of security roles inconsistent with responsibilities identified under the MOA. With regard to duplication, at one courthouse we visited, for example, both FPS and the Marshals Service had cameras pointed at the courthouse lobby. Marshals Service, FPS, and judiciary officials told us they considered this redundant. Marshals Service officials said that they installed cameras in the courthouse lobby because of past experience in which FPS cameras broke and took months to repair or replace, creating security vulnerabilities. At two other courthouses we visited, FPS and Marshals Service stakeholders each had their own staffed control rooms to monitor their cameras and alarms, and in some cases, each other’s cameras. At one of these courthouses, the Marshals Service reported that having two control rooms was redundant. Further, at two other courthouses, stakeholders noted that the Marshals Service was performing duties ascribed to FPS in the 1997 MOA without written agreements documenting these changes, as specified below:

• FPS and Marshals Service officials from one of the courthouses we visited told us that while FPS conducted perimeter security for the facility, the Marshals Service provided security at one checkpoint—a delivery ramp—which FPS did not staff. Neither Marshals Service nor FPS officials identified a specific reason why the Marshals Service performed these exterior perimeter security activities rather than FPS. A Marshals Service official told us this situation was a concern because the Marshals Service had to move staff from another courthouse in the area to provide staff for the delivery ramp, which the Marshals Service viewed as being a higher priority than security needs at the other courthouse. However, this movement of staff resulted in the other courthouse having fewer staff than the authorized staffing level, and a Marshals Service security review identified this as a critical concern. The officials told us they were in the process of developing an MOA to address this issue.

• Marshals Service officials at another courthouse we visited monitored FPS’s five cameras at the courthouse, and the perimeter security functions FPS performed were occasional patrols around the exterior of the courthouse.
Marshals Service officials told us that it did not make sense for FPS to have responsibility for the perimeter cameras because there was no regular FPS presence at the building and that the Marshals Service could monitor, repair, and replace those cameras more quickly as a result. Both FPS and Marshals Service officials told us that they had proposed that the Marshals Service take over responsibility for all perimeter cameras, but FPS headquarters denied this request. FPS headquarters officials stated that the Marshals Service carries out the same types of duties as FPS at selected courthouses, but the officials did not provide a rationale or guidelines for when this arrangement would be appropriate. In situations like these, if a transfer of responsibilities is agreed to by FPS and the Marshals Service, having a local MOA outlining these responsibilities could help ensure greater accountability, clarity, and transparency. In discussing the prospect of developing a local MOA outlining FPS and Marshals Service responsibilities, Marshals Service officials at one courthouse brought to our attention that another district, which was not one of our visited locations, had an MOA addressing these responsibilities. This local MOA outlined changes in responsibilities for perimeter security. Specifically, the Marshals Service agreed to be responsible for perimeter security at this federal building and courthouse, and FPS agreed to reimburse the Marshals Service for security services. In addition to fragmentation of roles and responsibilities, federal stakeholders have not always used or participated in existing collaboration mechanisms, particularly security committees. We have previously reported that information sharing and coordination among organizations is crucial to addressing threats, and having a process in place to obtain and share information can help agencies better understand risk and more effectively determine what preventive measures should be implemented. CSCs and FSCs are intended to provide a means for federal stakeholders to discuss and coordinate their court security activities at the local level. CSCs are also responsible for addressing security countermeasures recommended by the Marshals Service or FPS, and FSCs are responsible for addressing security countermeasures recommended by FPS. At 4 of the 11 courthouses we visited, officials noted that one of the federal agencies on the CSCs or FSCs did not regularly participate in meetings. Specifically, at 3 of the courthouses, FPS did not regularly participate in CSC meetings, though FPS was designated as a member of the committees and was responsible for perimeter security at these courthouses. FPS officials told us they had not been notified about CSC meetings and, as a result, did not participate in the meetings. At 1 courthouse, GSA did not regularly participate in CSC meetings, though GSA was designated as a member of the committee. At this location, GSA officials told us that they believed that issues discussed at the CSC meetings did not apply to them and, thus, did not attend. Without attending these meetings, agencies may be missing opportunities to share information and coordinate with stakeholders so that security risks are better understood and addressed. Information sharing and coordination are key practices in facility protection that we have identified, and these committees are intended to serve this purpose in the courthouse security area.
DOJ’s Office of Inspector General (IG) also found participation problems related to the security committees. In November 2010, the IG reported that among six judicial districts visited, one did not have a CSC and another was not holding regular meetings. The Chief Judge in the latter district stated that the CSC was generally not holding meetings due to poor communication between the Marshals Service and the judiciary. The DOJ IG recommended that the Marshals Service ensure all its district offices assign a principal coordinator to the district security committee and encourage the local judiciary to lead regular meetings. The Marshals Service concurred with the recommendation, stated that it would emphasize the requirement, and noted that existing policy directs the Marshals Service to serve as principal coordinator for CSC meetings and Marshals Service judicial security inspectors to attend and participate in CSCs. In August 2010, we identified a lack of participation and other challenges associated with FSCs, which raised questions about their effectiveness as a collaboration mechanism. We reported that FSCs have operated since 1995 without procedures that outline how they should operate or make decisions or that establish accountability. Further, we identified instances in which tenant agency representatives to the FSC generally did not have any security knowledge or experience but were expected to make security decisions for their respective agencies. We also reported that many FSC tenant agency representatives did not have the authority to commit their respective organizations to fund security countermeasures. This issue was brought to our attention at two of the courthouses we visited. In one location, the chair of the FSC said that FPS made a request for various security enhancements to the building, which were the first infrastructure enhancements in at least 12 years. However, none of the tenants had the authority to approve the increased costs. Additionally, court officials at another location we visited said the FSC works through a vote system, and each tenant agency gets the opportunity to hear the issue and cast its vote. However, the officials said that tenant agencies could not commit to financial decisions at that level. ISC developed procedures for FSCs to use when presented with security issues that affect the entire facility. These standards, which are being tested for a 1-year period, note that FSC members may or may not have the authority to obligate their respective organizations to a financial commitment, and FSC members are responsible for seeking guidance from their respective funding authority. According to Marshals Service, GSA, and FPS officials, GSA’s courthouse security responsibilities have not been clearly defined since the transfer of FPS to DHS in 2003. The 1997 MOA identifies individual stakeholders’ roles and responsibilities, as well as areas requiring collaboration, in securing federal courthouses, and was signed by DOJ, GSA (of which FPS was a part), and AOUSC. However, the 2004 reaffirmation was signed by DOJ, DHS, and AOUSC; GSA was not a signatory, and according to GSA officials, they were not invited by the other agencies to participate in the reaffirmation. The 2004 reaffirmation updated the 1997 MOA by acknowledging the transfer of FPS from GSA to DHS; it did not make any other modifications to the MOA.
However, the reaffirmation did not clearly articulate which security responsibilities GSA—which has responsibility for managing all federal courthouses—retained and which specific responsibilities were transferred to DHS. GSA officials told us that this lack of clarity leads to confusion about which stakeholder is ultimately responsible for taking action. GSA officials told us that there have been instances in which they are consulted about security issues for which they do not have responsibility or they are excluded from security discussions where they have responsibilities. For example, the officials noted it is not uncommon for a U.S. Marshal or chief judge who is unfamiliar with the court security framework to approach GSA and expect it to take action to address security concerns, even though it may be FPS’s responsibility. Further, GSA officials noted instances in which GSA has security responsibilities—such as installing bollards and barriers and making other physical changes to the building—but was not always included in the security decisions. Lack of clarity in these types of situations can cause confusion, lead to implementation delays, and lengthen the amount of time needed to address vulnerabilities, according to GSA officials. Further, Marshals Service officials told us that there is a lack of clarity about GSA’s roles and responsibilities, including the extent of GSA’s participation in CSCs and FSCs. These officials also told us that updating the MOA would help address this issue and noted that there have been preliminary discussions between the stakeholders about doing so. GSA headquarters officials also told us that they should have been a signatory to the reaffirmation because in addition to their security responsibilities, GSA is the landlord for all federal facilities that house judicial personnel. In June 2005 we reported on GSA’s role in facility protection since September 11, 2001. Prior to the creation of DHS, we reported that if DHS was given the responsibility for securing facilities, the role of integrating security with other real-property functions would be an important consideration. We later noted that it would be critical that GSA be well-equipped to engage in security-related matters given that it is still the owner and landlord of federal facilities. According to GSA, permanent security enhancements, such as installing bollards and altering buildings to improve circulation patterns, are GSA’s responsibility and cannot be implemented without GSA’s involvement. Also in 2005, we recommended that GSA establish a mechanism—such as a chief security officer position or formal point of contact—that could serve in a liaison role to address the challenges GSA faces related to security in buildings it owns and leases, and enable GSA to define its overall role in security given the transfer of FPS to DHS so it would be better equipped to address security-related matters for its federal building portfolio. GSA subsequently established such a position. Furthermore, related to GSA’s role, GSA and DHS have yet to complete a revised agreement of their own on security fees and protection responsibilities for all GSA buildings, including courthouses. DHS and GSA signed an MOA in 2006 that outlines the roles, responsibilities, and operational relationships between DHS and GSA, but progress has been slow in updating the document.
DHS and GSA are renegotiating the 2006 MOA to, among other things, address communication and information-sharing issues and address service concerns by tenants. However, DHS and GSA have been working to update the MOA for more than 3 years. A number of issues remain to be worked out, including outlining what the basic security fee covers and the sharing of security assessments; GSA and DHS have a goal of completing the MOA by the end of fiscal year 2011. These three challenges—fragmentation in implementation of roles and responsibilities, limitations in the use of or participation in existing collaboration mechanisms, and lack of clarity about GSA’s security roles and responsibilities—have affected stakeholders’ efforts to secure courthouses. It is difficult to directly link these challenges to specific security vulnerabilities, but further clarifying roles and responsibilities at the national and local levels, including GSA’s security roles and participation in security committees, could help strengthen accountability, transparency, and coordination on courthouse security efforts among federal stakeholders. It could also help provide opportunities for federal stakeholders to identify and address any potential gaps or unnecessary overlaps in their security efforts and activities. In so doing, federal stakeholders may be better equipped to address security vulnerabilities that may arise. In 2008 Congress authorized the Marshals Service, in consultation with the AOUSC, to implement a Perimeter Pilot Security Program for the Marshals Service to assume FPS’s responsibilities to provide perimeter security at selected courthouses participating in the program. The purpose of the pilot program was to determine the feasibility of the Marshals Service providing perimeter security services at selected primary courthouses—federal facilities in which judiciary and judiciary-related offices occupy at least 75 percent of rentable space. According to Marshals Service officials, the pilot program’s goal was to eliminate duplication and system incompatibilities, streamline guard services and post orders, and provide clearer accountability for court security. In January 2009, the Marshals Service began providing perimeter security at seven courthouses. Prior to initiation of the pilot program, the Marshals Service and FPS signed an MOA in 2008 defining the conditions of the pilot program and noting that the Marshals Service would conduct periodic reviews of the status and effectiveness of the program and share written status reports with FPS. At these courthouses the Marshals Service conducted on-site assessments to inspect the existing perimeter security systems and equipment to determine if they could be used “as is” or needed to be repaired, modified, or replaced to meet its standards. It also assessed the security guard requirements and proposed a plan that it thought would provide optimal security coverage. The Marshals Service assumed control of physical security of each courthouse in the pilot program with the understanding that it would be responsible for inspecting, adjusting, repairing, and replacing all FPS-owned surveillance cameras and associated equipment. In October 2010, the judiciary issued its final evaluation report to Congress on the implementation and operation of the pilot program, recommending that the pilot program be expanded to all primary courthouses.
The report noted that the general consensus expressed by judges, court officials, and district Marshals Service personnel was in support of the pilot program. Specifically, the report noted that program participants had positive views about the Marshals Service’s consolidation of command and control over all aspects of physical security at the pilot sites, which they believed resulted in improved protection for both people and buildings. Participants stated that the benefits of the program included, among other things, improved quality of security services, security coverage, communication, and stewardship and monitoring of security equipment, as well as unified command and control over courthouse physical security. Additionally, in November 2009, the Marshals Service conducted a survey at 5 of the 7 courthouses participating in the pilot program. Representatives at 4 of these courthouses stated that the pilot program was effective and supported the concept for wider implementation. Subsequent to completing this survey, Marshals Service officials at the fifth courthouse told us they endorse the program. Among the 11 courthouses we visited, 2 were participating in the pilot program. Marshals Service and court officials we spoke with at both courthouses generally expressed satisfaction with the pilot program. The judiciary and Marshals Service conducted their evaluation of the pilot program by collecting information from the chief district judge, the district U.S. Marshal, and other court and Marshals Service staff at the seven courthouses participating in the program. The Marshals Service also inspected, adjusted, repaired, or replaced all FPS-owned security equipment, and at some sites additional equipment was added to enhance security. Further, AOUSC estimated additional costs that would occur if the pilot program were expanded to other primary courthouses, based on various options. In particular, the report estimated additional annual costs ranging from more than $1.5 million for expanding the program to selected large courthouses (i.e., primary courthouses with 11 to 20 judges each) and extra-large courthouses (primary courthouses with 21 or more judges each) to about $200 million for expanding the program to all primary courthouses. According to AOUSC, the initial pilot program was implemented in a cost-neutral manner, but AOUSC stated that cost neutrality would not be possible if the Marshals Service were to assume responsibility at primary courthouses nationwide. Although AOUSC has recommended expansion of the pilot program on the basis of its evaluation, additional analysis of the benefits and costs of this approach could better position the federal stakeholders and Congress to consider and determine whether to expand the pilot. Pilot programs can be one way to identify innovative efforts to improve performance, as they allow for experiences to be rigorously evaluated, shared systematically with others, and for new procedures to be adjusted, as appropriate, before they receive wider application. Key practices in assessing the results of pilot programs include a range of standards, such as having a clearly articulated methodology, a strategy for comparing results with other efforts, and a cost-effectiveness analysis to ensure that the program produces sufficient benefits in relation to its costs.
Additionally, having a process in place to obtain and share information can help agencies more effectively make decisions and would be consistent with key practices in facility protection that we have identified. Expanding the pilot program and shifting to an approach for all primary courthouses in which the Marshals Service would be solely responsible for building security would fundamentally alter how courthouse security is managed, as FPS has significant courthouse security responsibilities. FPS management officials told us that AOUSC and the Marshals Service did not consult with them in evaluating the pilot, nor would AOUSC provide FPS with a copy of the completed evaluation when it was requested. These FPS officials also raised concerns that FPS’s views, including views on the protection of other federal tenants in these courthouses, were not discussed in the evaluation. They noted that FPS continues to have statutory responsibilities for providing security to those tenants not involved with court business. The officials also noted that the Marshals Service did not provide the required quarterly statistics on security incidents in a majority of regions where pilot program facilities were located. Furthermore, GSA officials told us that, as the building owner, they should have been involved in discussions on any expansion of the pilot program. Additionally, FPS and GSA are in the process of renegotiating the basic security fee structure all tenants pay to FPS. By further assessing possible costs for expanding the pilot program and consulting with other stakeholders, such as GSA and FPS, the Marshals Service and federal stakeholders could consider additional information to help them better evaluate whether to expand the program and communicate this information to Congress. Both the Marshals Service and FPS have developed tools, particularly risk assessments, to help identify security vulnerabilities and manage risk. The Marshals Service is required to conduct an annual security survey in each judicial district and develop security plans for every judicial facility. FPS is supposed to conduct facility security assessments (FSAs) to identify security vulnerabilities and make recommendations. FSAs are to be conducted on a regular schedule, and during this process FPS is required to conduct an on-site physical security analysis. FPS assessments generally focus on building systems and perimeter and entry issues (e.g., emergency power systems; heating, air conditioning, and air intake systems; bollards and barriers; and building setbacks), while Marshals Service assessments generally focus on security issues within the court portions of the building and are supposed to include detailed information on courtrooms, judges’ chambers, clerks’ offices, and prisoner movement. We have previously reported that allocating resources using risk management is a facility protection key practice. More specifically, risk management involves a systematic and analytical process to consider the likelihood that a threat will endanger an asset (structure, individual, or function) and identify, evaluate, select, and implement actions that reduce the risk or mitigate the consequences of an event.
Although applying risk management principles to facility protection can take various forms, our past work showed that most risk management approaches generally involve identifying potential threats, assessing vulnerabilities, identifying the assets that are most critical to protect in terms of mission and significance, and evaluating mitigation alternatives for their likely effect on risk and their cost. As such, using risk assessments for decision making serves as the backbone of a comprehensive facility protection program. The Marshals Service and FPS have not always conducted risk assessments of courthouses, as required by their respective guidance and directives. With regard to the Marshals Service, in 9 of the 11 courthouses we visited, the Marshals Service had not conducted risk assessments—what the Marshals Service refers to as court security facility surveys—for those judicial facilities. Marshals Service officials at 6 courthouses told us they assess security needs as part of the budget process. According to Marshals Service officials, an annual nationwide budget call is conducted to determine each court’s current and future requirements for CSOs. Although the budget process requires information about projected guard, security systems, and equipment needs, the Marshals Service Judicial Security Directive requires each court to have a completed court security facility survey based on a specific format outlined in the policy. This format is more comprehensive and includes detailed questions on the types of weapons guards carry, the agency responsible for overall building security, and who monitors cameras—information that goes beyond what is required as part of the budget formulation process. Similar to our findings, a November 2010 DOJ IG report found that although court security facility surveys are required annually and the results are to be used to develop or update judicial security plans, these plans were not always updated as required, and in one instance had not been updated since 1983. The DOJ IG found that Marshals Service officials were not completing court security facility surveys in three of the six districts it examined. The DOJ IG recommended that the Marshals Service ensure all district offices regularly update their plans and ensure that court security facility surveys are performed at each district and judicial security plans are updated as required. The Marshals Service agreed with the recommendation and noted it will emphasize the requirements and ensure they are a component of the district audit program and the annual district self-assessment. FPS has also faced difficulties in preparing FSAs. For example, we have previously reported that FPS’s assessments are vulnerable to subjectivity because they lack a rigorous risk assessment methodology, and inspectors’ compliance with policies and procedures in conducting assessments is inconsistent. FPS initially tried to address these issues by implementing a new risk management program that was to incorporate a less subjective and time-consuming assessment tool. However, the program, known as the Risk Assessment and Management Program, experienced considerable delays, and FPS recently halted implementation. Moreover, federal stakeholders have experienced disagreements about the sharing of completed security surveys and FSAs. We have reported that information sharing among organizations is crucial to producing comprehensive and practical approaches and solutions to address terrorist threats directed at federal facilities.
Our work showed that by having a process in place to obtain and share information on potential threats to federal facilities, agencies can better understand the risk they face and more effectively determine what preventive measures should be implemented. At the two courthouses where the Marshals Service had completed risk assessments, the Marshals Service did not provide other stakeholders, including FPS, with a copy. Further, FPS officials told us that, prior to fiscal year 2010, they shared the executive summaries of their FSAs, rather than the full FSAs, with other members of security committees because the FSAs were law-enforcement sensitive. At the courthouses we visited, FPS completed the FSA at each courthouse prior to fiscal year 2010. Marshals Service, GSA, and court officials told us that they did not consistently receive full FSAs from FPS at the courthouses we visited. For example, at five courthouses we visited, court officials stated that they did not receive executive summaries or full FSAs from FPS. At three courthouses we visited, Marshals Service officials told us that they also did not receive copies of FPS’s full FSAs. In those cases when officials received copies of the executive summaries, they noted that the information contained in them was inadequate to inform security decision making. For example, one official told us that the executive summaries did not contain sufficient evidence on which to base decisions. Moreover, we have previously found that GSA officials at all levels cite limitations with the executive summaries of FPS’s FSAs, saying, for example, that the summaries do not contain enough contextual information on threats and vulnerabilities to support countermeasure recommendations and to justify the expenses that would be incurred by installing additional countermeasures. According to FPS officials, they began providing full versions of the assessments completed in fiscal year 2010 and after to security committee members, and they plan to document FPS’s commitment to share full FSAs specifically with GSA in the update to the DHS and GSA 2006 MOA, which DHS and GSA plan to complete by the end of fiscal year 2011. GSA officials stated they have received some of the full FSAs that FPS has completed. While these are positive steps, FPS has completed a small number of FSAs since fiscal year 2010 in part because of challenges faced by FPS in moving toward a new risk assessment and management system. For example, in July 2011 we reported that FPS’s Risk Assessment and Management Program (RAMP) tool, which FPS was to launch in 2009 as a web-based risk assessment and guard management system, was behind schedule, over budget, and could not be used to complete FSAs. Among other things, we recommended that FPS develop interim solutions for completing FSAs. FPS concurred with this recommendation and reported that it is revalidating RAMP requirements with its stakeholders to strengthen future RAMP investments and assessing alternative programs. Given the current challenges faced by the Marshals Service and FPS in conducting and sharing risk assessments, federal stakeholders lack the information needed to comprehensively assess and understand security risks both to individual courthouses and across the entire portfolio of courthouses. Without a comprehensive picture of risks, federal stakeholders face difficulties in prioritizing risks in light of available resources and in determining appropriate measures to mitigate those risks.
We have reported that the capability to gauge risk across a portfolio of facilities and make resource allocation decisions accordingly represents an advanced use of risk management. The ability to compare risks across buildings is important because it could allow stakeholders to comprehensively identify and prioritize risks and countermeasure recommendations at a national level and direct resources toward alleviating them. One possible mechanism for addressing these challenges could be for FPS and the Marshals Service to conduct joint security assessments, according to one FPS regional director. We have previously reported that in situations where agencies conduct similar but fragmented functions and provide results to the same recipients, agencies coordinating and integrating their efforts have the potential to achieve greater efficiencies. By ensuring that security surveys and assessments are completed and shared, FPS and the Marshals Service could strengthen their efforts to identify security vulnerabilities at courthouses, determine measures to help address or mitigate those vulnerabilities, and communicate information on vulnerabilities and security needs to relevant stakeholders, as appropriate. Given the nature of judicial business and increased potential threats to federal courts, securing courthouses requires collaboration and coordination among the various federal stakeholders responsible for security and tenant agencies present in facilities that house federal courts. Federal stakeholders have taken action to define and implement their roles and responsibilities for securing courthouses and to mitigate threats and vulnerabilities. However, updating the MOA that identifies these roles and responsibilities to better incorporate accountability for federal agencies’ collaborative efforts could strengthen the multiagency courthouse security framework. In particular, the MOA would be strengthened by clarifying stakeholder roles and responsibilities, participation in security committees, and the conditions under which stakeholders may agree to deviate from those roles and responsibilities and document such deviations in location-specific agreements. Furthermore, clarifying GSA’s role in the current security framework and instilling greater accountability for security committee participation could be addressed in an MOA update. In addition to these areas of collaboration, risk assessments that are to be conducted by the Marshals Service and FPS are the primary tools for identifying and addressing security vulnerabilities at courthouses. As such, updating the MOA to help ensure that these assessments, referred to by the Marshals Service as court security facility surveys and by FPS as FSAs, are completed in a timely manner and the results shared with the other federal agencies responsible for courthouse security, could better equip federal stakeholders to assess courthouses’ security needs and gaps and make informed decisions. The pilot program, whereby the Marshals Service has assumed responsibility for security at a limited number of primary courthouses, explores a fundamental change in how courthouse security is managed. Although AOUSC has recommended expanding this program to other primary courthouses, additional information on the costs of expansion and on the views of other stakeholders, such as FPS and GSA, could better position the federal stakeholders and Congress to evaluate expansion options.
We are making two recommendations to the Secretary of Homeland Security and the Attorney General. Recognizing that there are several stakeholders involved in courthouse security, we are addressing these recommendations to the Secretary and Attorney General because their departments have primary responsibility for courthouse security. However, as indicated below, implementation of these recommendations includes consultation and agreement with the judiciary and GSA. First, we recommend that the Secretary and Attorney General instruct the Director of FPS, and the Director of the Marshals Service, respectively, to jointly lead an effort, in consultation and agreement with the judiciary and GSA, to update the MOA on courthouse security to address the challenges discussed in this report. Specifically, in this update to the MOA stakeholders should: (1) clarify federal stakeholders’ roles and responsibilities including, but not limited to, the conditions under which stakeholders may assume each other’s responsibilities and whether such agreements should be documented; and define GSA’s responsibilities and determine whether GSA should be included as a signatory to the updated MOA; (2) outline how they will ensure greater participation of relevant stakeholders in court or facility security committees; and (3) specify how they will complete required risk assessments for courthouses, referred to by the Marshals Service as court security facility surveys and by FPS as FSAs, and ensure that the results of those assessments are shared with relevant stakeholders, as appropriate. Second, to the extent that steps are taken to expand the perimeter pilot program, we recommend that the Secretary and Attorney General instruct the Director of FPS, and the Director of the Marshals Service, respectively, to work collaboratively, in consultation and agreement with the judiciary and GSA, to further assess costs and benefits, in terms of enhanced security, of expanding the pilot program to other primary courthouses, and assess all stakeholders’ views about the pilot program. We provided a draft of this report to DOJ, DHS, AOUSC, and GSA for their review and comment. In an email from DOJ’s Acting Assistant Director for the Audit Liaison Group dated September 15, 2011, DOJ indicated that the Marshals Service concurred with the recommendations and would not be providing written comments. We received written comments from DHS, AOUSC, and GSA, which are reproduced in full in appendixes II, III, and IV, respectively. DHS also provided technical comments, which we incorporated as appropriate. DHS concurred with our recommendations. With regard to the first recommendation, that FPS and the Marshals Service jointly lead an effort to update the MOA on courthouse security, DHS stated that it agrees that the current MOA should be reviewed and revised. DHS noted that it is committed to working collaboratively with all parties to further determine the conditions under which stakeholders may assume multiple and overlapping responsibilities. With regard to the second recommendation that FPS and the Marshals Service work collaboratively to further assess costs and benefits of expanding the pilot program, DHS agreed that continued collaboration and further review of pilot program results would enhance security at federal courts. DHS also noted that it did not agree with any suggested expansion of the pilot program to include additional facilities. 
We did not recommend or suggest that the pilot project should be expanded in this report. Rather, this report notes that further assessment of the costs and benefits of the project and further consultation with stakeholders could provide additional information to help better evaluate whether to expand the program. During the comment period, AOUSC requested that we clarify the judiciary’s role related to the recommendations. Specifically, AOUSC requested that the recommendations explicitly state that the Marshals Service and FPS should seek the judiciary’s agreement when implementing them. We concluded that this change would help to clarify the recommendations, and we modified the recommendations to state that the Marshals Service and FPS should seek the agreement of both the judiciary and GSA as key stakeholders in implementing our recommended actions. In its written comments, AOUSC expressed appreciation for our recognition of the judiciary’s role in courthouse security. GSA expressed similar concerns, during the comment period, about the first recommendation that the Marshals Service and FPS jointly lead an effort, in direct consultation with other federal stakeholders, to update the MOA on courthouse security. We informed GSA of our clarifications to the recommendations that the Marshals Service and FPS should seek agreement with the judiciary and GSA in implementing the recommendations. In its written comments, GSA requested that we revise the first recommendation to ensure that all stakeholders be involved in updating the MOA and that all stakeholders be included as signatories. We did not make further modifications to the recommendations in response to GSA’s written comments for two reasons. First, the recommendation calls for the Marshals Service and FPS to jointly lead an effort to update the MOA on courthouse security in direct consultation and agreement with other federal stakeholders, specifically the judiciary and GSA. As such, we believe that this recommendation already ensures that stakeholders, including the judiciary and GSA, would be involved in the effort to update the MOA. Second, in our view, having the judiciary, Marshals Service, FPS, and GSA reach agreement on GSA’s role, as a part of updating the MOA, would be a more cooperative approach to resolving this issue and would reflect key practices in interagency collaboration that call for federal agencies to work together to define and agree on their respective roles and responsibilities, including how the collaborative effort should be led. The recommendation states that the Marshals Service and FPS would need to seek GSA’s consultation and agreement on whether GSA should be a signatory. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. At that time, we will send copies of this report to the Attorney General, Secretary of Homeland Security, Administrator of the General Services Administration, Director of the Administrative Office of U.S. Courts, selected congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Mark Goldstein at (202) 512-6670 or goldsteinm@gao.gov, or William Jenkins at (202) 512-8777 or jenkinswo@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix VI.

To identify the attributes of federal courthouses contributing to concerns about their security, we examined U.S. Marshals Service (Marshals Service) and Federal Protective Service (FPS) documentation of courthouse security challenges and vulnerabilities, such as security assessments and surveys of federal courthouses. We also visited 11 federal courthouses in 10 U.S. locations. We selected these courthouses based on a mix of criteria that included (1) geographic location, including courthouses in various U.S. Court regions, near U.S. borders, and in cities of different sizes; (2) age of courthouses, including historic courthouses; (3) size of courthouses; (4) tenancy in facilities with courthouses, including courthouses located in multitenant and primary courthouse facilities; and (5) courthouses participating in the perimeter security pilot program. At each courthouse, we toured the facility and observed security gaps or vulnerabilities as well as countermeasures. We also obtained federal officials’ information and views on the courthouses’ security vulnerabilities by interviewing officials from the Marshals Service, FPS, the General Services Administration (GSA), and the courts, including court clerks and federal judges. The information we obtained from observing security activities at these locations and interviewing officials cannot be generalized across all federal courthouses in the United States. However, because we selected these courthouses based on a variety of factors, they provided us with an overview of security at federal courthouses, examples of security vulnerabilities, and challenges in protecting courthouses.

To assess the extent to which federal stakeholders have collaborated and used risk management practices to protect federal courthouses, we examined relevant statutes and documentation from the Marshals Service, FPS, GSA, and the judiciary, including plans, reports, guidance, security assessments, and surveys. In particular, we reviewed federal laws that set forth roles and responsibilities for protecting federal courthouses. Additionally, we reviewed documents such as memoranda of agreement and agency-specific guidance, such as Marshals Service and FPS memorandums and security directives. We observed federal stakeholders’ implementation of these roles and responsibilities at the 11 federal courthouses we visited, and obtained views from Marshals Service, FPS, GSA, and judiciary officials at these locations and headquarters. At two courthouses, FPS regional officials with responsibility for protection of other courthouses in their regions provided us with examples of security arrangements at those other courthouses. We relied on officials to bring security issues to our attention at the individual courthouses. Therefore, we could not always determine whether these issues were present at other courthouses unless officials brought them to our attention. Further, we analyzed federal stakeholders’ processes for conducting security assessments and surveys at federal courthouses and for coordinating courthouse security decision making and activities by examining documentation of these processes and interviewing federal stakeholders at headquarters and our site visit locations. The information we obtained from our site visits cannot be generalized across all U.S.
federal courthouses, but because we selected the courthouses based on a mix of criteria, they provided us with examples of federal stakeholders’ implementation of courthouse security activities. We compared federal stakeholders’ efforts to secure courthouses to criteria in our prior work on effective interagency collaboration and results-oriented government, key practices for facility protection, and key practices for assessing pilot programs. We also compared the implementation of federal stakeholders’ security roles and responsibilities with those designated in the 1997 courthouse security MOA as reaffirmed in 2004. We conducted this performance audit from January 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contacts named above, Rebecca Gambler, Assistant Director; David Sausville, Assistant Director; Aaron Kaminsky, analyst-in-charge; R. Rochelle Burns; Andy Clinton; Ray Griffith; Brian Hartman; Delwen Jones; Susan Michal-Smith; and Sara Ann Moessbauer made significant contributions to this report.
Safe and accessible federal courthouses are critical to the U.S. judicial process. The Federal Protective Service (FPS), within the Department of Homeland Security (DHS), the U.S. Marshals Service (Marshals Service), within the Department of Justice (DOJ), the Administrative Office of the U.S. Courts (AOUSC), and the General Services Administration (GSA) are the federal stakeholders with roles related to courthouse security. As requested, this report addresses (1) attributes that influence courthouse security considerations and (2) the extent to which stakeholders have collaborated in implementing their responsibilities and using risk management. GAO analyzed laws and documents, such as security assessments; reviewed GAO's work on key practices for collaboration and facility protection; visited 11 courthouse facilities, selected based on geographic dispersion, age, size, and other criteria; and interviewed agency and judiciary officials. While the results from site visits cannot be generalized, they provided examples of courthouse security activities. Various attributes influence security considerations for the nation's 424 federal courthouses, which range from small court spaces to large buildings in major urban areas. According to DOJ data, threats against the courts have increased between fiscal years 2004 and 2010--from approximately 600 to more than 1,400. The Interagency Security Committee--an interagency group that develops standards for federal facility security--has assigned courthouses the highest security level because they are prominent symbols of U.S. power. Federal stakeholders have taken steps to strengthen their collaboration, such as establishing agency liaisons, but have faced challenges in implementing assigned responsibilities and using risk assessment tools. (1) A 1997 memorandum of agreement (MOA) outlines each stakeholder's roles and responsibilities and identifies areas requiring stakeholder coordination. However, at 5 of the 11 courthouses GAO visited, FPS and the Marshals Service were either performing duplicative efforts (e.g., both monitoring the courthouse lobby) or performing security roles that were inconsistent with their responsibilities. The judiciary and other stakeholders stated that having the Marshals Service and FPS both provide security services has resulted in two lines of authority for implementing and overseeing security services. Updating the MOA that identifies roles and responsibilities could strengthen the multiagency courthouse security framework by better incorporating accountability for federal agencies' collaborative efforts. (2) In 2008, Congress authorized a pilot program, whereby the Marshals Service would assume FPS's responsibilities to provide perimeter security at 7 courthouses. In October 2010, the judiciary recommended that the pilot be expanded. AOUSC noted general consensus among various stakeholders in support of the pilot and estimated the costs of expanding it, but AOUSC did not obtain FPS's views on assessing the pilot results or on how the expansion may affect FPS's mission. Additional analysis on the costs and benefits of this approach and the inclusion of all stakeholder perspectives could better position Congress and federal stakeholders to evaluate expansion options. (3) The Marshals Service has not always completed court security facility surveys (a type of risk assessment), as required by Marshals Service guidance. 
At 9 of the courthouses GAO visited, the Marshals Service had not conducted these surveys, but Marshals Service officials at some courthouses told us that they assessed security needs as part of their budget development process. However, these assessments are less comprehensive than the court security facility surveys required by Marshals Service guidance. FPS has faced difficulties completing its risk assessments, known as facility security assessments, and recently halted an effort to implement a new system for completing them. Furthermore, GAO found that the Marshals Service and FPS did not consistently share the full results of their risk assessments with each other and key stakeholders. Sharing risk assessment information could better equip federal stakeholders to assess courthouses' security needs and make informed decisions. GAO recommends DHS and DOJ update the MOA to, among other things, clarify stakeholders' roles and responsibilities and ensure the completion and sharing of risk assessments; and further assess costs and benefits of the perimeter pilot program, in terms of enhanced security, and include all stakeholders' views, should steps be taken to expand the program. DHS and DOJ concurred with GAO's recommendations.
Protections for workers in the United States were enacted in the Fair Labor Standards Act of 1938, which established three basic rights in American labor law: a minimum wage for industrial workers that applied throughout the United States; the principle of the 40-hour week, with time-and-a-half pay for overtime; and a minimum working age for most occupations. Since 1938, the act has been amended several times, but the essentials remain. For many years, the act (combined with federal and state legislation regarding worker health and safety) was thought to have played a major role in eliminating sweatshops in the United States. However, we reported on the “widespread existence” of sweatshops within the United States in the 1980s and 1990s. Subsequent to our work, in August 1995, the Department of Labor and the California Department of Industrial Relations raided a garment factory in El Monte, California, and found sweatshop working conditions—workers were confined behind razor wire fences and forced to work 20 hours a day for 70 cents an hour. Leading retailers were found to have sold clothes made at this factory. According to the National Retail Federation, an industry trade association, the El Monte raid provoked a public outcry and galvanized the U.S. government’s efforts against sweatshops.

Concern in the United States about sweatshops has spread from its shores to the overseas factories that supply goods for U.S. businesses and the military exchanges. With globalization, certain labor-intensive activities, such as clothing assembly, have migrated to low-wage countries that not only provide needed employment in those countries but also provide an opportunity for U.S. businesses to profit from manufacturing goods abroad and for consumers to benefit from an increasing array of quality products at low cost. Various labor issues (such as child labor, forced overtime work, workplace health and safety, and unionization) have emerged at these factories. In May 2000, for example, the Chentex factory in Nicaragua—which produces much of the Army and Air Force exchange’s private label jeans and denim product—interfered in a wage dispute involving two labor groups, firing the union leaders of one of the groups. Subsequently, much publicity ensued over working conditions at this factory.

International labor rights were defined in the Trade Act of 1974 as the right of association; the right to organize and bargain collectively; a prohibition on the use of any form of forced or compulsory labor; a minimum age for the employment of children; and acceptable conditions of work with respect to minimum wages, hours of work, and occupational safety and health. As globalization progressed, U.S. government agencies, nongovernmental organizations, industry associations, retailers, and other private organizations began addressing worker rights issues in overseas factories. For example, the International Labor Organization, a United Nations specialized agency that formulates international policies and programs to help improve working and living conditions, has endorsed four international labor principles: (1) freedom of association and the effective recognition of the right to collective bargaining, (2) the elimination of all forms of forced or compulsory labor, (3) the effective abolition of child labor, and (4) the elimination of discrimination in employment.
Appendix II provides additional information on governmental agencies’, nongovernmental organizations’, and industry associations’ efforts to address worker rights in overseas factories.

The military exchanges are separate, self-supporting instrumentalities of the United States located within the Department of Defense (DOD). The Federal Acquisition Regulation, the Defense Federal Acquisition Regulation supplement, and component supplements do not apply to the merchandise purchased by the exchanges and sold in their retail stores, since the purchases are not made with appropriated funds. The Assistant Secretary of Defense (Force Management Policy) is responsible for establishing uniform policies for the military exchanges’ operations. The exchanges are managed by the Army and Air Force Exchange Service (AAFES), the largest exchange, and by the Navy Exchange Service Command (Navy Exchange) and Marine Corps Community Services (Marine Corps Exchange). The exchanges operate retail stores similar to department stores selling apparel, footwear, household appliances, jewelry, cosmetics, food, and other merchandise. For the past several years, about 70 percent of the exchanges’ earnings from these sales revenues were allocated to morale, welfare, and recreation activities—libraries, sports programs, swimming pools, youth activities, tickets and tour services, bowling centers, hobby shops, music programs, outdoor facilities, and other quality of life improvements for military personnel and their families—and about 30 percent to new exchange facilities and related capital projects. The number of retail locations and the annual revenues and earnings reported by the exchange services for 1999 and 2000 are shown in table 1.

The exchanges have created private label products, which generally carry their own name or a name created exclusively for the exchange. The exchanges began creating private labels in the mid-1980s to provide lower prices for customers, to obtain higher earnings margins for the exchanges, and to remain competitive with major discount retailers. Private labels are profitable for retailers because their costs do not include marketing, product development, or advertising, which are used by companies to position national brands in the marketplace and to maintain the market share. In 2000, AAFES reported purchases of $44.8 million in private label merchandise from overseas companies, and the Navy Exchange reported purchases of $11.6 million in private label merchandise from importers. The Marine Corps Exchange only recently created its private label and did not purchase any private label merchandise from importers or overseas companies in 2000, but it reported purchases of about $350,000 of AAFES’ and the Navy Exchange’s private label merchandise for resale in its stores. The private label goods sold by the military exchanges are shown in table 2.

The retailers we contacted in the private sector are more proactive about identifying working conditions than the military exchanges. They periodically requested that suppliers provide a list of overseas factories and subcontractors that they used to make the retailers’ private label merchandise, administered questionnaires on working conditions, visited factories, and researched labor issues in the countries where prospective factories are located. The military exchanges largely rely on their suppliers to identify and address working conditions in overseas factories that manufacture the exchanges’ private label merchandise.
The exchanges generally did not maintain the names and locations of the relevant overseas factories. The exchanges assumed that their suppliers and other U.S. government agencies, such as U.S. Customs Service, ensured that labor laws and regulations that address working conditions and minimum wages were followed. The 10 leading private sector retailers we contacted are more active in identifying working conditions than the military exchanges for a variety of reasons, ranging from a sense of social responsibility to pressure from outside groups and a desire to protect the reputation of their companies’ product lines. These retailers periodically requested that overseas suppliers provide a list of factories and subcontractors that they used to make the retailers’ private label merchandise. Some retailers we contacted terminated a business relationship with suppliers that used a factory without disclosing it to the retailers. For example, JCPenney’s purchase contracts stipulate that failure by a supplier or one of its contractors to identify its factories and subcontractors may result in JCPenney’s taking the following actions: seeking compensation for any resulting expense or loss, suspending current business activity, canceling outstanding orders, prohibiting the supplier’s subsequent use of the factory, or terminating the relationship with the supplier. JCPenney officials told us that they have terminated suppliers for using unauthorized subcontractors. Some retailers that we interviewed, such as The Neiman Marcus Group, Inc., JCPenney, and Liz Claiborne, Inc., developed a company questionnaire, which they had factory management complete. The questionnaire addressed health and safety issues and whether U.S. or foreign government agencies had investigated the factory. The retailers used the questionnaire to provide factories with feedback on their compliance with the retailers’ standards and for the retailer to provide the factory an opportunity to make improvements in working conditions before an inspection. The representatives of these retailers told us that they visited factories to verify the accuracy of the factories’ answers to the questionnaire before ordering merchandise. Each of the 10 retailers we contacted told us they also used information on human rights issues that was either developed internally or was available from government agencies and nongovernmental organizations to assess labor issues in the countries where the factories are located. This included the Department of State’s annual Country Reports on Human Rights Practices (a legislatively mandated report to Congress that covers worker rights issues in 194 countries), which the retailers frequently cited as a source for identifying labor issues in a particular country. Most retailers also used information obtained from the United Nations; U.S. Department of State; U.S. Customs Service; U.S. Department of Labor; and nongovernmental organizations, such as Amnesty International. The retailers we contacted used this information in their assessments of suppliers to avoid business arrangements with factories in areas with a higher risk of labor abuses. In addition, some of the retailers told us that their decisions to buy merchandise made in a particular country sometimes depended on whether they could improve factory conditions in a country. For example, companies such as Levi Strauss & Co. 
used only those Chinese factories that corrected problem conditions, an approach supported by the officials we met at the Departments of State and Labor. The military exchanges’ methods for identifying working conditions in overseas factories that manufacture their private label merchandise are not as proactive as the methods employed by companies in the private sector. Only the Army and Air Force Exchange knew the identity of the factories that manufactured its private label merchandise, and none of the exchanges knew the nature of working conditions in these factories. Instead, they assumed that their suppliers and other government agencies ensured good working conditions. While the exchanges have sent letters to some suppliers describing their expectations of compliance with labor laws and regulations that address working conditions and minimum wages in individual countries, they have not taken steps to verify that overseas factories are in compliance or otherwise acted to determine the status of employee working conditions; instead, they assumed that their suppliers and other government agencies ensured good working conditions. For example, the Navy Exchange and the Marine Corps Exchange do not routinely maintain the name and location of the overseas factories that manufactured their merchandise because they rely on brokers and importers to acquire the merchandise from the overseas factories. The AAFES Retail Business Agreement requires suppliers to promptly provide subcontractors’ name and manufacturing sites upon request. But because it had no program to address working conditions in overseas factories, AAFES has not requested this information, except for the suppliers it used for its private label apparel, and then only to check on the quality of the merchandise being manufactured. AAFES’ records show that in fiscal year 2000, its private label apparel was manufactured in 70 factories in 18 countries and territories, as shown in table 3. In some cases, the exchanges’ private label merchandise was manufactured in countries that have been condemned internationally for their human rights and worker rights violations. For example, at 9 of the 10 retailers we contacted, officials told us that they had ceased purchasing from Myanmar (formerly Burma) in the 1990s because of reports of human rights abuses documented by governmental bodies, nongovernmental organizations, and the news media; at one retailer we contacted, officials told us that they had ceased purchasing from Myanmar in 2000 for the same reasons. In contrast, during 2001, each exchange purchased private label apparel made in Myanmar. For the most part, the exchanges assume compliance with laws and regulations that address child or forced labor in the countries where their factories are located instead of determining compliance. In 1996, for example, following the much publicized El Monte, California, sweatshop incident, the Navy Exchange notified all of its suppliers by letter that it expected its merchandise to be manufactured without child or forced labor and under safe conditions in the workplace, but it did not attempt to determine whether these suppliers and their overseas factories were willing and able to meet these expectations. The Navy Exchange and Marine Corps Exchange relied solely on their suppliers to address working conditions in the factories. 
Similarly, AAFES’ management officials told us that they assumed that their suppliers were in compliance with applicable laws and regulations by virtue of their having accepted an AAFES purchase order. According to these management officials, when suppliers accept a purchase order, they certify that they are complying with their Retail Business Agreement. This agreement, distributed by letter to all suppliers in 1997, states that by supplying merchandise to AAFES, the supplier guarantees that it—along with its subcontractors—has complied with all labor laws and regulations governing the manufacture, sale, packing, shipment, and delivery of merchandise in the countries where the factories are located. According to AAFES officials, an AAFES contracting officer and a representative of the supplier are to sign the agreement. We reviewed the contracting arrangements between AAFES and nine of its suppliers of private label merchandise. Only four of the nine suppliers had signed the AAFES Business Agreement. AAFES management officials also told us that they rely on the reputation of their suppliers for assurance that overseas factories are in compliance with its business agreements. For example, these officials told us that they use only the overseas suppliers that have existing business relationships with other major U.S. retailers. The officials also stated that since many of these private retailers have developed and are using their own program to address working conditions in their overseas factories, the use of the same suppliers provided some degree of confidence that the suppliers are working within the laws of the host nation. However, some retailers we contacted said their programs addressed factory conditions only for the period that the factories were manufacturing the retailer’s merchandise and that the factories did not have to follow their program when they were manufacturing merchandise for another company. AAFES management officials also told us that they rely on the U.S. Customs Service to catch imported products that are manufactured under abusive working conditions. However, the Customs officials we interviewed told us that their agency encourages companies to be aware of the working conditions in supplier factories to further reduce their risk of becoming engaged in an import transaction involving merchandise produced with forced or indentured child labor. According to the Customs’ officials, the military exchanges—like retailers—are responsible for assuring that their merchandise is not produced with child or forced labor. A single industry standard for adequate working conditions does not exist, and the retailers we contacted did not believe that such a standard was practical because each company must address different needs, depending on the size of its operations, the various locations where its merchandise is produced, and the labor laws that apply in different countries. However, each of the retailers that we contacted had taken three key steps that could serve as a framework for the exchanges in promoting compliance with local labor laws and regulations in overseas factories. They involve (1) developing codes of conduct for overseas suppliers; (2) implementing their codes of conduct by disseminating expectations to their purchasing staff, suppliers, and factory employees; and (3) monitoring to better ensure compliance. The three steps taken by the retailers vary in scope and rigor, and they are evolving. 
We did not independently evaluate the effectiveness of these retailers’ efforts, but the retailers’ representatives told us that although situations could occur in which their codes of conduct are not followed, they believed that these steps provided an important framework for ensuring due diligence and helped to better assure fair and safe working conditions. The government agencies we visited and the International Labor Organization also recognized these three steps as key program elements and expressed a willingness to assist the exchanges in shaping a program to assure that child or forced labor was not used to produce their private label merchandise.

Representatives of the 10 retailers we contacted believed that the three steps they have taken—developing codes of conduct for overseas suppliers; implementing their codes of conduct by disseminating expectations to their purchasing staff, suppliers, and factory employees; and monitoring to better ensure compliance—provide due diligence as well as a mechanism to address and improve working conditions in overseas factories. For example, officials at Levi Strauss & Co. told us that after they refused to do business with a prospective supplier in India because the supplier’s factory had wage violations and health and safety conditions that did not meet Levi Strauss & Co.’s guidelines, the supplier made improvements and requested a reassessment 4 months later. According to Levi Strauss & Co., the reassessment showed that the supplier had corrected wage violations and met health and safety standards. In addition, employee morale had also improved, as indicated by lower turnover, improved product quality, and higher efficiency at the factory.

In 1991, Levi Strauss & Co. became the first multinational company to establish a code of conduct to convey its policies on working conditions in supplier factories, and subsequently such codes were widely adopted by retailers. According to the Department of Labor, U.S. companies have adopted codes of conduct for a variety of reasons, ranging from a sense of social responsibility to pressure from competitors, labor unions, the media, consumer groups, shareholders, and worker rights advocates. In addition, allegations that a company’s operations exploit children or violate other labor standards put sales—which depend heavily on brand image and consumer goodwill—at risk and could nullify the hundreds of millions of dollars a company spends on advertising. According to Business for Social Responsibility, a nongovernmental organization that provides assistance for companies developing and implementing corporate codes of conduct, adopting and enforcing a code of conduct can be beneficial for retailers because it can strengthen legal compliance in foreign countries, enhance corporate reputation/brand image, reduce the risk of negative publicity, increase quality and productivity, and improve business relationships. For example, Federated Department Stores, Inc.’s supplier policy states that “when notified by the U.S.
Department of Labor or any state or foreign government, or after determining upon its own inspection that a supplier or its subcontractor has committed a serious violation of law relating to child or forced labor or unsafe working conditions, Federated will immediately suspend all shipments of merchandise from that factory and will discontinue further business with the supplier.” An official from Federated Department Stores, Inc., said that the company would demand that the supplier factory institute the monitoring programs necessary to ensure compliance with its code of conduct prior to the resumption of any business dealings with that supplier. A variety of monitoring organizations, colleges, universities, and nongovernmental organizations have codes of conduct, and codes of conduct have now been widely adopted by the private sector. The International Labor Organization’s Business and Social Initiatives Database includes codes of conduct for about 600 companies. While the military exchanges’ core values oppose the use of child or forced labor to manufacture their merchandise, the military exchanges do not have codes of conduct articulating their views. Examples of Internet Web sites with codes of conduct are included in appendix III. Although retailers in the private sector implement their codes of conduct in various ways, officials of the retailers we contacted told us that they generally train their buying agents and quality assurance employees on their codes of conduct to ensure that staff at all stages in the purchasing process are aware of their company’s code. For example, an official at Levi Strauss & Co. stated that his company continually educates its employees, including merchandisers, contract managers, general managers in source countries, and other personnel at every level of the organization during the year. Officials of the retailers we contacted told us they also have distributed copies of their codes of conduct to their domestic and international suppliers and provided them with training on how to comply with the code. In addition, some retailers required suppliers to post codes of conduct and other sources of labor information in their factories in the workers’ native language. For example, The Walt Disney Company has translated its code of conduct into 50 different languages and requires each of its suppliers to post the codes in factories in the appropriate local language. Retailers such as Liz Claiborne, Inc., and Levi Strauss & Co. also work with local human rights organizations to make sure that workers understand and are familiar with their codes of conduct. Some retailers dedicate staff solely to implementing a code of conduct, while other retailers assign these duties to various departments—such as compliance, quality assurance, legal affairs, purchasing agents, and government affairs—as a collateral responsibility. Executives and officials from the retailers we contacted stated that the successful implementation of a code of conduct requires the involvement of departments throughout the supply chain, both internally and externally (including supplier and subcontractor factories). They also stated that the involvement of senior executives is critical because they provide an institutional emphasis that helps to ensure that the code of conduct is integrated throughout the various internal departments of the company. 
To help ensure that suppliers’ factories are in compliance with their codes of conduct, the retailers we contacted have used a variety of monitoring efforts. Retailer officials told us that the extent of monitoring varies and can involve internal monitoring, in which the company uses its own employees to inspect the factories; external monitoring, in which the company contracts with an outside firm or organization to inspect the factories; or a combination of both. The various forms of monitoring involve the visual inspection of factories for health and safety violations; interviewing management to understand workplace policies; reviewing wage, hour, age, and other records for completeness and accuracy; and interviewing workers to verify workplace policies and practices. The 10 retail companies we contacted did not provide a precise cost for their internal and external monitoring programs, but a representative of Business for Social Responsibility estimated that monitoring costs ranged from $250,000 to $15 million a year. Some retailers suggested that the military exchanges could minimize costs by joining together to conduct monitoring, particularly in situations where the exchanges are purchasing merchandise manufactured at the same factories. Companies that rely on internal monitoring use their own staff to monitor the extent to which supplier factories adhere to company policies and standards. According to an official with the National Retail Federation, the world’s largest retail trade association, retailers generally prefer internal monitoring because it provides them with first-hand knowledge of their overseas facilities. At the same time, representatives of the nongovernmental organizations we visited expressed their opinion that inspections performed by internal staff may not be perceived as sufficiently independent. According to information we obtained from the retailers we contacted, nearly all of them had an internal monitoring program to inspect all or some supplier factories; their internal monitoring staff ranged from 5 to 100 auditors located in domestic and international offices. Some retailers said they perform prescreening audits before entering into a contractual agreement, followed by announced and unannounced inspections at a later time. The frequency of audits performed at supplier factories depends on various factors, such as the rigor and size of the corporation’s monitoring plan, the location of supplier factories, and complaints from workers or nongovernmental organizations. Some retailers—along with colleges, universities, and factories—are also using external monitoring organizations that provide specially trained auditors to verify compliance with workplace codes of conduct. We visited four of these monitoring organizations—Fair Labor Association, Social Accountability International, Worker Rights Consortium, and Worldwide Responsible Apparel Production. More information on these monitoring organizations appears in appendix II. Each organization has different guidelines for its monitoring program, but typically, a program involves (1) a code of conduct that all participating corporations must implement and (2) the inspection of workplaces at supplier factories participating in the program by audit firms accredited by the organization. External monitoring organizations’ activities differ in scope. 
For example, under the Fair Labor Association’s program, companies use external monitors accredited by the Fair Labor Association for periodic inspections of factories. In contrast, in the Worldwide Responsible Apparel Production’s program, individual factories are certified as complying with their program. Although differences in scope exist—and have led to debate on the best approach for a company—corporations that are adopting external monitors believe they are valuable for providing an independent assessment of factory working conditions. Some retailers we contacted offered to share their experiences in developing programs to address working conditions in overseas factories. The Departments of Labor and State, the U.S. Customs Service, and the International Labor Organization prepare reports that address working conditions in overseas factories. These organizations expressed a willingness to assist the military exchanges in shaping a program to assure that child or forced labor does not produce private label exchange merchandise. Furthermore, the International Labor Organization offered to provide advisory services, technical assistance, and training programs to help the military exchanges define and implement best labor practices throughout their supply chain. The military exchanges lag behind leading retailers in the practices they employ to assure that working conditions are not abusive in overseas factories that manufacture their private label merchandise. As a result, the exchanges do not know if workers in these factories are treated humanely and compensated fairly. The exchanges recently became more interested in developing a program to obtain information on worker rights and working conditions in overseas supplier plants, and the House Armed Services Committee Report for the Fiscal Year 2002 National Defense Authorization Act requires them to do so. However, developing a program that is understood throughout the supply chain, lives up to expectations over time, and is cost-effective will be a challenge. Leading retailers have been addressing these challenges for as long as 10 years and have taken three key steps to promote adequate working conditions and compliance with labor laws and regulations in overseas factories—developing codes of conduct, implementing the codes of conduct by the clear dissemination of expectations, and monitoring to ensure that suppliers’ factories comply with their codes of conduct. Drawing on information and guidance from various U.S. government agencies and the International Labor Organization can facilitate the military exchanges’ development of such a program. Information available from these entities could be useful not only in establishing an initial program but also in implementing it over time, and the costs may be minimized by having the military exchanges pursue these efforts jointly. 
As the Secretary of Defense moves to implement the congressionally directed program to assure that private label exchange merchandise is not produced by child or forced labor, we recommend the Under Secretary of Defense (Personnel and Readiness), in conjunction with the Assistant Secretary of Defense (Force Management Policy), require the Army and Air Force Exchange Service, Navy Exchange Service Command, and Marine Corps Community Services to develop their program around the framework outlined in this report, including creating a code of conduct that reflects the values and expectations that the exchanges have of their suppliers; developing an implementation plan for the code of conduct that includes steps to communicate the elements of the code to internal staff, business partners, and factory workers and to train them on these elements; developing a monitoring effort to ensure that the codes of conduct are followed; using government agencies, such as the Departments of State and Labor, retailers, and the International Labor Organization as resources for information and insights that would facilitate structuring their program; establishing ongoing communications with these organizations to help the exchanges stay abreast of information that would facilitate their implementation and monitoring efforts to assure that exchange merchandise is not produced by child or forced labor; and pursuing these efforts jointly where there are opportunities to minimize costs.

In commenting on a draft of this report, the Assistant Secretary of Defense (Force Management Policy) concurred with its conclusions and recommendations. The Assistant Secretary identified planned implementing actions for each recommendation and, where action had not already begun, established July 1, 2002, as the date for those actions to be effective. The Department’s written comments are presented in their entirety in appendix IV.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Army; the Secretary of the Navy; the Secretary of the Air Force; the Commander, Army and Air Force Exchange Service; the Commander, Navy Exchange Service Command; the Commander, Marine Corps Community Services; the Director, Office of Management and Budget; and interested congressional committees and members. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff has any questions concerning this report. Major contributors to this report are listed in appendix V.

To compare military exchanges with the private sector in terms of the methods used to identify working conditions at the overseas factories, we limited our work to the exchanges’ efforts related to private label suppliers and performed work at the military exchanges and leading retail companies. To determine the actions of the exchanges to identify working conditions in the factories of their overseas suppliers, we reviewed the policies and procedures governing the contract files, purchase orders, and contractual agreements at the exchanges’ headquarters offices and interviewed officials responsible for purchasing merchandise sold by the exchanges. For example, we reviewed the contracting arrangements between the Army and Air Force Exchange Service (AAFES) and nine of its suppliers of private label merchandise to determine if AAFES had requested information on working conditions in overseas factories and whether the suppliers had signed the contractual documents.
For historical perspective, we reviewed the results of prior studies and audit reports of the military exchanges. We met with officials and performed work at the headquarters of AAFES in Dallas, Texas; the Navy Exchange Service Command (Navy Exchange) in Virginia Beach, Virginia; and the Marine Corps Community Services (Marine Corps Exchange) in Quantico, Virginia. To determine the actions of the private sector to identify working conditions in the factories of their overseas suppliers, we analyzed 10 leading private sector companies’ efforts to identify working conditions in overseas factories by interviewing the companies’ officials and the documentation they provided. We chose seven of the companies from the National Retail Federation’s list of the 2001 Top 100 Retailers (in terms of sales) in the United States. The retailers and their ranking on the Federation’s list follow: Federated Department Stores, Inc. (15); JCPenney (8); Kohl’s (36); Kmart (5); The Neiman Marcus Group, Inc. (64); Sears, Roebuck and Co. (4); and Wal-Mart (1). The remaining three companies— The Walt Disney Company, Levi Strauss & Co., and Liz Claiborne, Inc.— were chosen on the basis of recommendations from U.S. government agencies, nongovernmental organizations, and industry associations as being among the leaders in efforts to address working conditions in overseas factories. These three companies generally refer to themselves as “manufacturers” or “licensing” organizations, but they also operate retail stores. We interviewed officials and reviewed documents from the Departments of State and Labor, the Office of the United States Trade Representative, and the International Labor Organization to gain a perspective on government and industry efforts to address factory working conditions. We also interviewed officials from industry associations and labor and human rights groups. To identify steps the private sector has taken to promote adequate working conditions at factories that could serve as a framework for the exchanges, we focused on the efforts of the 10 retailers. We documented the programs and program elements (e.g., codes of conduct, plans for implementing codes of conduct throughout the supply chain, and monitoring efforts) used by the 10 retailers that we contacted. We did not independently evaluate the private sector programs to determine the effectiveness of their efforts or to independently verify specific allegations of worker rights abuses. Rather, we relied primarily on discussions with retailers’ officials and the documentation they provided. We met with officials from government agencies and reviewed independent studies such as State and Labor Department and International Labor Organization reports, providing a perspective on government and industrywide efforts to address working conditions in overseas factories. We documented the procedures the exchanges used to purchase merchandise and interviewed headquarters personnel responsible for buying and inspecting merchandise made overseas. We also reviewed the exchanges’ policies, statements of core values, and oversight programs. 
To gain a perspective on the various approaches to address worker rights issues, we interviewed nongovernmental organizations and industry associations, including representatives from the National Labor Committee, National Consumers League, International Labor Rights Fund, Global Exchange, Investor Responsibility Research Center, Business for Social Responsibility, National Retail Federation, and the American Apparel and Footwear Association. In addition, we interviewed officials from four monitoring organizations—the Fair Labor Association; Social Accountability International; Worldwide Responsible Apparel Production; and Worker Rights Consortium—which inspect factories for compliance with codes of conduct governing labor practices and human rights. To collect information on government enforcement actions and funding for programs to address working conditions in overseas factories, we interviewed officials from the Department of State’s Office of International Labor Affairs, the Department of Labor’s Bureau of International Labor Affairs, the U.S. Customs Service’s Fraud Investigations Office, and the Office of the United States Trade Representative. For an international perspective on worldwide efforts, we visited the International Labor Organization’s offices in Washington, D.C., and Geneva, Switzerland. We performed our review from April through November 2001 in accordance with generally accepted government auditing standards. The Customs Service’s Fraud Investigations Office and its 29 attaché offices in 21 countries investigate cases concerning prison, forced, or indentured labor. The Customs officials work with the Department of State, Department of Commerce, and nongovernmental organizations to collect leads for investigations. In some cases, corporations have told Customs about suspicions they have about one of their suppliers and recommended an investigation. In addition, private citizens can report leads they may have concerning a factory. The Forced Child Labor Center was established as a clearinghouse for investigative leads, a liaison for Customs field offices, and a process to improve enforcement coordination and information. Customs also provides a toll-free hotline in the United States (1-800-BE-ALERT) to collect investigative leads on forced labor abuses. Outreach efforts from the Customs Service involve providing seminars around the world for U.S. government agencies, foreign governments, nongovernmental organizations, and corporations concerning forced and indentured labor issues. In December 2000, Customs published a manual entitled Forced Child Labor Advisory, which provides importers, manufacturers, and corporations with information designed to reduce their risk of becoming engaged in a transaction involving imported merchandise produced with forced or indentured child labor. Customs also publishes on its Internet Web site a complete list of outstanding detention orders and findings concerning companies that are suspected of producing merchandise from forced or indentured labor. Customs can issue a detention order if available information reasonably, but not necessarily conclusively, indicates that imported merchandise has been produced with forced or indentured labor; the order may apply to an individual shipment or to the entire output of a type of product from a given firm or facility. 
If, after an investigation, Customs finds probable cause that a class of merchandise is a product of forced or indentured child labor, it can bar all imports of that product from that firm from entering the United States. On June 5, 1998, the Department of the Treasury’s Advisory Committee on International Child Labor was established to provide the Treasury Department and the U.S. Customs Service with recommendations to strengthen the enforcement of laws against forced or indentured child labor, in particular, through voluntary compliance and business outreach. The Advisory Committee was established to support law enforcement initiatives to stop illegal shipments of products made through forced or indentured child labor and to punish violators. The Committee comprises industry representatives and child labor experts from human rights and labor organizations. Customs Service officials told us they have met with leading retailers to provide feedback on their internal monitoring programs to assure that their merchandise is not produced with forced child labor. Customs Service officials expressed a willingness to assist the exchanges in shaping a program to assure that child or forced labor does not produce private label exchange merchandise. The Department of Labor conducts targeted enforcement sweeps in major garment centers in the United States, but it does not have the authority to inspect foreign factories. In August, 1996, the Department of Labor called upon representatives of the apparel industry, labor unions, and nongovernmental organizations to join together as the Apparel Industry Partnership (later becoming the Fair Labor Association) to develop a plan that would assure consumers that apparel imports into the United States are not produced under abusive labor conditions. The Bureau of International Labor Affairs, Department of Labor, has produced seven annual congressionally requested reports on child labor, entitled By the Sweat and Toil of Children, concerning the use of forced labor, codes of conduct, consumer labels, efforts to eliminate child labor, and the economic considerations of child labor. Other relevant reports on worker rights produced by the Bureau include the 2000 Report on Labor Practices in Burma and Symposium on Codes of Conduct and International Labor Standards. Since 1995, the Department of Labor has also contributed $113 million to international child labor activities, including the International Labor Organization’s International Program for the Elimination of Child Labor. In addition, the Department of Labor provided the International Labor Organization with $40 million for both fiscal years 2000 and 2001 for programs in various countries concerning forced labor, freedom of association, collective bargaining, women’s rights, and industrial relations in lesser-developed nations. The Department also provides any company that would like to learn how to implement an effective monitoring program with technical assistance, and Labor officials have expressed a willingness to assist the exchanges in shaping a program to assure that private label exchange merchandise is not produced by child or forced labor. On January 16, 2001, the Department of State’s Anti-Sweatshop Initiative awarded $3.9 million in grants to support efforts to eliminate abusive working conditions and protect the health, safety, and rights of workers overseas. 
The Anti-Sweatshop Initiative is designed to support innovative strategies to combat sweatshop conditions in overseas factories that produce goods for the U.S. market. Five nongovernmental and international organizations, such as the Fair Labor Association, International Labor Rights Fund, Social Accountability International, American Center for International Solidarity, and the International Labor Organization, received over $3 million. In addition, the U.S. Agency for International Development will administer an additional $600,000 for smaller grants in support of promising strategies to eliminate abusive labor conditions worldwide. The Department of State’s Bureau of Democracy, Human Rights, and Labor publishes Country Reports on Human Rights Practices, a legislatively mandated annual report to Congress concerning worker rights issues, including child labor and freedom of association in 194 countries. Retailers and manufacturers stated they have utilized these reports to stay abreast of human and labor rights issues in a particular country and to make factory selections. The Department of State has expressed a willingness to assist the exchanges in shaping a program to assure that child or forced labor does not produce private label exchange merchandise. The Office of the U.S. Trade Representative leads an interagency working group—the Trade Policy Staff Committee—which has the right to initiate worker rights petition cases under the Generalized System of Preferences. The Generalized System of Preferences Program establishes trade preferences to provide duty-free access to the United States for designated products from eligible developing countries worldwide to promote development through trade rather than traditional aid programs. A fundamental criterion for the Generalized System of Preferences is that the beneficiary country has or is taking steps to afford workers’ internationally recognized worker rights, including the right to association; the right to organize and bargain collectively; a prohibition against compulsory labor; a minimum age for the employment of children; and regulations governing minimum wages, hours of work, and occupational safety and health. Under the Generalized System of Preferences, any interested party may petition the committee to review the eligibility status of any country designated for benefits. If a country is selected for review, the committee then conducts its own investigation of labor conditions and decides whether or not the country will continue to receive Generalized System of Preferences benefits. Interested parties may also submit testimony during the review process. In addition, U.S. Trade Representatives can express their concern about worker rights issues in a country to foreign government officials, which may place pressure on supplier factories to resolve labor conditions. (The general authority for duty-free treatment expired on September 30, 2001 . Proposed legislation provides for an extension with retroactive application similar to previous extensions of this authority. Authority for sub-Saharan African countries continues through September 30, 2008 [19 U.S.C. 2466b]). The International Labor Organization is a United Nations specialized agency that seeks to promote social justice and internationally recognized human and labor rights. It has information on codes of conduct, research programs, and technical assistance to help companies address human rights and labor issues. 
Currently, the International Labor Organization is developing training materials to provide mid-level managers with practical guidance on how to promote each of its four fundamental labor principles both internally and throughout a company’s supply chain. The four fundamental principles are (1) freedom of association and the effective recognition of the right to collective bargaining, (2) the elimination of all forms of forced or compulsory labor, (3) the effective abolition of child labor, and (4) the elimination of discrimination in employment. These principles are contained in the International Labor Organization’s Declaration on Fundamental Principles and Rights at Work and were adopted by the International Labor Conference in 1998. To promote the principles, the U.S. Department of Labor is funding various projects to improve working conditions in the garment and textile industry and to address issues of freedom of association, collective bargaining, and forced labor in the following regions or countries: Bangladesh, Brazil, Cambodia, the Caribbean, Central America, Colombia, East Africa, East Timor, Kenya, India, Indonesia, Jordan, Morocco, Nigeria, Nepal, Vietnam, southern Africa, and Ukraine. For fiscal years 2000 and 2001, these projects received about $40 million in funding.

On January 16, 2001, the International Labor Organization was awarded $496,974 by the Department of State’s Anti-Sweatshop Initiative to research how multinational corporations ensure compliance with their labor principles. Another research project seeks to demonstrate the link between international labor standards and good business performance. A major product of the research will be a publication for company managers that looks at the relationship between International Labor Organization conventions and company competitiveness and that then examines how adhering to specific standards (i.e., health and safety, human resource development, and workplace consultations) can improve corporate performance. The International Labor Organization has also created the Business and Social Initiatives Database on its Web site, which includes extensive information on corporate policies and reports, codes of conduct, accreditation and certification criteria, and labeling programs. For example, the database contains an estimated 600 codes of conduct from corporations, nongovernmental organizations, and international organizations.

From fiscal year 1995 through fiscal year 2001, the Congress appropriated over $113 million for the Department of Labor for international child labor activities, including the International Labor Organization’s International Program on the Elimination of Child Labor. The program has estimated that the United States will pledge $60 million for the 2002-2003 period. The United States is the single largest contributor to the International Program on the Elimination of Child Labor, which has focused on the following four objectives: (1) eliminating child labor in specific hazardous and/or abusive occupations (these targeted projects aim to remove children from work, provide them with educational opportunities, and generate alternative sources of income for their families); (2) bringing more countries that are committed to addressing their child labor problem into the program; (3) documenting the extent and nature of child labor; and (4) raising public awareness and understanding of international child labor issues.
The program has built a network of key partners in 75 member countries (including government agencies, nongovernmental organizations, media, religious institutions, schools, and community leaders) to facilitate policy reform and change social attitudes, leading to the sustainable prevention and abolition of child labor. For fiscal years 2000 through 2003, the United States is funding programs addressing child labor in the following countries or regions: Bangladesh, Brazil, Cambodia, Colombia, Costa Rica, the Dominican Republic, El Salvador, Ghana, Guatemala, Haiti, Honduras, India, Jamaica, Malawi, Mongolia, Nepal, Nicaragua, Nigeria, Pakistan, the Philippines, Romania, South Africa, Tanzania, Thailand, Uganda, Ukraine, Vietnam, Yemen, and Zambia, as well as the regions of Africa, Asia, Central America, Inter-America, and South America.

Business for Social Responsibility, headquartered in San Francisco, California, is a membership organization for companies, including retailers, seeking to sustain their commercial success in ways that demonstrate respect for ethical values, people, communities, and the environment. (Its sister organization, the Business for Social Responsibility Education Fund, is a nonprofit charitable organization serving the broader business community and the general public through research and educational programs.) In 1995, this organization created the Business and Human Rights Program to address the range of human rights issues that its members face in using factories located in developing countries. The Business and Human Rights Program provides a number of services; for example, it offers (1) counsel and information to companies developing corporate human rights policies, including codes of conduct and factory selection guidelines for suppliers; (2) information services on human rights issues directly affecting global business operations, including country-specific and issue-specific materials; (3) a means of monitoring compliance with corporate codes of conduct and local legal requirements, including independent monitoring; (4) a mechanism for groups of companies, including trade associations, to develop collaborative solutions to human rights issues; and (5) the facilitation of dialogue between the business community and other sectors, including the government, media, and human rights organizations.

The Fair Labor Association, a nonprofit organization located in Washington, D.C., offers a program that incorporates both internal and external monitoring. In general, the Association accredits independent monitors, certifies that companies are in compliance with its code of conduct, and serves as a source of information for the public. Companies affiliated with the Association implement an internal monitoring program consistent with the Fair Labor Association’s Principles of Monitoring, covering at least one-half of all their applicable facilities during the first year of their participation and covering all of their facilities during the second year. In addition, participating companies commit to using independent external monitors accredited by the Fair Labor Association to conduct periodic inspections of at least 30 percent of the company’s applicable facilities during its initial 2- to 3-year participation period. On January 16, 2001, the Fair Labor Association was awarded $750,000 by the Department of State’s Anti-Sweatshop Initiative to enable the organization to recruit, accredit, and maintain a diverse roster of external monitors around the world.
The Fair Labor Association’s participating companies include the following: Adidas-Salomon AG; Nike, Inc.; Reebok International Ltd.; Levi Strauss & Co.; Liz Claiborne, Inc.; Patagonia; GEAR for Sports; Eddie Bauer; Josten’s Inc.; Joy Athletic; Charles River Apparel; Phillips-Van Heusen Corporation; and Polo Ralph Lauren Corporation.

Global Exchange, headquartered in San Francisco, California, is a nonprofit research, education, and action center dedicated to increasing global awareness among the U.S. public while building international partnerships around the world. Global Exchange has filed and supported class-action lawsuits against 26 retailers and manufacturers concerning alleged sweatshop abuse in Saipan’s apparel factories. As of September 2001, 19 of those corporations had settled for $8.75 million and agreed to adopt a code of conduct and a monitoring program in the Saipan factories that produce their merchandise.

The International Labor Rights Fund is a nonprofit action and advocacy organization located in Washington, D.C. It pursues legal and administrative actions on behalf of working people, creates innovative programs and enforcement mechanisms to protect workers’ rights, and advocates for better protections for workers through its publications; testimony at national and international hearings; and speeches to academic, religious, and human rights groups. The Fund is currently participating in various lawsuits against multinational corporations involving labor rights in Burma, Colombia, Guatemala, and Indonesia. In 1996, the International Labor Rights Fund and Business for Social Responsibility were key facilitators in establishing a monitoring program for a Liz Claiborne, Inc., supplier factory in Guatemala. The Guatemalan nongovernmental monitoring organization Coverco was founded as a result of this process and has since published two public reports on the results of its meetings with factory management and factory workers. Officials at Liz Claiborne, Inc., stated that the monitoring initiative has been very effective in detecting and correcting problems, helpful in offering ideas for best practices, and a source of enhanced credibility for the company’s monitoring efforts. In 2001, the International Labor Rights Fund was awarded an Anti-Sweatshop Initiative grant from the Department of State in the amount of $152,880. The Fund plans to undertake a project to work with labor rights organizations in Africa, Asia, and Latin America to build a global campaign for national and international protections for female workers. The Fund will conduct worker surveys and interviews in Africa and the Caribbean to determine the extent of the problem. In addition, the Fund and its nongovernmental organization partners will develop an educational video to help alert women workers in these countries about the problem of sexual harassment.

The Investor Responsibility Research Center, located in Washington, D.C., is a research and consulting organization that performs independent research on corporate governance and corporate responsibility issues. The Center contributed to the University Initiative Final Report, which collected information on working conditions in university-licensed apparel factories in China, El Salvador, Mexico, Pakistan, South Korea, Thailand, and the United States.
The report describes steps the universities can take to address poor labor conditions in licensee factories, as well as ongoing efforts by government and nongovernmental organizations to improve working conditions in the apparel industry. The report is based on factory visits and interviews with nongovernmental organizations, labor union officials, licensees, factory owners and managers, and government officials.

The National Consumers League is a nonprofit organization located in Washington, D.C. Its mission is to identify, protect, represent, and advance the economic and social interests of consumers and workers. Created in 1899, the National Consumers League is the nation’s oldest consumer organization. The League worked for the national minimum wage provisions in the Fair Labor Standards Act (passed in 1938) and helped organize the Child Labor Coalition, which is committed to ending child labor exploitation in the United States and abroad. The Child Labor Coalition comprises more than 60 organizations representing educators, health groups, religious and women’s groups, human rights groups, consumer groups, labor unions, and child labor advocates. The Coalition works to end child labor exploitation in the United States and abroad and to protect the health, education, and safety of working minors.

The National Labor Committee is a nonprofit human rights organization located in New York City. Its mission is to educate and actively engage the U.S. public on human and labor rights abuses by corporations. Through education and activism, the committee aims to end labor and human rights violations. The committee has led “Corporate Accountability Campaigns” against major retailers and manufacturers to improve factory conditions. In El Salvador, the National Labor Committee has facilitated an independent monitoring program among (1) The GAP, the retailer; (2) Jesuit University in San Salvador, the human rights office of the Catholic Archdiocese; and (3) the Center for Labor Studies, a nongovernmental organization. The committee advocates that corporations should disclose supplier factory locations and hire local religious or human rights organizations to conduct inspections in factories.

Social Accountability International, founded in 1997, is located in New York City. It is a nonprofit monitoring organization dedicated to the development, implementation, and oversight of voluntary social accountability standards in factories around the world. In response to the inconsistencies among workplace codes of conduct, Social Accountability International developed a standard for workplace conditions, named Social Accountability 8000, and a system for independently verifying factories’ compliance. The Social Accountability 8000 standard promotes human rights in the workplace and is based on internationally accepted United Nations and International Labor Organization conventions. Social Accountability 8000 requires individual facilities to be certified by independent, accredited certification firms, with regular follow-up audits. As of November 2001, 82 Social Accountability 8000 certified factories were located in 21 countries throughout Asia, Europe, North America, and South America. U.S. and international companies that have adopted the Social Accountability 8000 standard include Avon, Cutter & Buck, Eileen Fisher, and Toys R Us.
In 2001, Social Accountability International was awarded a $1 million Anti-Sweatshop Initiative grant from the Department of State to improve social auditing through research and collaboration; capacity building; consultation with trade unions, nongovernmental organizations, and small and medium-sized enterprises; and consumer education. These projects will take place in several countries, including Brazil, China, Poland, and Thailand, and the consumer education effort will be focused on the United States.

The Worker Rights Consortium, a nonprofit monitoring organization located in Washington, D.C., provides a factory-based certification program for university licensees. University students, administrators, and labor rights activists created the Worker Rights Consortium to assist in the enforcement of manufacturing codes of conduct adopted by colleges and universities; these codes are designed to ensure that factories producing goods bearing college and university logos respect the basic rights of workers. The Worker Rights Consortium investigates factory conditions and reports its findings to universities and the public. Where violations are uncovered, the Consortium works with colleges and universities, U.S.-based retail corporations, and local worker organizations to correct the problem and improve conditions. It is also working to develop a mechanism to ensure that workers producing college logo goods can bring complaints about code of conduct violations, safely and confidentially, to the attention of local nongovernmental organizations and the Worker Rights Consortium. As of November 2001, 92 colleges and universities had affiliated with the Worker Rights Consortium, adopting and implementing a code of conduct in contracts with licensees.

Worldwide Responsible Apparel Production, a nonprofit monitoring organization located in Washington, D.C., monitors and certifies compliance with socially responsible standards for manufacturing and ensures that sewn products are produced under lawful, humane, and ethical conditions. The monitoring and certification program was created at the request of apparel producers that asked the American Apparel & Footwear Association to address inconsistent company standards and repetitive monitoring. The factory certification program requires a factory to perform a self-assessment, followed by an evaluation by a monitor from the Worldwide Responsible Apparel Production Certification Program. On the basis of this evaluation, the monitor will either recommend that the facility be certified or identify areas where corrective action must be taken before such a recommendation can be made. Following a satisfactory recommendation from the monitor, the Worldwide Responsible Apparel Production Certification Board will review the documentation of compliance and decide whether to certify the facility. The Certification Program was pilot tested in 2000 at apparel manufacturing facilities in Central America, Mexico, and the United States. As of November 2001, 500 factories in 47 countries had registered to become certified.

The American Apparel & Footwear Association, a national trade association located in Washington, D.C., represents roughly 800 U.S. apparel, footwear, and supplier companies whose combined industries account for more than $225 billion in annual U.S. retail sales. The Association was instrumental in creating the Worldwide Responsible Apparel Production monitoring program.
The Association’s Web site states that “members are committed to socially responsible business practices and to assuring that sewn products are produced under lawful, humane, and ethical conditions.” The American Apparel & Footwear Association has also created a Social Responsibility Committee, in which various manufacturers meet to discuss their programs to address worker rights issues.

As the world’s largest retail trade association, the National Retail Federation, located in Washington, D.C., conducts programs and services in research, education, training, information technology, and government affairs to protect and advance the interests of the retail industry. The Federation’s membership includes the leading department, specialty, independent, discount, and mass merchandise stores in the United States and 50 nations around the world. It represents more than 100 state, national, and international trade organizations, which have members in most lines of retailing. The National Retail Federation also includes in its membership key suppliers of goods and services to the retail industry. The Federation’s Web site has a link entitled “Stop Sweatshops,” which provides information on the retail industry’s response to sweatshops, including forms of monitoring and a brief history of U.S. sweatshops. The Federation also has an International Trade Advisory Council, comprising retail and sourcing representatives, which discusses various issues pertaining to international labor laws, international trade, and customs matters in both the legislative and regulatory areas.

For the retailers we visited that have posted their codes of conduct on the Internet, the codes can be found at the Web sites shown in table 4.

In addition to those named above, Nelsie Alcoser, Jimmy Palmer, and Susan Woodward made key contributions to this report.
The military exchanges operate retail stores similar to department stores in more than 1,500 locations worldwide. The exchanges stock merchandise from many sources, including name-brand companies, brokers and importers, and overseas firms. Reports of worker rights abuses, such as child labor and forced overtime, and antilabor practices have led human rights groups and the press to scrutinize working conditions in overseas factories. GAO found that the military exchanges are not as proactive as private sector companies in determining working conditions at the overseas factories that manufacture their private label merchandise. Moreover, the exchanges have not sought to verify that overseas factories comply with labor laws and regulations. A single industrywide standard for working conditions at overseas factories was not considered practical by the 10 retailers GAO contacted. However, these retailers have taken the following three steps to ensure that goods are not produced by child or forced labor: (1) developing workplace codes of conduct that reflect their expectations of suppliers; (2) disseminating information on fair and safe labor conditions and educating their employees, suppliers, and factory workers on them; and (3) using their own employees or contractors to regularly inspect factories to ensure that their codes of conduct are upheld.
The four interagency groups we reviewed possessed varied characteristics related to their purposes and outcomes, leadership structures, agency participation, and funding sources and staffing, as discussed in more detail below.

We reported in 2011 that there were approximately 1.1 million school-age dependents of military parents in the United States. Because of their family situations, military dependent students may face a range of unique challenges, such as frequent moves throughout their school career and the emotional difficulties of having deployed parents. DOD and Education officials have a history of collaborating on education issues for children of military families. They formalized and broadened these efforts with an MOU, which they signed in June 2008. The purpose of the MOU was to establish a framework for collaboration between DOD and Education to address the quality of education and the unique challenges faced by children of military families. The MOU defined, in general terms, the basis on which these departments would work together to strengthen and expand school-based efforts to ease student transitions and help military dependent students develop academic and coping skills during periods of parental deployments. In addition, the MOU required the creation of a working group to ensure that the agencies meet the objectives of the MOU. The DOD and Education MOU Working Group (MOU Working Group) is co-chaired by representatives from DOD’s Defense Education Activity’s Educational Partnership Branch and Education’s Office of Innovation and Improvement, Military Liaison Team. The working group is also composed of representatives from several DOD and Education offices. The MOU Working Group has no separate budget. Working group representatives participate in working group activities as part of fulfilling their respective responsibilities at their home agencies. According to DOD and Education officials, they have made progress on a number of initiatives. For example, the Chief of the Educational Partnerships and Non-DOD School Program for DOD told us in May 2013 that 47 states had signed an interstate compact that allowed flexibility during the transfer of military dependent students across jurisdictions. It also allowed credits and course work to more easily transfer to the students’ new schools.

In December 2012, we reported that about 700,000 inmates are released from federal and state custody each year, and another 9 million are booked into and released from local jails, according to the Bureau of Justice Statistics. Moreover, we reported that these inmates face considerable challenges as they transition into, or reenter, society after incarceration. More than two-thirds of state prisoners are rearrested for a new offense within three years of their release, and about half are reincarcerated. In January 2011, the U.S. Attorney General convened the Reentry Council, a group of 20 federal entities whose mission is to make communities safer, assist those who return from prison and jail in becoming productive citizens, and save taxpayer dollars by lowering the direct and collateral costs of incarceration. The premise of the Reentry Council is that many federal agencies have a major stake in assisting former inmates or inmates preparing for release from federal, state, and local correctional facilities. The U.S. Attorney General chairs the Reentry Council’s annual meeting. Also supporting the council is a staff-level working group that meets monthly.
The Reentry Council has no separate budget, and its representatives participate in the group’s activities as part of fulfilling their responsibilities at their respective agencies. As we found in December 2012, among other accomplishments, the Reentry Council has been focused on reducing the barriers that exist for the reentry population. For example, the Reentry Council has taken several actions to address collateral consequences of criminal convictions—these are the laws and policies that restrict former inmates’ access to things such as employment, welfare benefits, public housing, and student loans for higher education. Such collateral penalties pose substantial barriers to an individual’s social and economic advancement and can impede successful reentry.

As we reported in June 2012, during the 2007-2009 recession, the elevated unemployment rate and declining home prices worsened the financial circumstances for many families, along with their ability to make their mortgage payments. As we and the Department of Housing and Urban Development (HUD) reported, this period coincided with a rapid increase in the percentage of loans in foreclosure and increased demand for rental housing. In 2010, the Domestic Policy Council (DPC) established the Rental Policy Working Group, along with HUD, the U.S. Department of Agriculture (USDA), and the Department of the Treasury (Treasury), to respond to the need for better coordination of federal rental policy. We reported in August 2012 that HUD, Treasury, USDA, the Department of Labor, and the Federal Home Loan Banks administered 45 programs or activities that supported rental housing in fiscal year 2010. DPC leads the Rental Policy Working Group meetings. This group is supported by various subgroup meetings, which are led by their respective lead agencies: USDA, HUD, or Treasury. The Rental Policy Working Group has no separate budget, and group representatives participate in the group’s activities as part of fulfilling their responsibilities at their respective agencies. As we discuss in more detail later, HUD officials told us that, since January 2013, HUD has continued working with USDA and Treasury to implement a set of alignment recommendations that would improve coordinated government-wide oversight of subsidized rental housing properties and reduce the administrative burden on affordable housing owners and managers. For one of those recommendations, the Rental Policy Working Group launched a pilot program in six states to test the feasibility of conducting a single, recurring physical inspection for jointly subsidized rental housing properties that would satisfy all agencies’ inspection requirements. According to the Rental Policy Working Group, this pilot program has avoided 120 duplicative inspections across the six states that participated in a second round of this pilot program in 2013.

The United States Interagency Council on Homelessness (USICH) is led by a chair and a vice chair; these positions rotate among its members at the first meeting of each year. Additionally, an executive director, who is appointed by USICH member agencies and reports directly to USICH’s chairperson, manages USICH’s daily activities. USICH is required by law to meet at least four times per year, although it has met more frequently. Unlike the other interagency groups we reviewed, USICH receives an appropriation from Congress and employs full-time staff. According to HUD, the total number of people identified as experiencing homelessness on a single night decreased by 9.2 percent between 2007 and 2013.
A number of sub-populations have also demonstrated reductions in homelessness. Specifically, HUD reported that, from 2010 through 2013, the number of people experiencing chronic homelessness was reduced by more than 15 percent, and the number of homeless veterans was reduced by about 24 percent during that same period.

GPRAMA is a significant enhancement of GPRA, which was the centerpiece of a statutory framework that Congress put in place during the 1990s to help resolve long-standing management problems in the federal government and provide greater accountability for results. GPRA sought to focus federal agencies on performance by requiring agencies to develop long-term and annual goals—contained in strategic and annual performance plans—and measure and report on progress toward those goals annually. In our past reviews of its implementation, we found that GPRA provided a solid foundation to achieve greater results in the federal government. However, several key governance challenges remained, including addressing crosscutting issues. To help address this and other challenges, GPRAMA revises existing provisions and adds new requirements. Some of the new provisions and requirements that emphasize collaboration include the following:

Cross-agency priority goals: The Office of Management and Budget (OMB) is required to coordinate with agencies to establish federal government priority goals that include outcome-oriented goals covering a limited number of policy areas, as well as goals for management improvements needed across the government. The act also requires that OMB—with agencies—develop annual federal government performance plans to, among other things, define the level of performance to be achieved toward the cross-agency priority goals. GPRAMA also requires that OMB identify the agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities contributing to each crosscutting priority goal.

Agency priority goals: Certain agencies are required to develop a limited number of agency priority goals every two years. Both the agencies required to develop these goals and the number of goals to be developed are determined by OMB. These goals are to reflect the highest priorities of each selected agency, as identified by the head of the agency, and be informed by the cross-agency priority goals, as well as input from relevant congressional committees. GPRAMA requires agencies to identify organizations, program activities, regulations, policies, and other activities—both internal and external to the agency—that contribute to each of their agency priority goals and include this information in their performance plans and provide it to OMB for publication on Performance.gov. Agencies are also directed to include tax expenditures in their identification of organizations and programs that contribute to their agency priority goals. OMB is required to develop a single, government-wide performance website to communicate government-wide and agency performance information. The website—implemented by OMB as Performance.gov—is required to make available information on agency priority goals and cross-agency priority goals, updated on a quarterly basis; agency strategic plans, annual performance plans, and performance updates; and an inventory of all federal programs. For more information, see GAO, Managing for Results: Leading Practices Should Guide the Continued Development of Performance.gov, GAO-13-517 (Washington, D.C.: June 6, 2013).
For each agency priority goal, agencies must also designate a goal leader, who is responsible for achieving the goal.

Federal program inventory: GPRAMA requires OMB to compile and make publicly available a list of all federal programs and to include the purposes of each program, how it contributes to the agency’s mission and goals, and recent funding information.

Data-driven performance reviews: GPRAMA requires data-driven performance reviews at the federal level with a provision that federal agencies conduct quarterly performance reviews on progress toward their agency priority goals. Specifically, agencies are required to assess how relevant programs and activities contribute to achieving agency priority goals; categorize goals by their risk of not being achieved; and, for those at risk, identify strategies to improve performance. GPRAMA also specifies that the reviews must occur on at least a quarterly basis and involve key leadership and other relevant parties both within and outside the agency.

Strategic reviews: OMB’s 2013 guidance directs agencies to conduct annual strategic reviews of progress toward strategic objectives to inform their decision making, beginning in 2014. Agency leaders are responsible for assessing progress on each strategic objective established in the agency strategic plan, including mission objectives as well as management or crosscutting objectives. Among other things, the reviews are intended to strengthen collaboration on crosscutting issues by identifying and addressing crosscutting challenges or fragmentation.

Key Considerations for Implementing Interagency Collaborative Mechanisms: Outcomes. Key question: Have short-term and long-term outcomes been clearly defined? Approaches: started the group with the most directly affected participants and gradually broadened to others; conducted early outreach to participants and stakeholders to identify shared interests; held early in-person meetings to build relationships and trust; identified early wins for the group to accomplish; developed goals that represented the collective interests of participants; developed a plan to communicate outcomes and track progress; and revisited outcomes and refreshed the interagency group.

Establishing shared outcomes and goals that resonate with, and are agreed upon by, all participants is essential to achieving outcomes in interagency groups, but it can also be challenging. Participants each bring different views, organizational cultures, missions, and ways of operating. They may even disagree on the nature of the problem or issue being addressed. We identified a number of challenges in our prior work that interagency groups face when attempting to develop shared goals, including building a coalition of key federal participants, agreeing on the nature of a crosscutting issue, establishing mutually agreed-upon outcomes or objectives, and incorporating outcomes into strategic plans or implementation plans, among others. Furthermore, agency officials involved in several of the interagency groups we reviewed cautioned that if agencies do not have a vested interest in the outcomes, and if outcomes are not aligned with agency objectives, participant agencies would not invest their limited time and resources. They told us that the process of developing shared or group outcomes takes time and requires building relationships and creating trust. The following approaches were used by agency officials to avoid or address these challenges.

We found that three of the four interagency groups we reviewed were started with a smaller group of key participants.
Several expert practitioners we spoke with emphasized the importance of ensuring initial participation from agencies that have significant responsibility or interest in a crosscutting issue area. Officials reported that these early interactions helped to establish initial momentum and a vision for subsequent collaborative efforts. Over time, this smaller group of participants added agencies that had a more targeted commitment to the group’s activities and outcomes. For example, prior to the formation of the Reentry Council, a core group of agencies with considerable involvement in reentry issues and programs met to discuss their common interests and the possibility of further coordinating their efforts. According to one official we spoke with, these agencies included the Departments of Labor, Justice, Veterans Affairs, Education, Housing and Urban Development, and Health and Human Services. Following a number of early interactions and meetings between officials from these agencies, interest grew in a more formal and coordinated approach to advance effective reentry policies. Subsequently, the U.S. Attorney General convened the Reentry Council in January 2011. Over time, participation has more than doubled to include 20 federal agencies.

Agency officials in all four interagency groups we reviewed and a number of expert practitioners emphasized the importance of reaching out to potential participants and identifying shared interests. While the interagency groups we reviewed benefited from starting with a smaller group of participants, our past work found that if collaborative efforts do not consider the input of all relevant stakeholders, important opportunities for achieving outcomes may be missed. Officials reported that shared interests are the driving force for collaborative efforts and that collecting early input from participants was necessary to determine whether interagency collaboration would be mutually beneficial. In some cases, agency officials agreed on the nature of an issue. However, in other cases, officials held conflicting perspectives. To overcome conflicting perspectives, participants of interagency groups conducted outreach to stakeholders to build reasonable agreement. In one instance, USICH conducted extensive outreach to participants and stakeholders prior to developing shared interagency outcomes and a national strategic plan in 2010. Specifically, USICH’s outreach activities included feedback collected from workgroups composed of federal officials, expert practitioner panels, input from more than 750 leaders at regional stakeholder forums, focus groups, congressional staff, consumer advisory boards, and written comments from thousands of community experts and individuals. According to documents that outlined the process for gathering stakeholder input and interviews with officials who participate in USICH, this input helped to inform the plan’s priorities and strategies. In addition, participants reported that it was essential to develop a practical and evidence-based plan with on-the-ground solutions that have widespread support. Furthermore, agency officials from HUD, the Department of Veterans Affairs (VA), the Department of Health and Human Services (HHS), and the Department of Labor (Labor) each reported that they are committed to the national strategic plan on homelessness and believe it reflects their own agency’s objectives and interests.
Three of the interagency groups we examined and both expert practitioner panels stressed the importance of holding in-person meetings during the early stages of an interagency group. They each noted that personal interactions contributed to relationship-building, which formed the foundation for all subsequent activities and helped to break down silos. These meetings also enabled officials to learn about individual perspectives and aided in the transfer of knowledge between participating agencies. In addition, officials reported that in-person interactions helped build trust and strengthen professional networks. In our past work, we found that trust is an essential element of collaborative relationships. We also previously found that positive working relationships between participants from different agencies bridge organizational cultures. These relationships build trust and foster communication, which facilitates collaboration. Other officials we spoke with emphasized the importance of building trust on an individual basis with officials from participating agencies with related policy and program responsibilities.

The purpose and activities for these in-person meetings varied and included planning, negotiating agreements, and information sharing, among others. For example, participants of the Rental Policy Working Group said that when they began working together, they spent several months building relationships and understanding each agency’s rental housing programs, policies, and efforts. One expert practitioner told us that in-person meetings were essential for the Southeast Environmental Partnership for Planning and Sustainability to negotiate an agreement on environmentally acceptable procedures for controlled burns. Controlled burns, sometimes called prescribed burns, are fires set under controlled conditions. Initially, partnership participants had very different views of controlled burns as an environmental activity. Officials from the Environmental Protection Agency focused on controlled burns as a contributor to air pollution, whereas officials from other federal and state agencies that conduct controlled burns, such as the Departments of the Interior, Agriculture, and Defense, viewed them as an important activity for sustaining and managing land. Over time, officials from these agencies began to gain a better understanding of each other’s perspectives by meeting face-to-face. This interaction built trust and allowed them to reach common ground. Ultimately, the officials who conducted controlled burns for preservation of landscape ecologies adopted methods to minimize the environmental effects of these burns. Meanwhile, officials responsible for regulating air pollution gained a better appreciation for the value of fire in ecological restoration and preservation.

A number of agency officials and expert practitioners recommended that newly formed interagency groups identify and pursue “early wins” as an approach to build momentum and develop positive working relationships between group participants. According to officials, “early wins” should be practical and achievable projects that can be completed in the short term. We were told that early wins allowed officials to establish relationships with their counterparts in other agencies and enabled teams to practice working together.
This approach is consistent with our prior work that identified key practices from select efficiency initiatives, which highlighted the importance of identifying easily accomplished initiatives that can generate immediate returns to gain momentum for efficiency improvements. Early wins had the secondary effect of demonstrating the benefits of collaboration. Officials from the groups reported that achieving early wins allowed participants to build upon recent experiences, working relationships, improved knowledge of related programs, and team structures that had been established to coordinate group activities.

Participants of the Reentry Council’s staff-level working group initially employed an approach to identify “low hanging fruit” and intentionally sought early successes to build support and momentum. According to these officials, these early wins kept participants engaged and involved. For example, after forming the Reentry Council, participants agreed to participate in a “myth busting” campaign to address common misconceptions and dispel myths associated with the reentry population. According to Department of Justice (DOJ) officials, the “myth busting” campaign was implemented within existing authorities and received widespread support among participant agencies. As part of the campaign, the Council and its subcommittees developed short one- or two-page white papers that clarified government policies, rules, and regulations related to formerly incarcerated individuals and distributed them to stakeholders at the federal, state, and local levels. In one instance, a myth buster addressed the misconception that housing assistance from public housing authorities (PHAs) is generally not allowable for formerly incarcerated individuals who qualify under federal guidelines (see figure 2). According to DOJ officials, a number of local housing authorities—such as New York City and New Orleans, Louisiana—have since reconsidered admissions policies for formerly incarcerated individuals. In another example of a quick win, Reentry Council member agencies developed new policies that enhanced their ability to meet the needs of the reentry population. Specifically, DOJ officials told us that VA had previously not been permitted to conduct outreach to incarcerated veterans until six months prior to their release. According to Reentry Council documents, VA revised its administrative policy that limited prison outreach. According to these documents, the revised policy allows for assessment and release planning with incarcerated veterans earlier than six months before release, thus enhancing the odds of successful reentry to society.

Agency officials in two of the groups we reviewed described a process for developing goals that represented the collective interests of participants and articulating goals at a high enough level that participants could reach agreement, but with enough specificity that participants felt they had a stake in the group’s goals. For example, an official from the Reentry Council’s staff-level working group told us that they developed six goals that were intentionally crafted at a high level to attract widespread support from participating agencies. Although broad, the goals were also focused on important issue areas and challenges that participating agencies expressed interest in addressing.
The group’s goals included the following: promoting statutory, policy, and practice changes to reduce crime and identifying research- and evidence-based practices; identifying opportunities and barriers to improve outcomes; improving the well-being of formerly incarcerated people; supporting initiatives in the areas of education, employment, health, housing, faith-based reentry services, drug treatment, and family and community well-being; leveraging resources across agencies; and coordinating messaging and communication about prisoner reentry. Agency officials we spoke with said that these goals had not changed since they were adopted in 2011 and are likely to remain relevant into the future.

To represent the collective interests of its participants, USICH has a policy to reach agreement among its members to ensure that all views are heard. As noted above, USICH is composed of the heads (or their designees) of 19 federal agencies. All 19 agencies have equal votes in any decisions brought before the group. USICH worked through its Council Policy Group to develop strategic interagency opportunities, build consensus, and lay the groundwork for the decisions brought before the leadership. We observed this process take place in June 2012 when USICH was considering revisions to objectives in its strategic plan.

All of the interagency groups we examined had developed formal plans or strategies that included outcomes, objectives, and descriptions of the group. We have previously reported on the importance of reinforcing agency accountability for collaborative efforts through agency plans and reports. Our prior work found that agencies that articulate their agreements in formal documents can strengthen their commitment to working collaboratively. The DOD and Education MOU Working Group developed a strategic plan to track its progress toward objectives, actions, and measurable outcomes that fulfilled the intent of their interagency agreement. Specifically, the strategic plan was aligned to focus on areas identified in the MOU, including expanding the quality of educational opportunities for military-dependent students, overcoming challenges military-dependent students face due to transitions or deployments, collaborative use of data, and increasing awareness of relevant education-related issues. The working group’s strategic plan describes the areas of mutual interest and outlines specific objectives within these areas of interest that promote greater collaboration and improve the education of children of military families. For example, one objective calls for increasing awareness of education-related issues for military-dependent children. The strategic plan provides related action items, such as development of a joint strategic communication plan, and subtasks with measurable outcomes, target audiences, and individual agency leads to promote accountability. DOD and Education officials told us that the strategic plan helped them examine and prioritize their areas of collaboration to plan for future efforts and reflect on the extent to which they are meeting the original intent of the MOU.

Several expert practitioners emphasized that interagency groups should periodically revisit their outcomes and ensure that their work is aligned with current needs. In past work, we have discussed the importance of sustainability of group leadership. However, several expert practitioners noted that the group’s duration should be dictated by the nature of the outcome.
In fact, the expert practitioners added that, to stay productive, many groups need to refresh their focus. One expert practitioner noted that if a group is not able to agree on a clear outcome, it may decide to cease operating entirely. In some cases, interagency groups achieve their outcomes and can cease to meet or change focus. In other cases, expert practitioners told us that the focus of some groups changed over time and needed to be refreshed for the group to continue.

In the instance of the MOU Working Group, in May 2010—which was two years after the working group was formed—the President announced that an Interagency Policy Committee on Education would develop a new study directive to strengthen military families. This directive included outcomes to ensure excellence in military children’s education and their development, including improving the quality of the educational experience, reducing negative impacts of frequent relocations and absences, and encouraging the healthy development of military children. These outcomes were directly related to the work of the DOD and Education MOU Working Group, which focused on improving educational outcomes for children from military families, according to DOD and Education officials. According to senior Education officials, the directive led Education to place an even greater priority on its collaborative efforts with DOD and built upon the MOU Working Group’s strategic plan and related initiatives. The study directive provided another framework under which DOD and Education have worked together to improve the quality of education for military-dependent children. DOD officials told us that, over the past two years, they have refocused on a number of new goals and emerging issues of importance. In one instance, the department has moved to focus on charter schools, with an emphasis on those located on military installations and those with high concentrations of military-dependent students.

Key Considerations for Implementing Interagency Collaborative Mechanisms: Accountability. Key questions: Is there a way to track and monitor progress? Do participating agencies have collaboration-related competencies or performance standards against which individual performance can be evaluated? Approaches: developed performance measures and tied them to shared outcomes; identified and shared relevant agency performance data; developed methods to report on the group’s progress that are open and transparent; and incorporated interagency group activities into individual performance expectations.

Agencies in all of the groups we reviewed developed performance measures—or other approaches to track contributions—within their own agencies that related to the outcomes of the interagency group. However, officials explained that within interagency efforts, the commitment of individual agencies varied. This difference in commitment is reflected in the prominence of interagency group activities in the agency’s performance measures. For example, some goals of the national strategic plan on homelessness are reflected in the agency priority goals of HUD and VA. HUD and VA also have some shared performance measures. For example, the agencies have a goal related to the percentage of chronically homeless veterans who are served by the HUD-Veterans Affairs Supportive Housing program.
This is a shared program between HUD and VA that combines HUD-provided housing choice voucher rental assistance for veterans experiencing homelessness with case management and clinical services provided by VA. Through shared performance management, coordinated technical assistance, and communication to the field, the percentage of chronically homeless veterans served by this program increased from 49 percent in fiscal year 2009 to more than 65 percent in fiscal year 2013, according to USICH. In another instance, two agencies participating in the Reentry Council (DOJ and Labor) have developed internal agency outcomes and performance measures to track progress toward their shared outcomes. For example, DOJ has established an outcome to increase the number of inmate participants in its Residential Drug Abuse Program by 6 percent over four years, from 18,500 to 19,920. In contrast, while HUD officials participate in the Reentry Council, the Reentry Council’s outcomes are not explicitly included in the agency’s strategic plans. Nevertheless, agency officials told us that their participation in the Reentry Council aligned with HUD’s Strategic Plan, Goal 3, which focuses on using housing as a platform for improving quality of life.

In our June 2013 report on the initial implementation of GPRAMA, we found that performance information can be used across a range of management functions to improve results, from setting program priorities and allocating resources to taking corrective action to solve program problems. Moreover, we found that, if agencies do not use performance measures and performance information to track progress toward outcomes, they may be at risk of failing to achieve their outcomes. We have found that this also holds true for efforts between federal agencies.

To help participants develop performance measures, one interagency group created a number of guides and toolkits to assist federal officials and stakeholders in measuring the performance of their efforts. Specifically, HUD developed and shared resources on performance measurement related to homelessness with participants from USICH. These resources both provided training on developing performance measures and identified available HUD data sources, which agencies could use when creating performance measures. USICH and HUD officials told us that within interagency groups, it was necessary to agree on common data sources that would be used to track performance. For example, USICH and its participants have agreed to use HUD’s point-in-time count, which provides a snapshot of the number of people experiencing homelessness on a given night in America. According to the point-in-time counts, the total number of people identified as experiencing homelessness on a single night declined by 9.2 percent, or from about 672,000 in 2007 to about 610,000 in 2013. Reaching agreement on a common data source for tracking homelessness was a challenging process because it required agencies to agree on common definitions of homelessness and on the methodology for collecting the data, which had been a long-standing problem. Identifying and collecting timely data is necessary to track and review performance over time. In our prior work on the use of data-driven performance reviews, we found that agencies should look for opportunities to leverage data produced by other agency components or outside entities. We also found that agreeing on common definitions is one way to bridge organizational cultures.
USICH also leveraged additional useful data sources from participating agencies. In one instance, the development and implementation of HUD’s Homeless Management Information Systems provided counts of the total number of people who use emergency shelters or transitional housing programs during the course of a year. According to documents from USICH, these data allow USICH and its stakeholders to track lengths of stay in shelters, service use patterns, and flow in and out of the system. Based on these data, USICH learned that the annual estimate of individuals using shelter decreased by about 5 percent between 2007 and 2013, from 213,000 to 203,000 people, whereas the number of persons in families using shelters increased by about 7 percent, from about 178,000 to about 192,000 people, during that period.

Officials from all four groups we reviewed and expert practitioners stressed the importance of developing processes to regularly report the progress of the group. Performance reporting happened in a variety of ways, including posting information on websites, public reporting in meetings, and developing written reports for Congress. The groups we reviewed had different levels of transparency, and their reporting approaches mirrored those levels. For example, USICH regularly posts progress reports on its website and provides an annual report to Congress. In the case of the Reentry Council, agencies provided written updates on their progress on specific initiatives, which were circulated among the group participants. Highlights from these written updates were also circulated through press releases. Furthermore, the Reentry Council posted information to a website, which described issues the group is addressing, summarized accomplishments to date, laid out priorities moving forward, and pointed to key resources and links. Both the MOU Working Group and the Rental Policy Working Group circulated updates through measures such as written reports to the White House and updates at group meetings. At various times, the Rental Policy Working Group shared progress through the Office of Urban Affairs’ blog, which is posted on the website for the Executive Office of the President. We have previously reported on the importance of publicly reporting performance information as a tool for accountability.

Senior agency officials from three of the groups we examined told us that the activities and outcomes of the interagency group they participated in are reflected in their individual performance contracts. In some cases, individuals told us that the interagency group was explicitly named in the performance contract. In other cases, individuals told us that the work of the interagency group was aligned with the policy areas named in their performance contract. For example, staff from participating agencies explicitly included collaboration with the Reentry Council within their performance expectations and rating standards. As such, a satisfactory performance rating for these individuals is contingent upon collaboration with the group. The agency’s performance management system also tracked individual contributions toward the Reentry Council, as well as participation in group meetings and activities. Explicitly aligning daily activities with broader results helps individuals see the connection between their work and organizational goals and encourages them to focus on achieving those goals, as we found in a 2003 report.
Key Considerations for Implementing Interagency Collaborative Mechanisms: Leadership
Key questions: Has a lead agency or individual been identified? If leadership will be shared between one or more agencies, have roles and responsibilities been clearly identified and agreed upon?
Implementation approaches: Designated group leaders exhibited collaboration competencies. Ensured participation from high-level leaders in regular, in-person group meetings and activities. Rotated key tasks and responsibilities when leadership of the group was shared. Established clear and inclusive procedures for leading the group during initial meetings. Distributed leadership responsibility for group activities among participants.
Expert practitioners and agency officials we interviewed told us that the designated leaders of interagency groups that they had been involved with exhibited the following five competencies: worked well with people, communicated openly with a range of stakeholders, built and maintained relationships, understood other points of view, and set a vision for the group. These competencies are also discussed by scholars in the literature we reviewed. Worked well with people: A few expert practitioners told us that effective interagency leaders possessed "soft skills," "people skills," or "interpersonal skills." One expert practitioner told us that effective interagency group leaders did not have to be extroverted, but they had to be able to work well with people. Another expert practitioner told us that the leader needed to talk in person with stakeholders rather than managing or interacting remotely. This competency is consistent with how some scholars have discussed the importance of collaborative leaders possessing interpersonal skills. For example, one scholar noted that collaborative leaders must be attuned to the needs and motivations of others to lead collaborative efforts. Communicated openly with a range of stakeholders: A few expert practitioners told us that effective interagency group leaders had open communications with a range of stakeholders. One expert practitioner told us that it was important that the interagency group he led had an open and honest discussion with key stakeholders (in this case state and local officials) before attempting to resolve an issue to recognize those officials' concerns. Another expert practitioner told us the leader needed to be able to communicate openly with the group's members about how they would benefit from the collaboration, and why they were important to the collaboration. Some scholars have also noted that it is important for collaborative leaders to possess good communication skills. According to one scholar, research has shown that, if communications are open and free, then stakeholders would feel more comfortable in establishing longer-term working relationships and collaboration on other projects. Built and maintained relationships: A few expert practitioners and agency officials stressed that the interagency group leader's ability to build and maintain relationships was critical to interagency collaboration. According to one expert practitioner, it was important to form personal and trusting relationships so that the group had a basis for open and candid communication when difficulties arose. Another expert practitioner said that building relationships helped individuals know who to contact at other organizations involved in the collaboration. Some scholars have noted that it is important for collaborative leaders to build effective working relationships.
One study noted that it is the job of the leader to help increase trust by building working relationships and creating incentives for those in the collaboration. Understood other viewpoints: A few expert practitioners told us that effective interagency leaders had the ability to draw out, understand, and value other viewpoints. According to one expert practitioner, the best interagency leaders had the ability to understand others, especially those with other viewpoints. The expert practitioner added that this skill helps stakeholders build trust. This competency is consistent with how some scholars have discussed the need for collaborative leaders to elicit other points of view. According to one study, scholarly research has shown that leaders use this approach to repeatedly elicit ideas and build integrative solutions, to break down cultural barriers, to de-escalate conflict, and to provide feedback to the group that heightens its performance. Set a vision for the group: An expert practitioner and agency officials told us that it was important for interagency leaders to set the strategic vision for the group. According to agency officials, interagency group leaders needed to have the subject matter expertise to understand what the interagency group could accomplish, while also working with the group's participants to collaboratively determine the vision. Some scholars have reported on the need for collaborative leaders to build a common vision. For example, one study noted that the successful collaborator is a skilled visionary who has the ability to see the big picture, and who thinks strategically, developing goals and the structures, inputs, and actions needed to achieve them. The five competencies above are broadly consistent with the Office of Personnel Management's (OPM) executive core qualifications (ECQs). OPM identified five ECQs for federal Senior Executive Service officials that assess executive experience and potential, and measure whether an individual has the broad executive skills needed to succeed in a variety of Senior Executive Service positions. OPM defines one of the ECQs as the ability to build coalitions internally and with other federal agencies, sectors, and levels of government to achieve common outcomes. The competencies included in the ECQ for building coalitions, and their definitions, are listed in table 1 below. In a January 2012 memorandum, OPM included partnering, political savvy, and influencing/negotiating in its list of the core or supplemental competencies for two leadership positions required under the GPRA Modernization Act of 2010. Those leadership positions are the Agency Priority Goal Leaders and the agency Performance Improvement Officers. Moreover, OPM's memorandum includes other supplemental competencies related to collaboration—such as interpersonal skills for, among other things, developing and maintaining effective relationships with others. Rotational assignment programs are work assignments at a different agency from the one in which the participant is normally employed, with an explicit professional development purpose; such assignments can help participants improve communication and teamwork and establish networks with their civilian counterparts. High-level leaders, such as Cabinet Secretaries (or agency heads), provided attention and support to each interagency group we reviewed by frequently attending meetings in person, participating in a range of the groups' activities, or both.
Moreover, membership of three of the four interagency groups we reviewed included high-level officials from the Executive Office of the President, signaling that there was Presidential support for implementing the group's initiatives. In our September 2012 report on interagency collaborative mechanisms, we found that the influence of leadership can be strengthened by a direct relationship with the President, Congress, other high-level officials, or all of these officials. Officials told us that their interagency groups benefitted from involving high-level leaders because those leaders helped recruit key participants and made policy-related decisions requiring a high level of authority. In one instance, officials told us that individuals were more likely to attend meetings because of the opportunity to interact with or brief high-level officials. Cabinet Secretaries (or agency heads) frequently attended the meetings of two of the four interagency groups we reviewed in person. For instance, Cabinet Secretaries frequently attended USICH meetings in person, and USICH's leadership has rotated among the Secretaries of HUD, HHS, VA, and Labor since its formation. An HHS official said that, beginning in 2009, USICH leadership set a goal to have at least three Cabinet Secretaries attend each meeting. Officials from USICH told us in May 2013 that USICH had consistently met this goal. A few expert practitioners and agency officials told us that high-level leaders publicly reported progress of group initiatives to their peers at group meetings. This practice is consistent with how we have discussed the use of data-driven performance reviews as a leadership strategy to drive performance improvement of federal agencies. In our February 2013 report on data-driven performance reviews, we found that attendance of high-level leaders fosters ownership and helps ensure participants take the reviews seriously and that decisions and commitments can be made. During a recent Reentry Council meeting, chaired by the Attorney General, a representative from each participant agency reported on his or her agency's commitment to the Reentry Council's efforts and progress made supporting innovative reentry policies or programs. According to agency officials, agency leaders were aware of their interagency commitments and responsibility for regularly briefing their counterparts at other federal agencies. The agency officials noted that this regular peer reporting created a strong incentive for agency leaders to keep informed of the progress made throughout the year, and to set expectations with group participants and other staff. Furthermore, a few expert practitioners and agency officials told us that this senior-level involvement created a cascading level of accountability and commitment within individual agencies because agency staff reported to their senior leadership on their progress towards interagency outcomes. High-level leaders participated in a range of activities for each interagency group we reviewed, such as speaking publicly about the group's issues, visiting affected communities, and convening a White House conference with group participants and stakeholders. In addition to the public housing myth buster described earlier in this report, high-level HUD leaders conducted outreach to advance the Reentry Council's agenda.
Specifically, in 2011, the HUD Secretary and Deputy Secretary for Public and Indian Housing sent a letter to executive directors of public housing authorities (PHAs) to clarify misconceptions. The letter explained current federal regulations and informed local PHAs that, in many circumstances, formerly incarcerated individuals should not be denied access to federally supported public housing. According to Reentry Council officials, the letter from HUD's senior leaders provided important leadership commitment to the field on an issue that is perceived as a major barrier to reentry. Leadership of two of the groups that we reviewed—the DOD and Education MOU Working Group and USICH—is shared between two or more agencies. We previously found that agencies can convey their support for the collaborative effort by sharing leadership. However, in our prior work, we concluded that some agencies had difficulty implementing shared leadership of an interagency collaboration mechanism because it was unclear how the shared leadership model would work in practice. USICH has employed an approach for implementing its shared leadership model, which many of its participants told us has been beneficial. By law, USICH must elect a chair and vice-chair from among its members and rotate those positions among its members at the first meeting of each year (42 U.S.C. § 11312). In practice, since USICH's government-wide strategic plan was published in 2010, the current vice-chair has always been elected as chair the following year. Moreover, USICH's participants come from 19 departments and agencies, but at the time of this review, the chair and vice-chair have always come from four agencies—HUD, HHS, VA, and Labor—with significant homelessness programs. A HUD official who participates in USICH told us this leadership model provides continuity of federal agencies' outcomes and strategies for reducing homelessness over time, and provides a longer-term perspective on important issues that will affect homelessness outcomes. VA officials told us that knowing in advance which agency will become the next chair enables the chair and co-chair to collaboratively establish short-term outcomes that have the buy-in of USICH's members. Officials told us that both the VA and HUD Secretaries are approaching their current terms as equal co-chairs rather than chair and vice-chair to ensure both agencies' buy-in to the Council's outcomes and actions, given the approaching deadline to complete USICH's outcome to prevent and end veterans' homelessness by 2015. In the case of the DOD and Education MOU Working Group, an official from the agency that hosted the meeting also served as the chair of the meeting. A DOD official told us that they used this approach because it provided a sense of ownership in the group's activities. During initial meetings, participants of two of the four interagency groups we reviewed established procedures for leading the group, such as the frequency of meetings, protocols for communicating across agencies, whether group meetings would have an agenda, and whether stakeholders would take formal notes. We previously found that agencies bring diverse cultures to collaborative efforts, and it is important to address these differences to enable a cohesive working relationship and to create the mutual trust required to sustain the collaborative effort.
In our prior work, we also found that it is important to establish ways to bridge organizational cultures, such as developing common terminology and compatible policies and procedures, and fostering open lines of communication. The MOU Working Group developed a communications protocol to provide a clear understanding of (1) the preferred methods of communicating, and (2) the chain-of-command protocol for communicating within the different levels of organizations within those agencies, including the military services. DOD officials told us it can be difficult for employees at civilian agencies, such as Education, to understand the terminology used by military officials as well as recognize officials' ranks in the different military services. Among other things, the communications protocol defines a request for information and the information that should be included in such a request, including what type of information is needed, how and when it is needed, and the justification for the deadline. The communications protocol specifies that it is meant to guide the working relationships between DOD and Education and to provide a common understanding of the best ways to communicate and collaborate, and that it is not a rigid list of requirements that is appropriate for every situation. In each interagency group we reviewed, leadership responsibility for group activities was distributed among the different participant agencies. Moreover, the individuals with responsibility for these activities were documented in the groups' strategic plans, reports, or action plans. Officials said they distributed responsibility for activities among the group's agencies and officials for various reasons, such as getting stakeholders to buy in to the group's objectives, keeping stakeholders engaged, and taking advantage of the individual expertise within the group. For example, a DOJ official who co-chairs the Reentry Council's staff-level working group said that the group intentionally distributed leadership of the subcommittees, in part, to disperse responsibility more broadly throughout the federal government and to allow for interaction and participation of a greater number of stakeholders. Reentry Council staff-level working group participants from VA, HUD, and HHS agreed that distributing leadership of subcommittees was an effective approach. An HHS official said this approach has allowed the subcommittees to be staffed by a broader group of participants from within the agencies. Moreover, VA officials told us that some agencies were a natural fit to lead certain subcommittees. In one instance, VA officials said it made sense for HHS to lead the subcommittee on health care because, among other things, HHS officials have the technical expertise to fulfill many of the subcommittee's objectives. In addition to distributing leadership responsibility across the participating agencies, high-level agency officials and subject-matter experts from the different member agencies participate in different sub-groups that support USICH and the Reentry Council. Membership of USICH and the Reentry Council is largely composed of cabinet-level officials who meet regularly but infrequently to discuss a range of topics, such as the path of work to be completed or progress made on group initiatives.
In addition to those meetings, the Reentry Council and USICH are supported by a sub-group of high-level agency officials who, among other things, prepare recommendations for consideration at Reentry Council and USICH meetings. Moreover, both groups sometimes convene working groups composed of subject-matter experts to work on specific program-level initiatives and tasks.
Key Considerations for Implementing Interagency Collaborative Mechanisms: Resources
Key questions: How will the collaborative mechanism be funded? If interagency funding is needed, is it permitted? How will the collaborative mechanism be staffed? Have participants developed online tools or other resources that facilitate joint interactions?
Implementation approaches: Created an inventory of resources dedicated towards interagency outcomes. Leveraged related agency resources toward the group's outcomes. Pilot tested new collaborative ideas, programs, or policies before investing resources.
Agency officials from all four of the interagency groups we reviewed, and OMB staff, told us that agencies generally do not receive specific funding for interagency activities. To understand the resources that the interagency group had available, two of the interagency groups developed a detailed inventory of programs and authorities that related to the outcomes of the interagency group. According to officials, the inventories that each of the groups developed were intended to help the group better understand the full range of federal programs and resources devoted to government-wide outcomes or initiatives. For example, officials told us the Rental Policy Working Group developed an inventory of government programs that were related to each of the group's 10 rental alignment proposals. The inventory was based on information from federal, state, and local government officials as well as housing developers and managers and included relevant regulations, statutes, and policies. Additionally, the inventory was used to promote understanding of government-wide rental programs, and according to officials, was useful in making decisions about the coordination of related programs across agency lines and between levels of government. An inventory of relevant resources can also be used to identify the range of federal spending on an issue, which can result in more coordinated spending. In fiscal year 2011, DOJ, Labor, and HHS separately administered reentry grant programs. The Attorney General convened the Reentry Council, in part, to coordinate agencies' reentry efforts to further prevent unnecessary duplication and share promising practices. Participants of the Reentry Council told us they developed an inventory of federal resources that is used to assess where resources are targeted and to enable federal and local stakeholders to leverage these investments. To develop this inventory, participants of the Reentry Council created a spreadsheet that listed relevant funding streams and resources from their agencies that were dedicated to reentry programs. The inventory identified the amount of funding, the intended purpose, and jurisdictions associated with resources. The information from this inventory is available in an online resource with an interactive map of reentry resources across the United States. Officials we spoke with said this type of inventory can also help communicate some of the differences between agency organizational cultures, agencies' capabilities to control spending, and the array of program tools being used to achieve mission objectives.
For example, in the instance of USICH, agency officials reported that their agencies often had different policy and program tools, such as grants, at their disposal. Accordingly, in the early days of the Council, participants from the different agencies needed to understand the different purposes and requirements of the policy and program tools that each agency could bring to the table. In the case of HHS, homeless individuals may be eligible for the Temporary Assistance for Needy Families (TANF) program. But since TANF is administered by the states, HHS cannot require states to use those grant funds for certain purposes. HHS officials told us that HHS's Administration for Children and Families sent out an informational memorandum informing community-based organizations that they can spend TANF grant funds on homeless individuals. Officials from USICH told us that this memo sent a powerful message to the field about the opportunity of TANF agencies to engage in state and local efforts to end homelessness, and strategic steps they can take that are within their authority. HHS officials also noted that, given the nature of the program, they cannot require TANF funds to be dedicated to any specific group, including those experiencing homelessness. According to HHS officials, the nature of TANF funds can sometimes present a challenge to working across agency cultures because partners may expect that HHS can target funds more directly toward homeless individuals than they can. In contrast, the VA directly provides services to homeless individuals through medical centers. Therefore, it has more direct control over the specific homelessness outcomes that USICH is trying to achieve. Officials from USICH told us their role is to facilitate a broad understanding of the policy and program tools that each member agency brings to the table. Our annual reports on fragmentation, overlap, and duplication have highlighted the challenges associated with the lack of a comprehensive list of federal programs and funding information. We have found that a first step in identifying potential fragmentation, overlap, or duplication among federal programs or activities involves creating a comprehensive list of programs along with related funding information. Currently, no comprehensive list exists, nor is there a common definition for what constitutes a federal "program." In our prior work, we found that the lack of a common definition for a program makes it difficult to develop a comprehensive list of all federal programs. The lack of a list, in turn, makes it difficult to determine the scope of the federal government's involvement in particular areas and, therefore, where action is needed to avoid fragmentation, overlap, or duplication. We also found that federal budget information is often unavailable or insufficiently reliable to identify the level of funding provided to programs or activities. For example, agencies could not isolate budgetary information for some programs because the data were aggregated at higher levels. Without knowing the full range of programs involved or the cost of implementing them, gauging the magnitude of the federal commitment to a particular area of activity, or the extent to which associated federal programs are duplicative, is difficult.
To help address these challenges, GPRAMA requires the Director of OMB to compile and make publicly available a comprehensive list of all federal programs and to include the purposes of each program, how it contributes to the agency's mission and goals, and recent funding information. In May 2013, OMB published program inventories developed by 24 agencies. We will report on these inventories later this year. Officials who participate in the Reentry Council told us that they identified the range of resources dedicated to the crosscutting issue, and looked for ways to leverage existing activities, tools, or programs that can benefit the interagency group. By assessing their relative strengths and limitations, collaborating agencies looked for opportunities to leverage each other's resources, thus obtaining additional benefits that would be unavailable if they were working separately. In the case of the Reentry Council, the group used two technological resources to share information. For external information sharing, information about the Council is available on an existing website of the National Reentry Resource Center that is funded in part through DOJ's Second Chance Act grant program. (The Second Chance Act of 2007 authorized grant funding for the establishment of a reentry resource center to, among other things, provide education, training, and technical assistance for States, tribes, territories, local governments, service providers, nonprofit organizations, and corrections institutions and disseminate best practice information to states and other relevant entities. See Pub. L. No. 110-199, § 101(c)(2), 122 Stat. 657, 666-667 (2008) (codified at 42 U.S.C. § 3797w(m)).) According to officials, because that website was already entirely dedicated to reentry issues, it made sense to put the Reentry Council's information on it. For internal information sharing, the group used MAX, an online collaboration tool administered by OMB. Access to documents on MAX is given on a page-by-page basis, which allows interagency groups to control access as needed. Some groups involved with more sensitive policy development have decided to have a closed group where only specific individuals have access. Interagency groups we reviewed also leveraged the expertise of other agency officials to improve their programs and spending. Several agencies—including HHS, DOJ, and Labor—used the Reentry Council to identify individuals from other agencies to provide input into the grants programs that they administer. In one instance, as part of their work on the Reentry Council, Labor officials developed "Face Forward" grants, which were designed to give youth a chance at success by offering support services, training, and skills development that can help them obtain employment and overcome the stigma of a juvenile record. Labor included officials from DOJ who helped Labor officials score the grant applications and decide which grants would receive funding. Labor staff members said that this relationship helped them to spend the funds in a more strategic manner because they had additional information on the topic. To solicit this participation from DOJ staff, Labor staff members identified the specific time commitment and expertise that they needed from the DOJ staff. They provided this list of commitments to a DOJ official who co-chairs the Reentry Council's staff-level working group. The DOJ official was then able to recruit the appropriate officials from DOJ. Given limited resources, interagency groups we reviewed pilot-tested selected ideas, programs, or policies before investing more extensive resources into implementation.
Pilot testing allowed groups time to identify unanticipated consequences and implementation challenges, and to gather information on program effectiveness. In the case of the Rental Policy Working Group, in November 2011, USDA, HUD, and Treasury worked with their housing finance agency counterparts at the state level in Wisconsin, Michigan, Washington, Minnesota, Oregon, and Ohio to eliminate duplicative physical inspections of rental housing subsidized through more than one public funding source. A second round of this initial pilot was conducted in 2013 with the same states participating. The purpose of the second round was to address issues that arose during the first pilot. Specifically, this pilot worked to ensure that (1) all of the inspections could be completed in a timely manner, and (2) all of the inspection reports were shared with all relevant parties in a timely manner. The 2013 pilot achieved both of these goals, and HUD reports that, as a result, the Rental Policy Working Group avoided 120 duplicative inspections across the six states. The Rental Policy Working Group plans to expand this pilot in 2014 by adding an additional 25 states. In another instance, USICH noted that it focuses on implementing strategies it has found to be effective at reducing or ending homelessness. USICH also stated that innovative program models can begin as pilots and use evidence of savings to make the case for sustainability and expansion. Specifically, the Community Support Program for People Experiencing Chronic Homelessness in Massachusetts provides non-clinical support services to adults who are experiencing chronic homelessness so that they can be permanently housed in the community and to prevent avoidable hospitalizations. This program began as a pilot with a cap on enrollment. This approach allowed the partners to launch the program and establish new partnerships, service models, and payment mechanisms. When the pilot program demonstrated results, including savings associated with reductions in hospitalizations, it was sustained and the enrollment cap was lifted. We provided a draft of this report for review and comment to the heads of the nine key agencies that participated in the four interagency groups that we reviewed for this study. These included the Secretaries of the U.S. Departments of Agriculture, Defense, Education, Health and Human Services, Housing and Urban Development, Justice, Labor, Treasury, and Veterans Affairs. We also shared a draft of this report for review and comment with the Director of the Office of Management and Budget (OMB), and the Executive Director of the U.S. Interagency Council on Homelessness (USICH). The Departments of Defense, Education, and Labor, as well as USICH, had no comments on the report. We received technical comments from the Departments of Agriculture, Health and Human Services, Housing and Urban Development, Justice, Treasury, and Veterans Affairs, as well as OMB, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
This report is part of a series of reports under our mandate in GPRAMA to periodically examine how agencies are implementing the law. The objectives of this report are to examine how select interagency groups (such as task forces, working groups, councils, and committees): 1) defined their missions and desired outcomes; 2) measured performance and ensured accountability; 3) established leadership approaches; and 4) used resources, such as funding, staff, and technology. Following issuance of our 2012 report on interagency collaboration, we continued our work to identify the most commonly implemented mechanism for collaboration from the list of mechanisms in appendix III. Accordingly, this report focuses on one of these mechanisms—interagency groups (also referred to as councils, committees, task forces, and working groups)—because they were the most commonly cited interagency mechanism in our sample of GAO reports on collaboration published from January 2005 through February 2013. After narrowing our scope to focus on interagency groups, we conducted an analysis to determine the most common challenges that interagency groups face when collaborating. To identify the most common challenges, we reviewed a sample of GAO reports on collaboration, which were issued from January 2005 through February 2013. We also reviewed relevant recommendations from our prior work directed at interagency groups. We organized the challenges that these reports identified into the key issues from our 2012 report and found that the most common challenges that interagency groups experienced fell under the key features of outcomes, accountability, leadership, and resources. To identify implementation approaches that agencies have used to address or avoid these challenges, we identified a limited number of interagency groups from our prior work that have addressed or avoided one or more of these challenges. To identify interagency groups that have addressed one or more collaboration-related challenges, we examined our sample of reports, including areas that we previously identified as being at risk for fragmentation, overlap, and duplication; high-risk areas; crosscutting federal priority goals under GPRAMA; and prior recommendations, to identify groups that addressed, or partially addressed, these challenges. Based on a review of our prior work, we identified potential interagency groups that exhibited some of the practices to enhance and sustain collaboration. We then narrowed the list of interagency groups to four groups that represented a balanced and diverse set of characteristics, such as the number of participating agencies, duration, creation vehicle (for example, through laws), and groups with both voluntary and mandated participation. Our final selection of interagency groups includes the following: the Department of Defense (DOD) and Department of Education (Education) Memorandum of Understanding (MOU) Working Group; the Federal Interagency Reentry Council (Reentry Council); the Rental Policy Working Group; and the U.S. Interagency Council on Homelessness (USICH). To arrive at a subset of agencies to interview about their participation in the selected groups, we selected the lead agency, entity, or agencies from each group. We also interviewed agencies that we determined to be key contributors. We interviewed officials from the following entities:
Office of Management and Budget; U.S. Department of Agriculture; Department of Defense; Department of Education; Department of Health and Human Services; Department of Housing and Urban Development; Department of Justice; Department of Labor; Department of the Treasury; Department of Veterans Affairs; USICH; and the Domestic Policy Council in the Executive Office of the President. During these interviews, we asked working group members about the implementation approaches that they had employed to establish and enhance outcomes, accountability, leadership, and resources. We did not interview all contributors to all interagency groups. For interagency groups with three or fewer federal agency participants, we interviewed officials from all participant agencies. We also observed a Reentry Council event and a USICH meeting to identify potential implementation approaches. In addition, to identify collaborative leadership competencies, we reviewed relevant academic literature and reports. In addition to the illustrative examples described above, we hosted two expert practitioner panels, in coordination with the Senior Executives Association, to identify and discuss useful practices and lessons for implementing interagency groups. We selected panelists who were recipients of the Presidential Distinguished Rank Award in 2011 or 2012 and who had experience leading or participating in interagency groups. We invited these panelists to share their perspectives on interagency groups; we did not ask them to speak on behalf of the federal agencies or organizations that these participants represent or represented. Of the expert practitioners listed below, seven participated in two small group panels, and we conducted individual interviews with the other four. The following expert practitioners participated in our panels and interviews:
Charles A. Casto, Retired Regional Administrator, Region III, U.S.
Dr. John Clifford, Chief Veterinary Officer and Deputy Administrator, Animal and Plant Health Inspection Service, U.S. Department of Agriculture
William J. Fleming, Retired Deputy Chief Human Capital Officer and Director for Human Resources Management, U.S. Department of Commerce
James D. Giattina, Director of the Water Protection Division, U.S. Environmental Protection Agency
Dr. Rowan Gould, Deputy Director for Operations, Fish and Wildlife Service, U.S. Department of the Interior
Lana T. Hurdle, Deputy Assistant Secretary for Budget and Programs, U.S. Department of Transportation
Michael W. Lowder, Director, Office of Intelligence, Security, and Emergency Response, U.S. Department of Transportation
Dr. Alexander E. MacDonald, Deputy Assistant Administrator for Laboratories and Cooperative Institutes and Director, Earth System Research Laboratory, National Oceanic and Atmospheric Administration, U.S. Department of Commerce
Dr. A. Stanley Meiburg, Deputy Regional Administrator for Region 4
Craig H. Middlebrook, Deputy Administrator, Saint Lawrence Seaway Development Corporation, U.S. Department of Transportation
Thomas P. Skelly, Director, Budget Service, U.S. Department of
Based on our interviews with interagency group participants and expert practitioner panelists, we identified and categorized recurring themes under each of the four considerations from our prior work and developed from these a set of approaches associated with each challenge. If more than one group had used the approach and found it to be effective, we included it in our list.
If a limited number of groups experienced a specific challenge, but identified a way to address it, we noted this in the text. For example, only two interagency groups in our sample had a shared leadership model. While our examples were limited to two groups, this is a frequently cited challenge in our work, so we included the approach, but noted in the text that the finding was only supported by two groups we reviewed. We asked agency officials to review the examples for accuracy and incorporated their comments where appropriate. We did not independently verify the effectiveness of these examples, but did examine agency documentation and support for testimonial evidence where available. We also note that our findings rest on the examples we reviewed and the practitioners we interviewed and thus may not be applicable to all interagency groups. For example, in this report, we focus on interagency groups that respond to non-emergency situations, which require a different type of response than emergencies. Also, all of the policy or topic areas in our examples are currently high priorities of this administration, as demonstrated by the involvement of the Executive Office of the President or Cabinet-level officials in the groups. As such, this report provides agency perspectives and approaches that have been effective in addressing collaboration-related considerations for the groups we studied, but does not provide a comprehensive or universally applicable set of implementation approaches for interagency groups. We previously found that interagency groups can vary widely depending on the purpose, composition, or other unique characteristics, and as such, approaches that have proven effective for one group may not always apply to other groups. Nevertheless, we identified common approaches among the groups we reviewed. We conducted this performance audit from November 2012 to February 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Sarah Veale, Assistant Director, and Mallory Barg Bulman, Analyst-in-Charge, supervised the development of this report. Peter Beck, Martin De Alteriis, and Don Kiggins made significant contributions to this report. Karin Fangman provided legal counsel and Alicia Cackley, Andrew Finkel, David Maurer, and Paul Schmidt made key contributions to this report.
Many of the meaningful results that the federal government seeks to achieve require the coordinated efforts of more than one federal agency, level of government, or sector. The GPRA Modernization Act of 2010 (GPRAMA) takes a more crosscutting and integrated approach to improving government performance. GPRAMA requires that GAO periodically review implementation of the law. As a part of a series of reports responding to this requirement, GAO assessed how interagency groups addressed the central collaboration challenges identified in its prior work of 1) defining outcomes; 2) measuring performance and ensuring accountability; 3) establishing leadership approaches; and 4) using resources, such as funding, staff, and technology. GAO selected four interagency groups that met its key practices for enhancing and sustaining collaboration to learn about the approaches they used and found to be successful. These groups addressed issues of homelessness, reentry of former inmates into society, rental housing policy, and the education of military dependent students. To identify successful approaches, GAO reviewed agency documents, and interviewed agency officials that participated in these groups. Additionally, GAO convened recipients of the Presidential Distinguished Rank Award, who had experience with interagency collaboration. GAO is not making any recommendations in this report. GAO shared a draft of this report with key agencies that participated in the interagency groups GAO reviewed. The agencies either had no comments or provided technical comments, which GAO incorporated as appropriate. The interagency groups GAO selected and expert practitioners—including those who received the Presidential Distinguished Rank Award—have used a range of approaches to address some of the key considerations for implementing interagency collaborative mechanisms, related to defining outcomes; measuring performance and ensuring accountability; establishing leadership approaches; and using resources, such as funding, staff, and technology.
Nearly all students change schools at some point during their school years, most typically when they are promoted to a higher grade at a different school. Specifically, students may change schools as they are promoted from elementary to middle school and again from middle to high school. In addition, students may change schools when their families move to a new home or relocate closer to jobs. In 1994, we issued a report that highlighted concerns about the education of elementary school students who changed schools more frequently than the norm. This report found that one in six third graders changed schools frequently, attending at least three different schools since the beginning of first grade. Students who changed schools frequently were often from low-income families, the inner city, or migrant families, or had limited English proficiency. These highly mobile students had low math and reading scores and were more likely to repeat a grade. We recommended that Education ensure low-income students have access to ESEA's Title I services, and Education has taken steps to do so. Since we issued our 1994 report, policymakers have continued to focus attention on students' educational achievement. Specifically, the No Child Left Behind Act of 2001 (NCLBA), which reauthorized ESEA, established a deadline of 2014 for all students to reach proficiency in reading, math, and science. Under NCLBA, districts and schools must demonstrate adequate yearly progress toward meeting state standards for all students and every key subgroup of students, including low-income students, minority students, students with disabilities, and students with limited English proficiency. While nearly all students change schools at some point before reaching high school, some students change schools with greater frequency (see figure 1). According to Education data, which followed a cohort of kindergarteners from 1998 to 2007, the majority of students—about 70 percent—changed schools two times or less and about 18 percent changed three times before reaching high school. Some of these school changes could occur as a result of students being promoted to a higher grade in a different school or parents moving to a new home or relocating closer to their jobs. However, for the students who changed schools four or more times (about 13 percent), our analysis of Education's data revealed statistically significant differences between them and students who had changed schools two times or less, not only in the frequency of their changes but along several important dimensions. We compared students who changed schools two or fewer times (referred to in this report as "less mobile") to students who changed schools four or more times (referred to as "more mobile"). We selected this comparison because the differences were most pronounced and because the two groups combined represent a significant fraction (about 82 percent) of the population of the students in the cohort. We also found statistically significant differences between students who changed schools two or fewer times and students who changed schools three or more times, but these differences were less pronounced. See appendix II for Education's Early Childhood Longitudinal Study: Kindergarten Class of 1998-1999 (ECLS-K) data on the mobile student population. Students who changed schools four or more times were disproportionately poor, African American, and from families that did not own their home or have a father present in the household.
According to Education's survey data, a significantly larger percentage of these more mobile students had family incomes below the poverty threshold, compared to students who changed schools two times or less. Furthermore, a significantly larger percentage of the more mobile students, compared to less mobile students, received benefits under the National School Lunch Program (NSLP), the Supplemental Nutrition Assistance Program, and the Temporary Assistance for Needy Families (TANF) program. As shown in figure 2, about 26 percent of students who changed schools four or more times had family incomes below the poverty threshold, compared to about 17 percent of the students who changed schools two times or less. Moreover, a significantly smaller percentage of the more mobile students had a father present in the household, compared to their less mobile peers who changed schools two times or less. As shown in figure 3, African-American students comprised a disproportionately larger percentage of the students who changed schools four or more times than of the students who changed schools two times or less, relative to all other racial and ethnic groups. African-American students represented about 15 percent of students in kindergarten through eighth grade who changed schools two times or less; however, they represented about 23 percent of students who changed schools four or more times. In contrast, white students, who represented about 60 percent of all students in the same grade range who changed schools two times or less, accounted for about 51 percent of students who changed schools four times or more. Finally, a significantly larger percentage of students who changed schools four or more times came from families that did not own their home. Students from families that did not own their own home represented about 39 percent of students who changed schools four or more times compared to about 20 percent of those who changed schools two or fewer times—nearly double the rate. According to principals and teachers we interviewed, the more mobile students' families may rent, live with relatives, or move back and forth between relatives and friends. Further, some students may be homeless; however, teachers and other school officials we interviewed said that, in some cases, it may be difficult to know whether a student is homeless because families may not disclose that they are homeless or may not consider their particular living arrangements as being homeless, for example, staying with relatives or doubling up—that is, living with another family or families in a residence designed for a single family. See appendix II for additional information about the mobile student population. The schools with the highest rates of student mobility also showed differences across several characteristics. According to Education's data, about 11.5 percent of schools had the highest rates of student mobility—those where more than 10 percent of their eighth grade students started the year at the school but left by the end of the school year. These schools had larger percentages of at-risk eighth grade students compared to schools where less than 10 percent of the students changed schools. According to Education's data, these schools had larger percentages of eighth grade students eligible for Title I assistance, the federal government's largest program for low-income school-age children.
For example, about 62 percent of the schools with high mobility rates received Title I funding, compared to about 46 percent of the schools where students' mobility rates were lower. Moreover, the schools with high mobility rates were more often eligible for Title I "school-wide" programs, a designation that allows schools with a population of at least 40 percent low-income students to offer services to every student in the school. As shown in figure 4, about 45 percent of the schools with high mobility rates were classified as school-wide, compared to about 21 percent of the schools that had lower rates of student mobility. Moreover, the schools with high mobility rates were more likely to participate in NSLP. Specifically, as shown in figure 5, about 91 percent of the schools with high mobility rates participated in the school lunch program, compared to about 68 percent of the schools with lower rates of student mobility. In addition, for about 10 percent of the schools with high mobility rates, all of the students in these schools were eligible for free or reduced-price lunch, compared to about 5 percent of the schools with lower rates of student mobility (see figure 6). The schools with high mobility rates also had larger percentages of eighth grade students receiving special education services and students with limited English proficiency, as well as higher rates of absenteeism. Specifically, as shown in figure 7, about 50 percent of the schools with high mobility rates had 11-25 percent of their eighth grade students receiving special education services, compared to about 32 percent of the schools with lower rates of mobility. Schools with high mobility rates also had larger percentages of their eighth grade students who had limited English proficiency. For example, as shown in figure 8, about 11 percent of the schools with high mobility rates had 26-50 percent of their students with limited English proficiency, compared to about 2 percent of the schools with lower rates of mobility. Finally, the schools with high mobility rates had larger percentages of students absent. About 30 percent of the schools with high mobility rates had 6-10 percent of students absent on an average day, compared to about 11 percent of the schools with lower rates of mobility. See appendix III for additional information comparing schools with high rates of mobility to schools with less mobility. Teachers, principals, and parents told us that financial difficulties and family instability often underlie students' frequent school changes, but some cited other reasons as well, such as parents' desire to send their children to a better-performing or safer school. Some school officials and parents in all three states we visited (California, Michigan, and Texas) said that economic difficulties, including job loss, played a role in student mobility. For example, the principal of one Detroit-area school serving a large low-income population said that when the automobile industry declined, families lost their jobs and moved out of the area in search of work. Several principals and teachers also cited foreclosures on homes and the inability of some families to pay the rent as reasons that students changed schools. For example, officials at a rural California high school said that relatively inexpensive real estate attracted many homeowners who later lost their homes.
One teacher in California told us that some families who are unable to pay the rent and are evicted will move from one apartment complex to another complex offering a free month's rent. In addition, school officials in all three states we visited said that they saw more families "doubled-up"—sharing a single-family residence with one or more other families. School officials said all of these situations have resulted in students changing schools. Family instability also plays a role in mobility, according to parents and school officials we interviewed. School officials in all three states we visited cited divorce as a reason for mobility. For example, school officials in Michigan told us that one student had changed schools four times during one school year when his parents' custody arrangement changed. In an urban school in Texas and a rural school in California, teachers and principals also said that school changes can result when students are passed among relatives or friends because of conflict in the student's family. Officials in California and Michigan told us that mobility also results when social services personnel need to remove students from their homes and that foster children are highly mobile, too. In addition to family issues, school officials and parents in all three states said that, in some cases, mobility results from family choice related to safety concerns or the desire to provide different educational options for their children. For example, one parent in Texas said she changed residences and her child's school after two home break-ins, and in California, a principal said that some families come to his school district to escape gang activity and violence. A body of research suggests that student mobility has a negative effect on students' academic achievement, but research on its effect on their social and emotional well-being is inconclusive. With respect to academic outcomes, while research suggests that the academic achievement of students is affected by a set of interrelated factors that includes socioeconomic status and parental education, there is evidence that mobility has an effect on achievement apart from these other factors. Specifically, the body of research suggests that students who changed schools more frequently tended to have lower scores on standardized reading and math tests and to drop out of school at higher rates than their less mobile peers. For example, a national study that tracked high-school-age students found that changing high schools was associated with lower performance on math and reading tests. Another study using the same national, longitudinal dataset found that students who changed schools two or more times from 8th to 12th grade were twice as likely to drop out of high school, or not obtain a General Equivalency Diploma, compared to students who did not change schools. In addition, a meta-analysis found that student mobility was associated with lower achievement and higher rates of high school dropout. Further, some studies found that the effect of mobility on achievement varied depending on other factors, such as the student's race/ethnicity, special needs, grade level, frequency of school change, and characteristics of the school change—whether it was between school districts or within a district, or whether it was to an urban or suburban/rural district.
For example, one study found that school changes from one school district to another tended to result in long-term changes in academic performance and that this long-term change tended to be positive for students who moved to schools in nonurban districts but negative for those who moved to urban areas. In addition, this study found that school changes within the same school district were not associated with any long-term changes in performance, but were associated with short-run negative effects on performance that were generally greater for African-American, Hispanic, and poor students. The small body of research that exists on the effect of mobility on students' social and emotional well-being is inconclusive. These studies generally used methods that do not support strong conclusions about specific relationships between mobility and social and behavioral outcomes. One important limitation is that these studies typically did not account for pre-existing differences between more mobile and less mobile students. For example, we were unable to report the results of two national longitudinal studies that we reviewed because the studies used narrow, limited measures of student behavior and other social outcomes, and the studies did not control for prior student behavior and social conditions. A complete list of the studies we reviewed is included in appendix IV. Officials we interviewed in schools with high rates of student mobility said they often face the dual challenge of meeting the needs of their students who change schools at high rates and the needs of the entire student body, which is composed largely of low-income, disadvantaged students. A number of teachers and principals told us that when new students arrive, it can sometimes affect the pace of instruction for the entire classroom, as teachers attend to the needs of a new student. Moreover, some teachers and principals said that what is taught and how instruction is delivered may differ from school to school, and this can make it difficult for teachers to assess where new students are academically when they arrive and to make decisions about proper placement. Further, teachers in two schools said that the order in which course material is taught varies from school to school, presenting challenges for teachers in the classrooms. For example, one teacher told us about a student who moved to Texas from California and was placed in an algebra class based on her academic record, but was later moved to a more appropriate class after the teachers saw her struggling to keep up with her peers. Also, a teacher from a Texas middle school, whose district teaches pre-algebra reasoning skills beginning in kindergarten, said that students from other states are taught these skills in later grades. A number of teachers and principals also told us that mobile students' records are often not transferred to the new school in a timely way or at all, and, as a result, this can make it difficult for school officials to determine class placement, credit transfer, and the need for special services, such as services related to special education and language proficiency. Several teachers said that when students arrive without records, the school must observe and document whether students need special education services—a process that is very comprehensive and can take several weeks or months.
In an effort to help schools make more informed decisions about class placement and identification of students with special needs, Texas has developed a system to electronically transfer student records between schools in the state. This system allows schools to share information on what classes students took at the previous school, their grades and standardized test scores, reasons for withdrawal, annual absences, immunization records, and special circumstances, such as English proficiency, migrant status, homeless status, participation in gifted programs or special education, whether the student has an Individualized Education Program, and eligibility for the National School Lunch Program (NSLP). Schools also face the challenge of helping mobile students adjust socially and emotionally to the new school environment. While some students adjust well to their new school, some do not. A few teachers, principals, and other school officials said that some mobile students may feel like they do not belong, fail to make new friends, exhibit poor attendance, and, in some cases, drop out. Others who have difficulty fitting in socially may try to gain attention by exhibiting certain behavior, such as disrupting other students in the class. Also, some guidance counselors and teachers told us that some mobile students often act detached, especially when they have changed schools repeatedly and anticipate changing again. In some of the schools we visited, new students were paired with a “buddy” who walks them to class, sits with them at lunch, and helps them learn classroom routines and procedures. Some schools also provided orientation tours of the school for new students and parents and arranged for new students to meet with the guidance counselor to help with the transition. For example, in a suburban/rural public school district in Michigan we visited, the principal and teachers at the elementary school meet with new students and their parents on the first day of the school year; students officially start school the next day. This gives advance notice to teachers about incoming students and allows them time to prepare. In addition, the junior high school in this district has a welcoming committee to introduce new students and parents to the school faculty and provide a tour of the school. Several school officials told us that the needs of mobile and nonmobile students can extend beyond the classroom and that their families are often in need of services, too. To help address the family circumstances that contribute to mobility, two school districts we visited use school-based family resource centers that rely on partnerships between the school, community, church, and city agencies to arrange for “wraparound” services for the entire family—such as services related to housing, employment and finances, health care, education for parents and children, and social support networks. In all three states we visited, some schools have specific school-based or community outreach to parents that can benefit both mobile and nonmobile families, such as parenting classes on a range of topics, like budgeting and accessing housing. Also, homeless students, who are often mobile, may lack basic supplies, for example, backpacks, school supplies, and school uniforms, and they may miss school frequently because of issues such as lack of transportation or domestic violence.
In addition, some school officials told us they help arrange for services for homeless mobile students and their families, such as coordinating with local homeless shelters and arranging to provide homeless mobile students with food on the weekends when they do not have access to free breakfast and lunch at school. Because the highly mobile schools we visited also had large percentages of low-income, disadvantaged students and special populations already targeted by federal programs, the schools met the needs of mobile students using funding from programs already in place. For example, during our site visits, a number of school officials and state and local educational agency officials told us they relied on funds from Title I, Part A of ESEA, a federal program targeted to disadvantaged students, including those who are from low-income families, have limited English proficiency, are from migrant families, have disabilities, or are neglected or delinquent. Services available under Title I, Part A are intended to ensure that disadvantaged children have a fair and equal opportunity to obtain a high-quality education and to reach proficiency on assessments based on the state’s academic standards. Some school officials and state and local educational agency officials told us they used funds from Title I, Part A to pay for tutoring, after-school instruction, teachers’ salaries, technology upgrades, school field trips, and staff development and training on addressing diverse needs of mobile and nonmobile students. One school we visited used funding provided by the American Recovery and Reinvestment Act of 2009 (Recovery Act) for ESEA Title I, Part A to, among other things, hire additional teachers to provide small-group instruction to all students who are behind academically, including mobile students. See table 1 for information about school-based federal programs for disadvantaged and special needs students. School officials in one district we visited told us that some of their mobile students are eligible for services under the Individuals with Disabilities Education Act, a program that provides early intervention and special education services for children and youths with disabilities. The schools we visited also received funding through the Department of Agriculture’s school nutrition programs, which provide free and reduced-price school meals for low-income, disadvantaged students. School officials in some locations said that this program allows them to provide school meals to a large percentage of their student body, including both mobile and nonmobile students. In addition, some schools we visited used the McKinney-Vento Education for Homeless Children and Youth Program (McKinney-Vento Program), which is designed to meet the educational needs of homeless students. Some school officials told us that homeless students are often mobile. Specifically, the McKinney-Vento Program requires all school districts to put in place homeless education liaisons. Some homeless education liaisons and other school officials we interviewed said they used funds from the McKinney-Vento Program to provide homeless students with food, clothing, school uniforms, backpacks of toiletries and school supplies, tutoring at homeless shelters, academic enrichment services, and summer programs. 
The McKinney-Vento Program also requires all school districts to provide transportation to those homeless students who choose to remain in their school of origin; however, funding for transportation is provided by the school district. Some schools we visited used their own school funds to pay for transportation, such as bus passes and gas cards, as needed, for homeless students to get to school. Schools we visited also used McKinney-Vento Program funds for various other purposes, including one school that used the funds to hire staff to identify homeless students and two other schools that used the funds to provide outreach to parents. Across all three states we visited, homeless education liaisons help provide a stable environment for homeless students to learn by arranging for services for their families, such as referrals to soup kitchens, health services including free dental clinics, free school supplies, and domestic violence groups. According to state education agency officials we interviewed, schools in their states relied on the Migrant Education Program, which supports the educational needs of a specific population of mobile students—students who are migrant workers or children of migrant parents. The Migrant Education Program (1) provides students with services, such as academic (tutoring and summer school) and health services; (2) allows school districts to share migrant student information electronically across state boundaries; (3) encourages states to collaborate in administering state assessments and sharing lesson plans; and (4) provides funding for “portable” education services, such as instructional booklets and CD-ROM learning modules that help migrant students earn school credits as they move from school to school or undergo extended absences. States use the Migrant Student Information Exchange—a Web-based database—to collect, maintain, and share student record information to facilitate school enrollment, grade and course placement, and accrual of secondary school course credits. We did not evaluate the effectiveness of these federal programs in meeting the needs of mobile students. We provided a draft copy of this report to the Department of Education for review and comment. Education did not have any comments on the report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to relevant congressional committees, the Secretary of Education, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or ashbyc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix discusses in more detail our methodology for our study examining the scope and implications of student mobility for students and schools. Our study was framed around four questions: (1) What are the numbers and characteristics of students who change schools, and what are the reasons students change schools? (2) What is known about the effects of mobility on student outcomes, including academic achievement, behavior, and other outcomes?
(3) What challenges does student mobility present for schools in meeting the educational needs of students who change schools? (4) What key federal programs are schools using to address the needs of mobile students? To obtain information on the number and characteristics of mobile students and schools they attend, we analyzed two nationally representative datasets that are administered by the Department of Education’s (Education) National Center for Education Statistics (NCES)—the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999 (ECLS-K) and the National Assessment of Educational Progress (NAEP). We selected these datasets in consultation with our methodologists and Education officials. For both datasets, we assessed the quality, reliability, and usability of the data for reporting descriptive statistics on the characteristics of students and the schools they attend. For our data reliability assessment, we reviewed agency documents about the datasets’ variable definitions, survey and sampling methods, and data collection and analysis efforts. We also conducted electronic tests of the files and interviewed Education officials about the steps they took to ensure data reliability. We determined that the Education data were sufficiently reliable for the purposes of our review. The surveys used weighted probability sampling of students (ECLS-K) and schools (NAEP). We followed recommended statistical techniques to estimate standard errors of estimates from the ECLS-K and NAEP data. The ECLS-K’s measure of individual-level student mobility is limited in that its measure of school changes includes the number of promotional school changes—for example, the typical school change from an elementary school to a middle school—as well as the nonpromotional school changes. The ECLS-K is a longitudinal survey of students from kindergarten through eighth grade. The survey population is a nationally representative cohort of 21,260 students who began kindergarten in 1998. The survey collected data from students, parents, teachers, and school officials from 1998 to 2007. In our analysis of ECLS-K data, we focused on the eighth grade survey round, to ensure that we captured the most complete data on school changes. During each spring survey round from first through eighth grade, parents were asked how many times their child changed schools since the last survey period. We used the responses from those questions, as well as school identification information, to estimate the number of school changes for each student. We examined the following student characteristics available in the ECLS-K data: (1) race; (2) measures of family income, including poverty threshold, receipt of free or reduced-price lunch, food stamps, or assistance from the Temporary Assistance for Needy Families (TANF) program; (3) whether a father was present in the household; and (4) whether the family owned their home. We compared students who changed schools two or fewer times (referred to as “less mobile”) to those who changed schools four or more times (referred to as “more mobile”). We chose those groups for comparison because they provide a clear separation between the more mobile and less mobile groups and also because the two groups combined represent a significant fraction—about 82 percent—of the population of the students in the cohort. Students who changed schools four or more times would generally have experienced at least three nonpromotional school moves.
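To illustrate the grouping just described, the sketch below shows one way the parent-reported counts could be combined and the comparison groups formed. It is a minimal, illustrative sketch only: the file and column names (for example, changes_r1, below_poverty, and weight) are hypothetical placeholders rather than the ECLS-K's actual variable names, and a full analysis would also apply the survey's recommended standard-error procedures.

```python
import pandas as pd

# Hypothetical student-level extract; column names are placeholders, not
# the actual ECLS-K variable names.
rounds = ["changes_r1", "changes_r2", "changes_r3", "changes_r4",
          "changes_r5", "changes_r6", "changes_r7"]  # spring rounds, grades 1-8

students = pd.read_csv("eclsk_extract.csv")

# Total school changes reported by parents across all survey rounds.
students["total_changes"] = students[rounds].sum(axis=1)

# Comparison groups used in the analysis: two or fewer changes ("less mobile")
# versus four or more changes ("more mobile"); students with exactly three
# changes fall outside both groups.
def mobility_group(n):
    if n <= 2:
        return "less mobile"
    if n >= 4:
        return "more mobile"
    return "excluded"

students["group"] = students["total_changes"].apply(mobility_group)

# Weighted share of each group below the poverty threshold; survey weights
# are needed to produce population estimates.
compared = students[students["group"] != "excluded"]
shares = compared.groupby("group").apply(
    lambda g: (g["below_poverty"] * g["weight"]).sum() / g["weight"].sum())
print(shares)
```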
We also considered defining high mobility students as those who changed five or more times. However, such students only made up about 5 percent of the population followed by the ECLS-K. Because table cell sample sizes were often very small using the five-change cutoff, resulting in wide confidence intervals, we decided against the use of this definition. In addition to the analyses we presented in the main body of this report, we compared students who changed schools two or fewer times to those who changed schools three or more times. We found statistically significant differences among some of the relationships we explored, but as expected, the differences were more pronounced when the highly mobile population was defined as students who changed four or more times. See appendix II for ECLS-K data on the mobile student population. The NAEP—the results of which are issued as the Nation’s Report Card—provides nationally representative results on school characteristics based on samples of 4th, 8th, and 12th grade students. Similar to our analysis of the ECLS-K, our analysis of NAEP focused on the eighth grade year. We used the results from survey questions related to school environment and characteristics to describe the characteristics of schools and their student mobility rates. To determine schools’ student mobility rates, we used responses from the following question administered in the 2007 survey: “About what percentage of students who are enrolled at the beginning of the school year is still enrolled at the end of the school year?” Further, using the NAEP data, we explored relationships between schools’ mobility rates and the following school characteristics: (1) geographic location; (2) measures of low-income students, such as receipt of Elementary and Secondary Education Act of 1965’s (ESEA) Title I funding and participation in the National School Lunch Program (NSLP); (3) students in special education; (4) students with limited English proficiency; and (5) students absent on an average day. For our comparison of schools with “low” student mobility rates and schools with “high” student mobility rates, we sorted the NAEP data into three pairings to determine which pairing provided a clear separation between low mobility and high mobility schools. When we compared schools that had 5 percent or fewer of their students no longer enrolled at the end of the school year (low mobility) with schools that had more than 5 percent of their students no longer enrolled at the end of the year (high mobility), we found few statistically significant differences. When we compared schools that had 10 percent or fewer of their students no longer enrolled at the end of the school year (low mobility) with schools that had more than 10 percent of their students no longer enrolled at the end of the year (high mobility), we found several statistically significant differences. When we compared schools that had 20 percent or fewer of their students no longer enrolled at the end of the school year (low mobility) with schools that had more than 20 percent of their students no longer enrolled at the end of the year (high mobility), cell sample sizes were too small to make meaningful comparisons. We thus selected the 10 percent pairing because it provides a clear separation between the low mobility and high mobility schools and the sample sizes were sufficient to make meaningful comparisons. See appendix III for NAEP data on schools.
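A similar sketch for the school-level NAEP comparison shows how the three candidate cutoffs could be examined before settling on the 10 percent pairing. Again, the file and column names are hypothetical placeholders rather than actual NAEP field names.

```python
import pandas as pd

# Hypothetical school-level extract; "pct_still_enrolled" stands in for the
# survey question about how many beginning-of-year students remain enrolled.
schools = pd.read_csv("naep_school_extract.csv")
schools["pct_left"] = 100 - schools["pct_still_enrolled"]

# Examine the three candidate pairings (5, 10, and 20 percent) before
# selecting the 10 percent cutoff used in the report.
for cutoff in (5, 10, 20):
    high = schools["pct_left"] > cutoff
    print(f"{cutoff}% cutoff: {high.sum()} high-mobility schools, "
          f"{(~high).sum()} low-mobility schools")
```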
We reviewed existing studies to determine what research says about the effects of mobility on student outcomes, including academic and nonacademic outcomes, such as behavior. To identify existing studies, we searched several electronic databases using the keywords “student mobility,” “school mobility,” and “transience.” We identified 151 studies that met the following criteria: original analysis of data based on students in the United States or original quantitative synthesis of such previously conducted research (also referred to as meta-analysis) and published or prepared during or after 1984. We screened the studies to identify those that were relevant for our study and identified 62 of the 151 studies that met the following criteria: (1) assessed a student’s school change as distinct from a student’s residential move; (2) used quantitative measurement of the association between school change and at least one student outcome, either academic or nonacademic; and (3) was published as a peer-reviewed journal article, association or agency paper, state or local education agency paper, or a conference paper from the last 2 years (2007 onward). Each of these 62 studies was reviewed by a social scientist to determine whether the study (1) contained sufficient information on methods to make a determination about the study’s soundness and limitations and (2) for studies on academic outcomes only—controlled for students’ academic performance prior to changing schools. For the purpose of controlling, we considered a variety of methods to be sufficient, such as using a statistical model that included prior academic performance as a covariate, matching students on prior performance, or analyzing difference scores (i.e., difference between premobility academic performance and postmobility performance) rather than absolute measures of achievement. The result of this stage of the review was a set of studies that we determined used sound methods and, in the case of studies of academic outcomes, controlled for prior academic achievement. For each of these studies, we also reviewed the other studies these authors used as references, screened these studies using the same methods described above, and identified one additional study that met our inclusion criteria. Further, we excluded a few studies due to redundancy (covering the same or nearly the same data and analysis as other studies included in the review). At the end of the screening process, 26 studies on the effects of mobility on student outcomes remained, of which 21 assessed academic outcomes and 11 assessed nonacademic outcomes. To review the findings, methods, and limitations of the selected studies, we developed a data collection instrument to obtain information systematically about each study’s methods, findings, and limitations on the reliability, scope, and generalizability of these findings. We based our data collection and assessments on generally accepted social science standards. A senior social scientist with training in survey methods and statistical analysis of survey data reviewed each study using the data collection instrument. A second senior social scientist reviewed each completed data collection instrument and the relevant portions of the study in question to verify the accuracy of the information recorded. Most of our selected studies measured academic outcomes using standardized test scores or school dropout or completion rates and nonacademic outcomes using misbehavior and social capital (i.e., richness of students’ social networks).
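As a concrete illustration of the difference-score approach described above, the fragment below compares average pre-to-post score changes for students who did and did not change schools, rather than comparing absolute post-move scores. It is a hypothetical sketch; the file and column names do not come from any of the reviewed studies.

```python
import pandas as pd

# Hypothetical analysis file with one row per student.
df = pd.read_csv("study_sample.csv")

# Difference score: post-mobility test score minus pre-mobility test score.
df["score_change"] = df["post_score"] - df["pre_score"]

# Comparing mean change, rather than absolute post-move scores, helps
# separate the effect of changing schools from the lower starting scores
# that mobile students often have.
print(df.groupby("changed_schools")["score_change"].mean())
```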
We selected the studies for our review based on their methodological soundness and not on the generalizability of the results. Although the findings of the studies we identified are not representative of the findings of all studies of student mobility, the studies consist of those published studies we could identify that used the strongest designs to assess the effects of mobility. The selected studies varied in methods and in scope. For example, some studies distinguished among types of mobility (e.g., intra-city versus city-to-suburbs, or school-change-only versus school-change-plus-residential-move), but others did not. Some studies used nationally representative samples of students, while others focused on specific populations, such as low-income students in one city. Some studies assessed effects of mobility at the student level, while others assessed effects at higher levels, such as classrooms. See appendix IV for a list of the studies we reviewed. We conducted site visits to a nonprobability sample of eight schools across six school districts in three states (California, Michigan, and Texas) where we interviewed school officials and others about issues related to student mobility. We selected states that provided geographic coverage and that had high percentages of economically disadvantaged students and/or high rates of foreclosures to provide insight on how the economic downturn might be affecting students and schools in high-poverty areas. We selected schools with high percentages of mobile students and that would illustrate school type (public and charter), grade level (elementary, middle, and high school), and location (urban, suburban, and rural). During our school site visits, we interviewed state education agency officials, local homeless education liaisons, principals, teachers, guidance counselors, school social workers, community group representatives, and parents of mobile students. During our interviews, we collected information about the number and demographic characteristics of mobile students; reasons for student mobility and timing of mobility; challenges related to student mobility, including meeting academic, social, and emotional needs of mobile and nonmobile students; and how schools address challenges of student mobility, including use of federal programs and community resources. In preparation for our site visits, we reviewed relevant laws, regulations, and agency documents, and interviewed federal officials and representatives of education and homeless associations about issues related to student mobility and federal programs that serve low-income, disadvantaged, and special needs students, including those who change schools. We conducted this performance audit from October 2009 through November 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides information from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999 (ECLS-K)—which followed a cohort of students from 1998 to 2007—on the number of schools students attended, by various student and parent characteristics.
In each table, we provide a comparison of the percent of students who changed schools two times or fewer to students who changed schools three or more times, and students who changed schools four or more times. This appendix includes data from the National Assessment of Educational Progress (NAEP), which is also known as the Nation’s Report Card. The NAEP is a continuing assessment of student progress conducted nationwide periodically in reading, math, science, writing, U.S. history, civics, geography, and the arts. The NAEP assessment collects data from students and school officials for a nationally representative sample of 4th, 8th, and 12th graders. In the following tables, we present data on the characteristics of students in grades four and eight from the 2007 NAEP assessment for schools with “low” and “high” mobility rates. Schools with low mobility rates had fewer than 10 percent of their students who were no longer enrolled at the end of the year, while schools with high mobility rates had more than 10 percent of their students who were no longer enrolled at the end of the school year. The tables are based on a selection of variables relevant to our review. This appendix includes studies of possible academic and nonacademic outcomes of student mobility that met our criteria for inclusion in our review.
Alexander, Karl L., Doris R. Entwisle, and Susan L. Dauber. The Journal of Educational Research, vol. 90, no. 1 (September/October 1996): 3-12. District/city (Baltimore); urban, poor (data were intended to be representative of all Baltimore schoolchildren, but attrition over the 5 years of the study resulted in bias toward an African-American, low-socioeconomic status (SES) population).
Booker, Kevin et al. Journal of Public Economics, vol. 91 (2007): 849-876. State (Texas).
Burkam, David T., Valerie E. Lee, and Julie Dwyer. Prepared for the Workshop on the Impact of Mobility and Change on the Lives of Young Children, Schools, and Neighborhoods (June 29-30, 2009).
Hanushek, Eric A., John F. Kain, and Steven G. Rivkin. Journal of Public Economics, vol. 88 (2004): 1721-1746. State (Texas).
Heinlein, Lisa Melman, and Marybeth Shinn. Psychology in the Schools, vol. 37, no. 4 (2000): 349-357.
Mantzicopoulos, Panayota, and Dana J. Knutson. The Journal of Educational Research, vol. 93, no. 5 (May/June 2000): 305-311.
Mao, Michael X., Maria D. Whitsett, and Lynn T. Mellor. Paper presented at the Annual Meeting of the American Educational Research Association, Chicago (Mar. 24-28, 1997). State (Texas).
Ou, Suh-Ruu, and Arthur J. Reynolds. School Psychology Quarterly, vol. 23, no. 2 (2008): 199-229.
Reynolds, Arthur J., and Barbara Wolfe. Educational Evaluation and Policy Analysis, vol. 21, no. 3 (Autumn 1999): 249-269.
Rumberger, Russell W., and Katherine A. Larson. American Journal of Education, vol. 107, no. 1 (November 1998): 1-35.
Rumberger, Russell W., Katherine A. Larson, Gregory J. Palardy et al. University of California, Berkeley: Chicano/Latino Policy Project (CLPP) Policy Report, vol. 1, no. 2 (October 1998). State (California).
Rumberger, Russell W., Katherine A. Larson, Robert K. Ream et al. University of California, Berkeley and Stanford University: Policy Analysis for California Education Research Series 99-2 (March 1999). State (California).
Temple, Judy A., and Arthur J. Reynolds. Journal of School Psychology, vol. 37, no. 4 (1999): 355-377.
Xu, Zeyu, Jane Hannaway, and Stephanie D’Souza. National Center for Analysis of Longitudinal Data in Education Research (CALDER) Working Paper no. 22, March 2009. State (North Carolina).
Griffith, James. The Elementary School Journal, vol. 99, no. 1 (September 1998): 53-80.
Mann, Emily A., and Arthur J. Reynolds. Social Work Research, vol. 30, no. 3 (September 2006): 153-167.
Reynolds, Arthur J., Suh-Ruu Ou, and James W. Topitzes. Child Development, vol. 75, no. 5 (September/October 2004): 1299-1328.
Reynolds, Arthur J., and Dylan L. Robertson. Child Development, vol. 74, no. 1 (January/February 2003): 3-26.
South, Scott J., and Dana L. Haynie. Social Forces, vol. 83, no. 1 (September 2004): 315-350.
Gruman, Diana H. et al. Child Development, vol. 79, no. 6 (November/December 2008): 1833-1852. Schools (10 suburban schools in the Pacific Northwest that had a high-risk population of low-income, single-family households, high mobility, and poor academic performance).
Pribesh, Shana, and Douglas B. Downey. Demography, vol. 36, no. 4 (November 1999): 521-534.
Reynolds, Arthur J. American Educational Research Journal, vol. 28, no. 2 (Summer 1991): 392-422.
Reynolds, Arthur J., and Nikolaus Bezruczko. Merrill-Palmer Quarterly, vol. 39, no. 4 (October 1993): 457-480.
Swanson, Christopher B., and Barbara Schneider. Sociology of Education, vol. 72, no. 1 (January 1999): 54-67.
In addition to the contact above, Sherri Doughty (Assistant Director), Linda Siegel (Analyst-in-Charge), Vida Awumey, Robert Grace, Erin O’Brien, and Stacy Spence made significant contributions to this report. Jack Wang, Ruben Montes de Oca, Luann Moy, and John Karikari assisted with data analysis and methodology. Russell Burnett, Lorraine Ettaro, and Jay Smale assisted with the review of external studies. James Rebbe provided legal support. Mimi Nguyen and Jeremy Sebest assisted with graphics. Susannah Compton assisted in report development.
Students’ educational achievement can be negatively affected by changing schools often. The recent economic downturn, with foreclosures and homelessness, may be increasing student mobility. To inform Elementary and Secondary Education Act of 1965 (ESEA) reauthorization, GAO was asked: (1) What are the numbers and characteristics of students who change schools, and what are the reasons students change schools? (2) What is known about the effects of mobility on student outcomes, including academic achievement, behavior, and other outcomes? (3) What challenges does student mobility present for schools in meeting the educational needs of students who change schools? (4) What key federal programs are schools using to address the needs of mobile students? GAO analyzed federal survey data, interviewed U.S. Department of Education (Education) officials, conducted site visits at eight schools in six school districts, and reviewed federal laws and existing research. While nearly all students change schools at some point before reaching high school, some change schools with greater frequency. According to Education's national survey data, the students who change schools the most frequently (four or more times) represented about 13 percent of all kindergarten through eighth grade (K-8) students and they were disproportionately poor, African American, and from families that did not own their home. About 11.5 percent of schools also had high rates of mobility, with more than 10 percent of K-8 students leaving by the end of the school year. These schools, in addition to serving a mobile population, had larger percentages of students who were low-income, received special education, and had limited English proficiency. Research suggests that mobility is one of several interrelated factors, such as socio-economic status and lack of parental education, that have a negative effect on academic achievement, but research about mobility's effect on students' social and emotional well-being is limited and inconclusive. With respect to academic achievement, students who change schools more frequently tend to have lower scores on standardized reading and math tests and drop out of school at higher rates than their less mobile peers. Schools face a range of challenges in meeting the academic, social, and emotional needs of students who change schools. Teachers we interviewed said that students who change schools often face challenges due to differences in what is taught and how it is taught. Students may arrive without records or with incomplete records, making it difficult for teachers to make placement decisions and identify special education needs. Also, teachers and principals told us that schools face challenges in supporting the needs of these students' families, the circumstances of which often underlie frequent school changes. Moreover, these schools face the dual challenge of educating a mobile student population, as well as a general student population, that is often largely low-income and disadvantaged. Schools use a range of federal programs already in place and targeted to at-risk students to meet the needs of students who change schools frequently. Teachers and principals told us that mobile students are often eligible for and benefit from federal programs for low-income, disadvantaged students, such as Title I, Part A of ESEA, which funds tutoring and after-school instruction.
In addition, school officials we interviewed said they rely on the McKinney-Vento Education for Homeless Children and Youth Program, which provides such things as clothing and school supplies to homeless students and requires schools to provide transportation for homeless students who lack permanent residence so they can avoid changing schools. GAO did not evaluate the effectiveness of these programs in meeting the needs of mobile students. GAO is not making recommendations in this report. Education had no comments on this report.
Despite some similarities, each of the recent attacks is very different in its makeup, method of attack, and potential damage. Generally, Code Red and Code Red II are both “worms,” which are attacks that propagate themselves through networks without any user intervention or interaction. They both take advantage of a flaw in a component of versions 4.0 and 5.0 of Microsoft’s Internet Information Services (IIS) Web server software. Code Red originally sought to do damage by defacing Web pages and by denying access to a specific Web site by sending it massive amounts of data, which essentially would shut it down. This is known as a denial-of-service (DoS) attack. Code Red II is much more discreet and potentially more damaging. Other than sharing the name of the original worm, the only similarity Code Red II has with Code Red is that it exploits the same IIS vulnerability to propagate itself. Code Red II installs “backdoors” on infected Web servers, making them vulnerable to hijacking by any attacker who knows how to exploit the backdoor. It also spreads faster than Code Red. Both attacks have the potential to decrease the speed of the Internet and cause service disruptions. More importantly, these worms broadcast to the Internet the servers that are vulnerable to this flaw, which allows others to attack the servers and perform other actions that are not related to Code Red. SirCam is a malicious computer virus that spreads primarily through E-mail. Once activated on an infected computer, the virus searches through a select folder and mails user files acting as a “Trojan horse” to E-mail addresses in the user’s address book. A Trojan horse, or Trojan, is a program containing hidden code allowing the unauthorized collection, falsification, or destruction of information. If the user’s files are sensitive in nature, then SirCam not only succeeds in compromising the user’s computer, but also succeeds in breaching the data’s confidentiality. In addition to spreading, the virus can attempt to delete the contents of a victim’s hard drive or fill the remaining free space on the hard drive, making it impossible to perform common tasks such as saving files or printing. This form of attack is extremely serious since it is one from which it is very difficult to recover. SirCam is much more stealthy than the Melissa and ILOVEYOU viruses because it does not need to use the victim’s E-mail program to replicate. It has its own internal capabilities to mail itself to other computers. SirCam also can spread through another method. It can copy itself to other unsuspecting computers connected through a Windows network (commonly referred to as Windows network computers) that have granted read/write access to the infected computer. Like Code Red and Code Red II, SirCam can slow the Internet. However, SirCam poses a greater threat to the home PC user than the Code Red worms do. Table 1 provides a high-level comparison of the attacks. The attachment to this testimony answers the questions in the table in greater detail. Systems infected by Code Red and SirCam can be fixed relatively easily. A patch made available by Microsoft can remove the vulnerability exploited by Code Red, and rebooting the infected computer removes the worm itself. Updating and using antivirus software can help detect and partially recover from SirCam. Patching and rebooting an infected server is not enough when a system is hit by Code Red II.
Instead, the system’s hard drive should be reformatted, and all software should be reinstalled to ensure that the system is free of other backdoor vulnerabilities. Of course, there are a number of other immediate actions organizations can take to ward off attacks. These include: using strong passwords; verifying software security settings; backing up files early and often; ensuring that known software vulnerabilities are reduced by promptly implementing software patches available from vendors; ensuring that policies and controls already implemented are operating as intended; using scanners that automatically search for system vulnerabilities; using password-cracking tools to assess the strength of the passwords in use; using network monitoring tools to identify suspicious network activity; and developing and distributing lists of the most common types of vulnerabilities and suggested corrective actions. Reports from various media and computer security experts indicate that the impact of these viruses has been extensive. On July 19, the Code Red worm infected more than 250,000 systems in just 9 hours, according to the National Infrastructure Protection Center (NIPC). An estimated 975,000 servers have been infected in total, according to Computer Economics, Inc. Code Red and Code Red II have also reportedly disrupted both government and business operations, principally by slowing Internet service and forcing some organizations to disconnect themselves from the Internet. For example, reports have noted that (1) the White House had to change the numerical Internet address that identifies its Web site to the public, and (2) the Department of Defense was forced to briefly shut down its public Web sites. Treasury’s Financial Management Service was infected and also had to disconnect itself from the Internet. Code Red worms also reportedly hit Microsoft’s popular free E-mail service, Hotmail; caused outages for users of Qwest’s high-speed Internet service nationwide; and caused delays in package deliveries by infecting systems belonging to FedEx Corp. There are also numerous reports of infections in other countries. The economic costs resulting from Code Red attacks are already estimated to be over $2.4 billion. These involve costs associated with cleaning infected systems and returning them to normal service, inspecting servers to determine the need for software patches, and patching and testing services, as well as the negative impact on the productivity of system users and technical staff. Although Code Red’s reported costs have not yet surpassed damages estimated for last year’s ILOVEYOU virus, which is now estimated to be more than $8 billion, the Code Red attacks are reportedly more costly than 1988’s Morris worm. This particular worm exploited a flaw in the Unix operating system and affected VAX computers from Digital Equipment Corp. and Sun 3 computers from Sun Microsystems, Inc. It was intended to only infect each computer once, but a bug allowed it to replicate hundreds of times, crashing computers in the process. Approximately 10 percent of the U.S. computers connected to the Internet effectively stopped at the same time. At that time, the network had grown to more than 88,000 computers and was a primary means of communication among computer security experts. SirCam has also reportedly caused some havoc. It is allegedly responsible for the leaking of secret documents from the government of Ukraine.
And it reportedly infected a computer at the Federal Bureau of Investigation (FBI) late last month and sent some private, but not sensitive or classified, documents out in an E-mail. There are reports that SirCam has surfaced in more than 100 countries. GAO has identified information security as a governmentwide high risk issue since 1997. As these incidents continue, the federal government continues to face formidable challenges in protecting its information systems assets and sensitive data. These include not only an ever changing and growing sophistication in the nature of attacks but also an urgent need to strengthen agency security controls as well as a need for a more concerted and effective governmentwide coordination, guidance, and oversight. Today, I would like to briefly discuss these challenges. I would also like to discuss progress that has been made in addressing them, including improvements in agency controls, actions to strengthen warning and crisis management capabilities, and new legislation to provide a comprehensive framework for establishing and ensuring effectiveness of information security controls over information resources that support federal government operations and assets. These are positive steps toward taking a proactive stand in protecting sensitive data and assets. First, these latest incidents again show that computer attack tools and techniques are becoming increasingly sophisticated. The Code Red attack was more sophisticated than those experienced in the past because the attack combined a worm with a denial-of-service attack. Further, with some reprogramming, each variant of Code Red got smarter in terms of identifying vulnerable systems. Code Red II exploited the same vulnerability to spread itself as the original Code Red. However instead of launching a DoS attack against a specific victim, it gives an attacker complete control over the infected system, thereby letting the attacker perform any number of undesirable actions. SirCam was a more sophisticated version of the ILOVEYOU virus, no longer needing the victim’s E-mail program to spread. In the long run, it is likely that hackers will find ways to attack more critical components of the Internet, such as routers and network equipment, rather than just Web site servers or individual computers. Further, it is likely that viruses will continue to spread faster as a result of the increasing connectivity of today’s networks and the growing use of commercial-off-the-shelf (COTS) products, which, once a vulnerability is discovered, can be easily exploited for attack by all their users because of the widespread use of the products. Second, the recent attacks foreshadow much more devastating Internet threats to come. According to official estimates, over 100 countries already have or are developing computer attack capabilities. Further, the National Security Agency has determined that potential adversaries are developing a body of knowledge about U.S. systems and methods to attack them. Meanwhile, our government and our nation have become increasingly reliant on interconnected computer systems to support critical operations and infrastructures, including telecommunications, finance, power distribution, emergency services, law enforcement, national defense, and other government services. 
As a result, there is a growing risk that terrorists or hostile foreign states could severely damage or disrupt national defense or vital public operations through computer-based attacks on the nation’s critical infrastructures. Third, agencies do not have an effective information security program to prevent and respond to attacks—both external attacks, like Code Red, Code Red II, and SirCam, and internal attempts to manipulate or damage systems and data. More specifically, we continue to find that poor security planning and management are the rule rather than the exception. Most agencies do not develop security plans for major systems based on risk, have not formally documented security policies, and have not implemented programs for testing and evaluating the effectiveness of the controls they rely on. Agencies also often lack effective access controls to their computer resources and consequently cannot protect these assets against unauthorized modification, loss, and disclosure. Moreover, application software development and change controls are weak; policies and procedures governing segregation of duties are ineffective; and access to the powerful programs and sensitive files associated with a computer systems operation is not well-protected. In fact, over the past several years, our analyses as well as those of the Inspectors General have found that virtually all of the largest federal agencies have significant computer security weaknesses that place critical federal operations and assets at risk to computer-based attacks. In recognition of these serious security weaknesses, we and the Inspectors General have made recommendations to agencies regarding specific steps they should take to make their security programs effective. Also, in 2001, we again reported information security as a high-risk area across government, as we did in our 1997 and 1999 high-risk series. Fourth, the government still lacks robust analysis, warning, and response capabilities. Often, for instance, reporting on incidents has been ineffective—with information coming too late for agencies to take proactive measures to mitigate damage. This was especially evident in the Melissa and ILOVEYOU attacks. There is also a lack of strategic analysis to determine the potential broader implications of individual incidents. Such analysis looks beyond one specific incident to consider a broader set of incidents or implications that may indicate a potential threat of national importance. Further, as we recently reported, the ability to issue prompt warnings about attacks is impeded because of (1) a lack of a comprehensive governmentwide or nationwide framework for promptly obtaining and analyzing information on imminent attacks, (2) a shortage of skilled staff, (3) the need to ensure that undue alarm is not raised for insignificant incidents, and (4) the need to ensure that sensitive information is protected, especially when such information pertains to law enforcement investigations underway. Lastly, government entities have not developed fully productive information-sharing and cooperative relationships. We recently made a variety of recommendations to the Assistant to the President for National Security Affairs and the Attorney General regarding the need to more fully define the role and responsibilities of the NIPC, develop plans for establishing analysis and warning capabilities, and formalize information-sharing relationships with the private sector and federal entities. 
Fifth, most of the nation’s critical infrastructure is owned by the private sector. Solutions, therefore, need to be developed and implemented in concert with the private sector, and they must be tailored sector by sector, through consultation about vulnerabilities, threats, and possible response strategies. Putting together effective partnerships with the private sector is difficult, however. Disparate interests between the private sector and the government can lead to profoundly different views and perceptions about threats, vulnerabilities, and risks, and they can affect the level of risk each party is willing to accept and the costs each is willing to bear. Moreover, industry has raised concerns that it could potentially face antitrust violations for sharing information. Lastly, there is a concern that an inadvertent release of confidential business material, such as trade secrets or proprietary information, could damage reputations, lower consumer confidence, hurt competitiveness, and decrease market shares of firms. Fortunately, we are beginning to see improvements that should help agencies ward off attacks. We reported earlier this year that several agencies have taken significant steps to redesign and strengthen their information security programs. For example, the Internal Revenue Service (IRS) has made notable progress in improving computer security at its facilities, corrected a significant number of identified weaknesses, and established a service-wide computer security management program. Similarly, the Environmental Protection Agency has moved aggressively to reduce the exposure of its systems and data and to correct weaknesses we identified in February 2000. Moreover, the Federal Computer Incident Response Center (FedCIRC) and the NIPC have both expanded their efforts to issue warnings of potential computer intrusions and to assist in responding to computer security incidents. In responding to the Code Red and Code Red II attacks, FedCIRC and NIPC worked together with Carnegie Mellon’s CERT Coordination Center, the Internet Security Alliance, the National Coordinating Center for Telecommunications, the Systems Administrators and Network Security (SANS) Institute, and other private companies and security organizations to warn the public and encourage system administrators and home users to voluntarily update their software. We also recently reported on a number of other positive actions taken by NIPC to develop analysis, warning, and response capabilities. For example, since its establishment, the NIPC has issued a variety of analytical products to support computer security investigations. It has established a Watch and Warning Unit that monitors the Internet and other media 24 hours a day to identify reports of computer-based attacks. It has developed crisis management capabilities to support a multi-agency response to the most serious incidents from FBI’s Washington, D.C., Strategic Information Operations Center. The administration is currently reviewing the federal strategy for critical infrastructure protection that was originally outlined in Presidential Decision Directive (PDD) 63, including provisions related to developing analytical and warning capabilities that are currently assigned to the NIPC.
On May 9, 2001, the White House issued a statement saying that it was working with federal agencies and private industry to prepare a new version of the “national plan for cyberspace security and critical infrastructure protection” and reviewing how the government is organized to deal with information security issues. Lastly, the Congress recently enacted legislation to provide a comprehensive framework for establishing and ensuring the effectiveness of information security controls over information resources that support federal government operations and assets. This legislation—known as Government Information Security Reform (GISR)—requires agencies to implement an agencywide information security program that is founded on a continuing risk management cycle. GISR also added an important new requirement by calling for an independent evaluation of the information security program and practices of an agency. These evaluations are to be used by the Office of Management and Budget (OMB) as the primary basis for its summary report to the Congress on governmentwide information security. In conclusion, the attacks we are dealing with now are smarter and more threatening than the ones we were dealing with last year and the year before. But I believe we are still just witnessing warning shots of potentially much more damaging and devastating attacks on the nation’s critical infrastructures. To that end, it’s vital that federal agencies and the government as a whole become proactive rather than reactive in their efforts to protect sensitive data and assets. In particular, as we have recommended in many reports and testimonies, agencies need more robust security planning, training, and oversight. The government as a whole needs to fully develop the capability to strategically analyze cyber threats and warn agencies in time for them to avert damage. It also needs to continue building on private-public partnerships—not just to detect and warn about attacks—but to prevent them in the first place. Most of all, trust needs to be established among a broad range of stakeholders, roles and responsibilities need to be clarified, and technical expertise needs to be developed. Lastly, becoming truly proactive will require stronger leadership by the federal government to develop a comprehensive strategy for critical infrastructure protection, work through concerns and barriers to sharing information, and institute the basic management framework needed to make the federal government a model of critical infrastructure protection. Mr. Chairman and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you or Members of the Subcommittee may have. For further information, please contact Keith Rhodes at (202) 512-6412. Individuals making key contributions to this testimony included Cristina Chaplain, Edward Alexander, Jr., Tracy Pierson, Penny Pickett, and Chris Martin. Code Red is a worm, which is a computer attack that propagates through networks without user intervention. This particular worm makes use of a vulnerability in Microsoft’s Internet Information Services (IIS) Web server software—specifically, a buffer overflow. The worm looks for systems running IIS (versions 4.0 and 5.0) that have not patched the buffer overflow vulnerability, and exploits the vulnerability to infect those systems. Code Red was initially written to deface the infected computer’s Web site and to perform a distributed denial of service (DDoS) attack against the numerical Internet address used by www.whitehouse.gov.
Two subsequent versions of Code Red do not deface Web pages but still launch the DDoS attack. Code Red was first reported on July 17, 2001. The worm is believed to have started at a university in Guangdong, China. The worm scans the Internet, identifies vulnerable systems, and infects these systems by installing itself. Each newly installed worm joins all the others, causing the rate of scanning to grow rapidly. The first version of Code Red created a randomly generated list of Internet addresses to infect. However, the algorithm used to generate the list was flawed, and infected systems ended up reinfecting each other. The subsequent versions target victims a bit differently, increasing the rate of infection. Those affected are users with a Microsoft IIS server installed with Windows NT version 4.0 and Windows 2000. The original variant of Code Red (CRv1) can deface the infected computer’s Web site and used the infected computer to perform a DDoS attack against the Internet address of the www.whitehouse.gov Web site. Subsequent variants of Code Red (CRv2a and CRv2b) no longer defaced the infected computer’s Web site, making detection of the worm harder. These subsequent variants continued to target the www.whitehouse.gov Web site and used smarter methods to target new computers for infection. The uncontrolled growth in scanning can also decrease the speed of the Internet and cause sporadic but widespread outages among all types of systems. Although the initial version, CRv1, defaces the Web site, the primary impact on the server is performance degradation as a result of the scanning activity of this worm. This degradation can become quite severe since it is possible for a worm to infect the same machine multiple times. Other entities, even those that are not vulnerable to Code Red, are impacted because servers infected by Code Red scan their systems and networks. Depending on the number of servers performing this scan, these entities may experience network denial of service. This was especially true with the implementation of CRv1 since a “flaw” in the random number generator essentially targeted the same servers. As noted above, this behavior is not found in the later variants. However, the end result may be the same since CRv2a and CRv2b use improved randomization techniques that facilitate more prolific scanning. To fix an infected system, install a patch made available by Microsoft and reboot the system. (The patch should also be installed as a preventative measure.) Technical Details on How the Code Red Worm Operates: The Code Red worm has three phases – discovery and propagation, attack, and dormancy. Execution of these phases is based upon the day of the month. Phase 1: Discovery and Propagation. Between day 1 and day 19 of any month, Code Red performs its discovery and propagation function. It does this by generating 100 subprograms on an infected server. All but one of these subprograms has the task of identifying and infecting other vulnerable Web servers by scanning a generated list of Internet addresses. Once a target system is identified, Code Red uses standard Web server communication to exploit the flaw and send itself to the vulnerable server. Once a new server is infected, the process continues. CRv1 created a randomly generated list of Internet addresses to infect. However, the algorithm used to generate the random number list was “flawed,” and infected systems ended up reinfecting each other because the random list that each computer generated was the same.
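The seeding problem described above can be illustrated without reproducing any of the worm’s actual code. In the simplified sketch below, generating “random” target addresses from a fixed seed yields the identical list on every host, so the hosts end up probing one another, while seeding from a per-host source of entropy (as the later variants effectively did) spreads the scanning across different targets. The function and values are illustrative only.

```python
import random

def target_list(seed, count=5):
    """Generate a pseudo-random list of IPv4-style addresses from a given seed."""
    rng = random.Random(seed)
    return [".".join(str(rng.randint(1, 254)) for _ in range(4))
            for _ in range(count)]

# Fixed seed (analogous to the flaw in the first Code Red variant): every
# host that runs this produces the very same target list, so infected
# machines end up probing, and reinfecting, one another.
print(target_list(seed=12345))
print(target_list(seed=12345))   # identical output

# Seeding from a per-host source of entropy (analogous to the later
# variants): each host scans a different set of addresses, which spreads
# the scanning and reaches previously untouched systems faster.
print(target_list(seed=None))    # random.Random(None) seeds from the OS
print(target_list(seed=None))    # different output
```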
CRv2a and CRv2b were modified to generate properly randomized lists of Internet addresses that were more effective at identifying potential servers that had not already been attacked. Therefore, these versions can ultimately infect greater numbers of unprotected servers. CRv1 also defaced the target system's Web site. This was done by replacing the site's actual Web page with the message, "HELLO! Welcome to http://www.worm.com! Hacked by Chinese!" This message enabled system administrators to easily identify when their servers had been infected. CRv2a and CRv2b modified the functionality so the worm would no longer deface Web pages, forcing system administrators to be proactive in determining infection. Descriptions of the variants are listed below.

CRv1: Web site defacement and "random" target selection for additional attacks.
CRv2a: No Web site defacement and modified random target selection.
CRv2b: No Web site defacement and better target selection by optimizing the random number generation process (that is, better target addresses are generated).

Due to the target optimization, systems infected with version 2b are able to infect new systems at a faster rate than those infected with version 2a.

Phase 2: Attack

Between day 20 and day 27 of any month is Code Red's attack phase. Once Code Red determines the date to be within this designated attack date range, each infected server participates in a DDoS attack by sending massive amounts of data to its intended target, the numeric Internet address of the White House Web site. Since all infected servers are set to attack the same target on the same set of dates, the large amount of Internet traffic is expected to flood the Internet with data and bombard a numeric address used by www.whitehouse.gov with more data than it can handle. This flooding of data would cause the Web server to stop responding to all Web server requests, including those from legitimate users visiting the White House Web site.

Phase 3: Dormancy

From day 28 to the end of the month, the Code Red worm lies dormant, going into an infinite sleep phase. Although the worm remains in the computer's memory until the system is rebooted, Code Red will not propagate or initiate any attacks once it enters dormancy. According to testing performed by Internet Security Systems, Carnegie Mellon's CERT Coordination Center (CERT/CC), and the Federal Bureau of Investigation's (FBI) National Infrastructure Protection Center (NIPC), the dormant worm cannot be awakened to restart the process.
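The day-of-month schedule described above can be restated as a simple date check. The following sketch is illustrative only (it is not the worm's code) and merely maps a calendar day to the phase the preceding discussion assigns to it.

```python
from datetime import date

def code_red_phase(today: date) -> str:
    """Map a calendar day to the phase described in the schedule above."""
    if 1 <= today.day <= 19:
        return "discovery and propagation"  # scan for and infect vulnerable IIS servers
    if 20 <= today.day <= 27:
        return "attack"                     # flood the hard-coded target with traffic
    return "dormancy"                       # day 28 onward: sleep until a reboot clears memory

for day in (1, 19, 20, 27, 28, 31):
    print(day, code_red_phase(date(2001, 7, day)))
```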
Code Red II is also a worm that makes use of a buffer overflow vulnerability in Microsoft's IIS Web server software. Except for using the buffer overflow injection mechanism, the worm is very different from the original Code Red and its variants. In fact, it is more dangerous because it opens backdoors on infected servers that allow any follow-on remote attacker to execute arbitrary commands. There is no DDoS attack function in Code Red II. Code Red II was reported on August 4, 2001, by industry analysts. Like Code Red, the worm scans the Internet, identifies vulnerable systems, and infects these systems by installing itself. Each newly installed worm joins all the others, causing the rate of scanning to grow. Code Red II, however, mostly selects Internet addresses in the same range as the infected computer to increase the likelihood of finding susceptible victims.

Those affected are users with Microsoft IIS Web server software (versions 4.0 and 5.0) installed on Windows 2000.

Like Code Red, Code Red II can decrease the speed of the Internet and cause service disruptions. Unlike Code Red, it also leaves the infected system open to any attacker, who can then alter or destroy files and mount denial of service attacks. Specifically, because of the worm's preference to target its closest neighbors, combined with the enormous amount of scanning traffic generated by the numerous subprograms running in parallel, a large amount of broadcast request traffic is generated on the infected system's network. If several machines on a local network segment are infected, then the resulting attempt to propagate the infection to their neighbors simultaneously can generate broadcast requests at "flooding" rates. Systems on the receiving end of an effective "broadcast flood" may experience the effects of a DoS attack. Code Red II allows remote attackers and intruders to execute arbitrary commands on infected Windows 2000 systems. Compromised systems are then subject to files being altered or destroyed. This adversely affects entities that may be relying on the altered or destroyed files. Furthermore, compromised systems are also at high risk of being exploited to generate other types of attacks against other servers.

Several anti-virus software vendors have created tools that remove the harmful effects of the worm and reverse the changes made by the worm. This fix, however, is useless if the infected computer has been accessed by an attacker who installed other backdoors on the system that would be unaffected by the Code Red II patch tool. According to FedCIRC (Federal Computer Incident Response Center), due to the malicious actions of this worm, patching and rebooting an infected server will not solve the problem. The system's hard drive should be reformatted and all software should be reinstalled.

Technical Details of the Code Red II Worm

The Code Red II worm also has three phases: preparation, propagation, and Trojan insertion. Based upon current analysis, Code Red II only affects Web servers running on the Microsoft Windows 2000 operating system platform.

Phase 1: Preparation

During the preparation phase, the worm checks the current date to determine whether it will run at all. If the date is later than October 1, 2001, the worm will cease to function and will remain infinitely dormant. If the date is before October 1, 2001, all of its functions will be performed. Although this discovery may bring hope that after October 1, 2001, this worm will no longer be a threat, this date constraint can be easily changed in a variant. The worm conducts several other activities during the preparation phase. Because the functionality of Code Red II depends on both the system's environment and the current date, Code Red II checks the system's default language (e.g., English or Chinese) and stores that information. The worm also checks whether the system has been previously infected by searching for the existence of a specific file. If the file exists, Code Red II becomes dormant and does not reinfect the system. If the file does not exist, Code Red II creates the file and continues the process. Preparation is finalized when the worm disables the Windows 2000 operating system's ability to repair itself if it discovers that one of its required system files has been modified in any way. This becomes important during the Trojan insertion phase.
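The preparation-phase checks just described can be summarized as a short sequence of conditions. The sketch below is a simplified restatement, not the worm's actual code; the function and parameter names are hypothetical, and the marker file is represented by a simple flag.

```python
from datetime import date

KILL_DATE = date(2001, 10, 1)  # hard-coded cutoff described in the text

def preparation_phase(today: date, default_language: str, marker_present: bool) -> str:
    """Illustrative restatement of Code Red II's preparation-phase checks."""
    if today > KILL_DATE:
        # Later than October 1, 2001: the worm ceases to function entirely.
        return "dormant (past the built-in kill date)"
    if marker_present:
        # The marker file already exists, so the system is not reinfected.
        return "dormant (system already infected)"
    # Otherwise the worm would create the marker file and continue; the number
    # of propagation subprograms depends on the system's default language.
    subprograms = 600 if default_language == "Chinese" else 300
    return f"continue (spawn {subprograms} propagation subprograms)"

print(preparation_phase(date(2001, 8, 15), "English", marker_present=False))
print(preparation_phase(date(2001, 8, 15), "Chinese", marker_present=False))
print(preparation_phase(date(2001, 11, 1), "English", marker_present=False))
```

As noted above, the hard-coded date constraint offers little lasting protection, since a variant could easily change or remove it.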
Once the worm has completed the preparation phase, it immediately starts the propagation and Trojan insertion phases to complete the infection.

Phase 2: Propagation

Code Red II creates hundreds of subprograms to propagate itself. The number of subprograms created depends upon the default language that the worm identified in the preparation phase. If the system's default language is Chinese, 600 subprograms are created; if the default language is not Chinese, 300 subprograms are generated. The propagation phase is unique because Code Red II seeks to copy itself to computers that are mostly near the infected system. The algorithm uses the infected system's own Internet address to generate a list of random Internet addresses, and the generated list consists of Internet addresses that are closely related to the infected system's own address. The rationale is that similar systems should reside in the "neighborhood" of the infected system, resulting in an increased chance of infection. Each of the subprograms is tasked with scanning one of the randomly generated Internet addresses to identify and infect another vulnerable system. Like Code Red, this worm uses the buffer overflow vulnerability to infect its target. Once a new target is infected, the process continues.

Phase 3: Trojan Insertion

Code Red II is more malicious than the Code Red worm discussed earlier because of the Trojan horse backdoor programs that Code Red II leaves behind on the infected computer. The basic process follows. Initially, executable files are copied to specific locations on the Web server that, by necessity, are accessible to any remote user. These executable files can run commands sent by a remote attacker to the server through the use of well-crafted Web commands. A Trojan horse program is then planted on the server that allows further exploitation of the infected computer. The Trojan horse program is named after a required system program that executes when the next user logs into the system. It is also placed in a location that ensures that the Trojan horse program will be run instead of the required system program. Upon execution, the Trojan horse changes certain system settings that grant remote attackers read, write, and execute privileges on the Web server. Twenty-four to forty-eight hours after the preparation function is initiated, Code Red II forces the infected system to reboot itself. Although the reboot eliminates the memory-resident worm, the backdoor and the Trojan horse programs are left in place since they are stored on the system's disks. The reboot also restarts the IIS software, which, in turn, ensures that the Web server uses the newly compromised system settings.

Since the Trojan horse will be executed each time a user logs on, Code Red II guarantees that remote attackers will always have access to the infected system. This is important because, even if the executable files copied at the beginning of the Trojan insertion phase are deleted, the excessive privileges the Trojan horse sets at reboot are still in place. Therefore, the Trojan horse enables a remote attacker to perform similar exploits using these excessive privileges.

SirCam is a malicious computer virus that spreads through E-mail and potentially through unprotected Windows network connections. What makes SirCam stealthy is that it does not rely on the E-mail capabilities of the infected system to replicate. Other viruses, such as Melissa and ILOVEYOU, used the host machine's E-mail program, while SirCam contains its own mailing capability. Once the malicious code has been executed on a system, it may reveal or delete sensitive information. SirCam was first detected on July 17, 2001.
This mass-mailing virus attempts to send itself to E-mail addresses found in the Windows Address Book and to addresses found in cached files. It may be received in an E-mail message saying "Hi! How are you?" and requesting help with an attached file; the same message could be received in Spanish. Since the file is sent from a computer whose owner knows the recipient well enough to have the recipient's E-mail address in an address book, there is a high probability that the recipient will trust the attachment as coming from a known sender. This helps ensure the virus's success in the wild and is similar to the social engineering approach used by Melissa and ILOVEYOU. The E-mail message will contain an attachment that launches the malicious code when opened.

When installed on a victim machine, SirCam installs a copy of itself in two files. It then "steals" one of the target system's files and attempts to mail that file, with itself attached as a Trojan horse (that is, a file that appears to have desirable features), to every recipient in the affected system's address book. It can also obtain E-mail addresses from the Web browser. SirCam can also spread to other computers on the same Windows network without the use of E-mail. If the infected computer has read/write access to specific Windows network computers, SirCam copies itself to those computers, infecting them as well.

Those affected include any E-mail user and any user of a PC with unprotected Windows network connections that is on the same Windows network as an infected computer.

SirCam can publicly release sensitive information and delete files and folders. It can also completely fill the hard drive of the infected computer. Furthermore, it can contribute to a decrease in the speed of the Internet.
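SirCam's ability to build a recipient list from cached files, rather than relying only on the Windows Address Book, amounts to scanning local text for anything that looks like an E-mail address. The sketch below is purely illustrative: the folder name is hypothetical, the code only reads local files, and it sends nothing.

```python
import re
from pathlib import Path

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest_addresses(folder: str) -> set:
    """Collect E-mail addresses found in text files under a folder.

    This only reads local files and mails nothing; it simply shows how a list
    of recipients can be assembled from cached files rather than an address book.
    """
    found = set()
    for path in Path(folder).rglob("*.txt"):
        try:
            found.update(EMAIL_RE.findall(path.read_text(errors="ignore")))
        except OSError:
            continue  # unreadable file; skip it
    return found

# Example with a hypothetical folder name:
# print(harvest_addresses("cached_pages"))
```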
Organizations and individuals have recently had to contend with particularly vexing computer attacks. The most notable is Code Red, but potentially more damaging are Code Red II and SirCam. Together, these attacks have infected millions of computer users, shut down Web sites, slowed Internet service, and disrupted businesses and government operations. They have already caused billions of dollars of damage, and their full effects have yet to be completely assessed. Code Red and Code Red II are both "worms," which are attacks that propagate themselves through networks without any user intervention or interaction. Both take advantage of a flaw in a component of versions 4.0 and 5.0 of Microsoft's Internet Information Services Web server software. SirCam is a malicious computer virus that spreads primarily through E-mail. Once activated on an infected computer, the virus searches through a select folder and mails user files, acting as a "Trojan horse," to E-mail addresses in the user's address book. In addition to spreading, the virus can delete files on a victim's hard drive or fill the remaining free space on the hard drive, making it impossible to save files or print. On July 19, 2001, the Code Red worm infected more than 250,000 systems in just nine hours, causing more than $2.4 billion in economic losses. SirCam is allegedly responsible for the leaking of secret documents from the Ukrainian government. U.S. government agencies do not have effective information security programs to prevent and respond to these attacks. They often lack effective access controls over their computer resources and consequently cannot protect these assets against unauthorized modification, loss, and disclosure. However, several agencies have taken significant steps to redesign and strengthen their information security programs. Also, Congress recently enacted legislation to provide a comprehensive framework for establishing and ensuring the effectiveness of information security controls over information resources that support federal operations and assets.
The Navy ordnance business area, which consists of the Naval Ordnance Center (NOC) headquarters and subordinate activities, such as Naval weapons stations, operates under the revolving fund concept as part of the Navy Working Capital Fund. It provides various services, including ammunition storage and distribution, ordnance engineering, and missile maintenance, to customers who consist primarily of Defense organizations, but also include foreign governments. Revolving fund activities rely on sales revenue rather than direct congressional appropriations to finance their operations and are expected to operate on a break-even basis over time—that is, to neither make a profit nor incur a loss, but to recover all costs. During fiscal year 1996, the Navy ordnance business area reported revenue of about $563 million and costs of about $600 million, for a net operating loss of about $37 million. In accordance with current Department of Defense (DOD) policy, this loss and the $175 million the business area lost during fiscal years 1994 and 1995 will be recouped by adding surcharges to subsequent years’ prices. As discussed in our March 1997 report, higher-than-expected overhead costs were the primary cause of the losses that the business area incurred during fiscal years 1994 through 1996. We also testified on this problem in May 1997, and recommended that the Secretary of the Navy develop a plan to streamline the Naval ordnance business area’s operations and reduce its overhead costs. The Navy has initiated a restructuring of the business area that, according to the Secretary of the Navy, is “akin to placing it in receivership.” The objective of our audit of the Navy ordnance business area was to assess the Navy’s efforts to reduce costs and streamline its operations. Our current audit of the restructuring of the Navy ordnance business area is a continuation of our work on the business area’s price increases and financial losses (GAO/AIMD/NSIAD-97-74, March 14, 1997). In that report we recommended that the Secretary of Defense direct the Secretary of the Navy to develop a plan to streamline the Navy ordnance business operations and reduce its infrastructure costs, including overhead. This plan should (1) concentrate on eliminating unnecessary infrastructure, including overhead, (2) identify specific actions that need to be accomplished, (3) include realistic assumptions about the savings that can be achieved, (4) establish milestones, and (5) clearly delineate responsibilities for performing the tasks in the plan. To evaluate the actions being taken or considered by the NOC to streamline its operations and reduce costs, we (1) used the work that we performed in analyzing the business area’s price increases and financial losses and (2) analyzed budget reports to identify planned actions and discussed the advantages and disadvantages of the planned actions with Navy, OSD, U.S. Transportation Command, and Joint Staff officials. In analyzing the actions, we determined (1) if specific steps and milestones were developed by the NOC to accomplish the actions, (2) whether the initiatives appeared reasonable and could result in improved operations, (3) what dollar savings were estimated to result from the implementation of the actions, (4) whether the actions went far enough in reducing costs and improving operations, and (5) what other actions not being considered by the NOC could result in further cost reductions or streamlined operations. 
We did not independently verify the financial information provided by the Navy ordnance business area. We performed our work at the Office of the DOD Comptroller and Joint Staff, Washington, D.C.; Offices of the Assistant Secretary of Navy (Financial Management and Comptroller), Naval Sea Systems Command, Naval Air Systems Command, and Headquarters, Defense Finance and Accounting Service, all located in Arlington, Virginia; Headquarters, U.S. Atlantic Fleet, Norfolk, Virginia; Naval Ordnance Center Headquarters, Indian Head, Maryland; Naval Ordnance Center Atlantic Division, Yorktown, Virginia; Naval Ordnance Center Pacific Division, Seal Beach, California; Naval Weapons Station, Yorktown, Virginia; Naval Weapons Station, Charleston, South Carolina; Naval Weapons Station, Earle, New Jersey; Naval Weapons Station, Seal Beach, California; Naval Weapons Station, Concord, California; Naval Weapons Station Detachment, Fallbrook, California; Naval Warfare Assessment Division, Corona, California; and U.S. Transportation Command, Scott Air Force Base, Illinois. Our work was performed from June 1996 through September 1997 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report. The Under Secretary of Defense (Comptroller) provided us with written comments, which we incorporated where appropriate. These comments are reprinted in appendix I. The Navy has incorporated a goal to reduce annual costs by $151 million into its ordnance business area’s budget estimate and has identified the major actions that will be taken to achieve this goal. Our analysis of available data indicates that the planned actions should result in substantial cost reductions and more streamlined operations. However, we cannot fully evaluate the reasonableness of the cost reduction goal at this time because the Navy does not expect to finalize the cost reduction plan until October 1997. During the fiscal year 1998 budget review process, OSD officials worked with the Navy to formulate a restructuring of the Navy ordnance business area. According to the budget estimate the Navy submitted to the Congress in February 1997, this restructuring will allow the ordnance business area to achieve substantial cost and personnel reductions without adversely affecting ordnance activities’ ability to satisfy their customers’ peacetime and contingency requirements. Specifically, the budget estimate indicated that between fiscal years 1996 and 1999, the business area’s civilian and military fiscal year end strengths will decline by 18 percent and 23 percent, respectively, and its annual costs will decline by $151 million, or 25 percent. The budget also indicated that the business area will increase its fiscal year 1998 prices in order to recover $224 million of prior year losses and achieve a zero accumulated operating result by the end of fiscal year 1998. The Navy’s fiscal year 1998 budget submission also indicated that the planned restructuring of the business area (1) is based on an assessment of whether current missions should be retained in the business area, outsourced to the private sector, or transferred to other organizations and (2) will make fundamental changes in how the business area is organized and conducts its business. 
Our assessment of the individual actions—most of which are expected to be initiated by October 1997 and completed during fiscal year 1998—shows that the Navy is planning to reduce costs by eliminating or consolidating redundant operations and reducing the number of positions in the business area. These actions, which are listed below, should help to streamline Navy ordnance operations and reduce costs.

Properly sizing the business area's workforce to accomplish the projected workload by eliminating about 800 positions, or about 18 percent of the total, before the end of October 1997.
Enhancing the business area's ability to respond to unanticipated workload changes by increasing the percentage of temporary workers in the workforce from 8 percent to 20 percent.
Enhancing the business area's ability to identify redundant ordnance engineering capability and to streamline its information resource functions by consolidating management responsibility for these areas by October 1, 1997.
Reducing overall operating costs by significantly cutting back on operations at the Charleston and Concord Weapons Stations, beginning in October 1997.
Eliminating redundant capability and reducing costs by consolidating (1) some weapons station functions, such as safety and workload planning, at fewer locations, (2) inventory management functions at the Inventory Management and Systems Division, and (3) maintenance work on the Standard Missile at the Seal Beach Naval Weapons Station.
Reducing overhead contract costs, such as utilities and real property maintenance, during fiscal year 1998.
Enhancing business area managers' ability to focus on their core ordnance missions of explosive safety, ordnance distribution, and inventory management by transferring east coast base support missions to the Atlantic Fleet on October 1, 1997.

The Navy's planned restructuring of its ordnance business area will reduce overhead costs and is an important first step toward the elimination of redundant capability both within the business area and between the business area and other organizations. However, as discussed in the following sections, our analysis indicates that there are opportunities for additional cost reductions by (1) developing and implementing a detailed plan to eliminate redundant ordnance engineering capability, (2) converting military guard positions to civilian status, and (3) implementing two actions that Navy ordnance officials are currently considering.

Navy ordnance officials plan to consolidate management responsibility for the business area's nine separate ordnance engineering activities under a single manager on October 1, 1997. This will give that manager visibility over all of the business area's engineering resources and should facilitate more effective management of those resources. However, it will not result in any savings unless action is also taken to eliminate the redundant ordnance engineering capability that previous studies have identified both within the ordnance business area and between the business area and other Navy organizations. For example, a 1993 Navy study estimated that 435 work years, or $22 million, could be saved annually by reducing Navy-wide in-service ordnance engineering functions from 20 separate activities to 8 consolidated activities. However, Navy ordnance officials stated that these consolidations were never implemented.
They also stated that although they did not know why the consolidations were not implemented, they believe it was because (1) the Navy’s ordnance engineering personnel are managed by the NOC and three different major research and development organizations and (2) the Navy did not require these four organizations to consolidate their ordnance in-service engineering functions. Since 1954, DOD Directive 1100.4 has required the military services to staff positions with civilian personnel unless the services deem a position military essential for reasons such as combat readiness or training. This is primarily because, as we have previously reported, on average, a civilian employee in a support position costs the government about $15,000 per year less than a military person of comparable pay grade. Our analysis showed that the percentage of military personnel in the NOC workforce is about six times greater than in other Navy Working Capital Fund activities, with most of these positions being military guards such as personnel who guard access to the weapons station at the main entrance. Further, Navy ordnance officials indicated that they know of no reason why the guard positions should not be converted to civilian status. In fact, these officials said that they would prefer to have civilian guards since they are cheaper than military guards, and they noted that all of their activities already have some civilian security positions. Consequently, the Navy can save about $6.8 million annually by converting the NOC’s guard positions to civilian status (based on the $15,000 per position savings estimate). NOC officials told us that they reviewed the need for all of their military positions, and indicated that they plan to eliminate some of these positions. However, they stated that they do not plan to convert any military guard positions to civilian status. A Navy Comptroller official told us that (1) all of the NOC’s guard functions will probably be transferred to the Atlantic and Pacific fleets as part of the ordnance business area restructuring and (2) the fleet commanders, not the NOC, should, therefore, decide whether the military guard positions should be converted to civilian status. Navy ordnance officials are currently considering two additional actions—further consolidating the business area’s missile maintenance work and charging individual customers for the storage of ammunition—that would result in additional cost reductions and a more efficient operation, if implemented. As discussed below, consolidating missile maintenance work would allow the business area to reduce the fixed overhead cost that is associated with this mission, and charging customers for ammunition storage services would give customers an incentive to either relocate or dispose of unneeded ammunition and, in turn, could result in lower storage costs. The Navy ordnance business area, which has had a substantial amount of excess missile maintenance repair capacity for several years, is being forced to spread fixed missile maintenance overhead costs over a declining workload base that is expected to account for only 3 percent of the business area’s total revenue in fiscal year 1998. This problem, which is caused by factors such as force structure downsizing, continues even though the business area recently achieved estimated annual savings of $2.3 million by consolidating all maintenance work on the Standard Missile at one location. 
The following table shows the substantial decline in work related to four specific types of missiles. NOC officials are currently evaluating several alternatives for consolidating missile maintenance work, including (1) consolidating all work on air-launched missiles at one Naval weapons station, (2) transferring all or part of the business area's missile maintenance work to the Letterkenny Army Depot, the Ogden Air Logistics Center, and/or a private contractor, and (3) accomplishing all or part of the work in Navy regional maintenance centers. According to DOD, the evaluation of these alternatives should be completed in the spring of 1998. Based on our discussions with Navy ordnance and maintenance officials, the NOC's evaluations of maintenance consolidation alternatives should identify the total cost of the various alternatives, including one-time implementation costs and costs that are not included in depot maintenance sales prices (such as the cost of shipping items from coastal locations to inland depots and/or contractor plants), and assess each alternative's potential impact on readiness.

The Navy ordnance business area incurs costs to store ammunition for customers that are not required to pay for this storage service. Instead, this storage cost is added to the price charged to load ammunition on and off Naval ships and commercial vessels. As shown in the following figure, the business area's inventory records indicate that 51,231 tons, or about 43 percent, of ammunition stored at the weapons stations was not needed as of May 1, 1997, because (1) there is no requirement for it or (2) the quantity on hand exceeds the required level. If the business area charged customers for ammunition storage, the costs of the storage service would (1) be charged to the customers that benefit from this service and (2) provide a financial incentive for customers to either relocate or dispose of unneeded ammunition. This, in turn, could allow the business area to reduce the number of locations where ammunition is stored and thereby reduce operating costs. This approach has been adopted by the Defense Logistics Agency, which also performs receipt, storage, and issue functions, and the agency stated that instituting such user charges has helped to reduce infrastructure costs by allowing it to eliminate unneeded storage space. In addition, we recently recommended such an approach in our report, Defense Ammunition: Significant Problems Left Unattended Will Get Worse (GAO/NSIAD-96-129, June 21, 1996). Navy ordnance officials told us that they are currently considering charging customers for the storage of ammunition and are taking steps to do so. These officials informed us that they (1) have discussed DLA's experience in charging a storage cost with DLA officials, (2) have discussed this matter with the torpedo program manager and sent a letter addressing the cost to move the torpedoes off the weapons stations, (3) are drafting similar letters to the other ordnance program managers, and (4) are in the process of determining ammunition storage costs for use in developing storage fees.

Most aspects of the Navy's planned restructuring of its ordnance business area appear to be cost-effective. However, DOD budget documents indicate that the Navy's fiscal year 1998 budget submission for its ordnance business area did not adequately consider the impact that planned personnel reductions would have on the business area's ability to support non-Navy customers during mobilization.
These documents also indicate that the Navy was proposing to reduce the operating status of some weapons stations, including Concord. However, OSD officials were concerned about the Navy's proposal because these weapons stations would handle a majority of all DOD-wide, Army, Air Force, and U.S. Transportation Command explosive cargo in the event of a major contingency; have 10 times the explosive cargo capacity of the ports considered as alternatives; are having their facilities expanded by the Army to accomplish additional U.S. Transportation Command work; and have specialized explosive storage areas that must be retained to support current inventories of Navy missiles. OSD officials concluded that no alternative to these ports exists and that DOD must, therefore, keep these ports operational. The Deputy Secretary of Defense agreed with this assessment and, in December 1996, directed the Navy not to place any port in a functional caretaker status or reduce its ordnance handling capability until a detailed plan is (1) coordinated within OSD, the Joint Staff, and the other Military Departments and (2) approved by the Secretary of Defense.

According to U.S. Transportation Command and Navy ordnance officials, a May 1997 DOD-wide paper mobilization exercise validated the OSD officials' concerns about Concord Naval Weapons Station performing its mobilization mission. Specifically, the exercise demonstrated that, among other things, (1) the Concord Naval Weapons Station is one of three ports that are essential to DOD for getting ordnance items to its warfighters during mobilization and (2) if Concord is not sufficiently staffed or equipped, there could be a delay in getting ordnance to the warfighter during mobilization. According to Navy ordnance, OSD, the Joint Staff, and U.S. Transportation Command officials, although there is widespread agreement that Concord is needed by all of the military services to meet ammunition out-loading requirements during mobilization, there is no agreement on how to finance the personnel that will be needed in order to accomplish this mission. The Army and Air Force do not believe they should subsidize the operations of a Navy base. At the same time, Navy officials do not believe they should finance the entire DOD mobilization requirement at Concord because (1) most of the Navy's facilities in the San Francisco Bay area have been closed and Concord is, therefore, no longer needed by the Navy during peacetime, (2) the Army and Air Force need Concord more than the Navy does, and (3) Concord does not receive enough ship loading and unloading work during peacetime to keep the current workforce fully employed. Accordingly, the Navy plans to retain some personnel at Concord, but it has shifted all of its peacetime ship loading and unloading operations out of Concord and plans to gradually transfer ammunition currently stored at Concord to other locations.
Navy, OSD, and Joint Staff officials informed us that several actions are needed to ensure that Concord has sufficient qualified personnel to load ammunition onto ships: (1) revalidate the ammunition out-loading mobilization requirements for Concord, (2) determine the minimum number of full-time permanent personnel that Concord needs during peacetime in order to ensure that it can quickly and effectively expand its operations to accomplish its mobilization mission (the core workforce), (3) ensure that Concord's core workforce is sufficiently trained to accomplish its mobilization mission, and (4) determine a method, either through a direct appropriation or the Working Capital Funds, to finance Concord's mobilization requirements.

To the Navy's credit, it has acted to reduce its ordnance business area's annual cost by $151 million and has incorporated this cost reduction goal into the business area's budget estimate. Our analysis of available data indicates that, in general, the planned actions should result in substantial cost reductions and more streamlined Navy ordnance operations. The Navy could reduce its costs further and prevent a possible degradation of military readiness by taking the additional actions recommended in this report. Further, the Navy still needs to ensure that a final restructuring plan is completed so that it can tie together all of its planned actions and establish specific accountability, schedules, and milestones as needed to gauge progress.

In order for the Concord Weapons Station to accomplish its mobilization mission, we recommend that the Secretary of Defense revalidate the amount of ammunition Concord Weapons Station needs to load onto ships during mobilization, direct the Secretary of the Navy to determine the minimum number of personnel Concord Weapons Station needs during peacetime in order to ensure that it can quickly and effectively expand its operations to accomplish its mobilization mission, and ensure that Concord's core workforce is sufficiently trained to accomplish its mobilization mission.

We recommend that the Secretary of the Navy
incorporate into the NOC's detailed cost reduction plan (1) specific actions that need to be accomplished, (2) realistic assumptions about the savings that can be achieved, (3) milestones, and (4) clearly delineated responsibilities for performing the tasks in the plan;
evaluate the cost-effectiveness of (1) consolidating all or most of the business area's missile maintenance workload at one location and/or (2) transferring all or some of this work to public depots or the private sector;
develop and implement policies and procedures for charging customers for ammunition storage services;
evaluate the appropriateness of converting military guard positions to civilian status;
direct the NOC Commander to determine if it would be cost-beneficial to convert non-guard military positions to civilian status; and
eliminate the excess ordnance engineering capability that previous studies have identified both within the NOC and between the NOC and other Navy organizations.

In its written comments on this report, which identifies the actions the Navy ordnance business area is taking to reduce costs and streamline its operations, DOD agreed fully with five of our eight recommendations. It partially concurred with the remaining three recommendations, as discussed below.
In our draft report, we recommended that the Secretary of Defense direct the Secretary of the Navy to (1) determine the minimum number of personnel Concord Weapons Station needs during peacetime in order to ensure that it can quickly and effectively expand its operations to accomplish its mobilization mission and (2) ensure that this core workforce is sufficiently trained to accomplish its mobilization mission. In partially concurring with this recommendation, DOD agreed that both of these tasks should be accomplished and that the Navy should be responsible for identifying the peacetime manning requirement. However, it indicated that this core workforce cannot be adequately trained for its mobilization mission unless it is given the appropriate amount and type of work during peacetime. DOD further stated that it will take steps during the fiscal year 1999 budget process to ensure that adequate and funded workload is provided to Concord. We agree with DOD's comment and revised our final report to recommend that DOD act to ensure that the core workforce is sufficiently trained.

Concerning our recommendation to charge customers for ammunition storage services, the Navy agreed that action should be taken to (1) store only necessary ammunition at its weapons stations and (2) transfer excess ammunition to inland storage sites or dispose of it. The Navy believes that this can be accomplished without imposing a separate fee for storing ammunition. However, Navy records show that 51,231 tons, or about 43 percent, of the ammunition stored at weapons stations was not needed as of May 1997. As stated in this report, because of the persistent nature of this problem, we continue to believe that charging customers for ammunition storage will provide the financial incentive for customers to relocate or dispose of unneeded ammunition.

Finally, concerning our recommendation to convert military guard positions to civilian positions, the Navy stated that it is in the process of transferring the Navy ordnance east coast security positions to the Atlantic Fleet and that it plans to transfer the west coast security positions to the Pacific Fleet. It believes that the two Fleet Commanders need time to evaluate the appropriateness of converting the military guard positions to civilian positions. We agree with DOD's comment that this decision should be made by the Fleet Commanders and have revised our recommendation accordingly. As part of this evaluation, the Navy needs to consider the cost of the guard positions, since a civilian employee in a support position costs the government about $15,000 per year less than a military person of comparable pay grade.

We are sending copies of this report to the Ranking Minority Member of your Subcommittee; the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Senate Committee on Appropriations, Subcommittee on Defense; the House Committee on Appropriations, Subcommittee on National Security; the Senate and House Committees on the Budget; the Secretary of Defense; and the Secretary of the Navy. Copies will also be made available to others upon request. If you have any questions about this report, please call Greg Pugnetti at (202) 512-6240. Other major contributors to this report are listed in appendix II.

Karl J. Gustafson, Evaluator-in-Charge
Eddie W. Uyekawa, Senior Evaluator
Pursuant to a congressional request, GAO reviewed financial and management issues related to the ordnance business area of the Navy Working Capital Fund, focusing on: (1) the Navy's proposed and ongoing actions to reduce the business area's costs; and (2) additional cost reduction opportunities. GAO noted that: (1) the Navy is in the process of developing the cost reduction plan GAO recommended in its March 1997 report and has proposed and begun implementing a number of actions to reduce its ordnance business area's annual operating costs by $151 million, or 25 percent, between fiscal year 1996 and 1999; (2) this is a significant step in the right direction and should result in substantial cost reductions and more streamlined operations; (3) GAO's review of the business area's operations and discussions with the Office of the Secretary of Defense (OSD) and Navy ordnance officials indicate that the Navy has both an opportunity and the authority to further reduce Navy ordnance costs; (4) specifically: (a) redundant ordnance engineering capability exists within the business area and other Navy organizations; (b) military personnel are performing work that could be performed by less expensive civilian employees; (c) redundant missile maintenance capability exists; and (d) no financial incentive exists for customers to store only needed ammunition (the business area's inventory records show that 43 percent of the ammunition stored was unneeded as of May 1, 1997) since they do not directly pay for storage costs; (5) while most of the planned cost reduction actions appear to be appropriate, it remains to be seen whether the business area will reduce costs by $151 million; (6) in addition, GAO's review of available data indicates that one of the cost reduction actions--the planned personnel reductions--may adversely affect the Concord Naval Weapons Station's ability to load ships during mobilization, thus creating potential readiness problems; and (7) these personnel reductions are likely to have little impact on the Navy, but could have a significant impact on the Army and Air Force, which would rely heavily on Concord during a major contingency operation.
Through its disability compensation program, VBA pays monthly benefits to veterans for injuries or diseases incurred or aggravated while on active military duty. VBA rates such disabilities by using its Schedule for Rating Disabilities. For each type of disability, the Schedule assigns a percentage rating that is intended to represent the average earning reduction a veteran with that condition would experience in civilian occupations. Veterans are assigned a single or combined (in cases of multiple disabilities) rating ranging from 0 to 100 percent, in increments of 10 percent. Basic monthly payments range from $115 for a 10 percent disability to $2,471 for a 100 percent disability. About 58 percent of veterans receiving disability compensation have disabilities rated at 30 percent and lower; about 9 percent have disabilities rated at 100 percent. The most common impairments for veterans who began receiving compensation in fiscal year 2005 were, in order, hearing impairments, diabetes, post-traumatic stress disorder, back-related injuries, and other musculoskeletal conditions. VA performs disability reevaluations for disabilities required by regulation and whenever it determines that it is likely that a disability has improved, or if evidence indicates there has been a material change in a disability or that the current rating may be incorrect. Federal regulations generally instruct VA to conduct reevaluations between 2 and 5 years after any initial or subsequent VA examination, except for disabilities where another time period is specifically mentioned in the regulations. The latter generally require a reexamination 6 or 12 months after the discontinuance of treatment or hospitalization. The reevaluation process starts when a VBA Rating Veterans Service Representative (RVSR) completes a disability compensation claim and determines whether the veteran should be reevaluated at some time in the future. RVSRs base this decision on a number of factors. The disability reevaluation may be mandated by the Schedule for Rating Disabilities. For example, a veteran with a 100 percent disability rating due to a heart valve replacement is required to be reevaluated 6 months after discharge from the hospital. Alternatively, the RVSR may determine that the severity of the disability may change. For instance, medical evidence may suggest that a veteran with limited range of motion will be continuing physical rehabilitation and is expected to improve. To ensure that the disability is reviewed in the future, the RVSR enters a diary date into VBA’s claims processing system, which later generates a reminder that the disability needs to be reviewed. When this reminder is generated, the veteran’s file is retrieved and an RVSR performs a preliminary assessment of whether a reevaluation should be conducted. If the RVSR determines that a reevaluation is no longer needed, the reevaluation is cancelled. For example, staff may cancel a reevaluation when a veteran dies or if the file is already being reviewed by VBA following the veteran’s claim that his disability has worsened. If the RVSR determines that a reevaluation of the disability should be conducted, the RVSR can simply review the information in the file or, if needed, collect supplemental medical information which can include the results of a physical examination. Once all of the information has been analyzed, an RVSR can make a decision to increase, decrease, or continue the current rating. Figure 1 summarizes the disability reevaluation process. 
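The diary-date mechanism just described can be sketched in simplified form. The data structure and field names below are hypothetical and are not drawn from VBA's actual claims processing system; the point is simply that a case surfaces for reevaluation only if a diary date was entered and has since matured.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class DisabilityCase:
    veteran_id: str
    combined_rating: int               # 0-100, in increments of 10
    diary_date: Optional[date] = None  # None means no reevaluation was scheduled

def matured_diaries(cases: List[DisabilityCase], today: date) -> List[DisabilityCase]:
    """Return cases whose diary dates have matured and should surface for review.

    A case with no diary date never appears here, which is why the diary date
    is the critical procedural trigger in the process described above.
    """
    return [c for c in cases if c.diary_date is not None and c.diary_date <= today]

cases = [
    DisabilityCase("veteran-A", 100, diary_date=date(2008, 1, 15)),  # follow-up scheduled
    DisabilityCase("veteran-B", 30),                                 # no diary date entered
]
print([c.veteran_id for c in matured_diaries(cases, today=date(2008, 2, 1))])  # ['veteran-A']
```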
VBA maintains a quality assurance review program known as the Systematic Technical Accuracy Review (STAR) program. VBA selects random samples of each regional office's disability compensation decisions and assesses the regional office's accuracy in processing and deciding such cases. For each decision, the STAR quality review unit reviews the documentation contained in the regional office's claim file to determine, among other things, whether the regional office complied with the Veterans Claims Assistance Act duty-to-assist requirements for obtaining relevant records, made correct service connection determinations for each claimed condition, and made correct disability rating evaluations for each condition. VBA has a fiscal year 2008 performance goal that 90 percent of compensation decisions contain no errors that could affect decision outcomes; its long-term strategic goal is 98 percent. In addition to STAR, regional offices conduct their own local quality assurance reviews. The guidance for these local quality assurance reviews calls for reviewing a random sample of an average of five claims for each RVSR per month.

VA currently projects that it will fully implement a new processing and benefits payment system, VETSNET, for its disability compensation process in May 2008. VA anticipates that VETSNET will be faster and more flexible and will have a higher capacity than VBA's aging Benefits Delivery Network (BDN). For the past 40 years, BDN has been used to process compensation and pension benefits payments to veterans and their dependents each month. However, this system is based on antiquated software programs that have become increasingly difficult and costly to maintain.

VBA's operational controls do not adequately ensure that staff schedule or conduct disability reevaluations as necessary. VBA's claims processing software does not ensure that diary dates are established. To the extent that staff do not enter diary dates, some cases that need reevaluations may never be brought to the attention of claims processing staff. As a result, some reevaluations may not be conducted. Staff can also cancel disability reevaluations, and VBA does not track or review cancelled reevaluations. Thus, VBA does not have assurance that reevaluations are being cancelled appropriately. Also, completed reevaluations are not likely to receive quality assurance reviews. VBA plans to improve some of its control mechanisms through its new claims management system, VETSNET. However, VETSNET will not address all of the issues we found regarding VBA's operational controls.

VBA's operational controls do not ensure that cases that should be reevaluated are scheduled for disability reevaluations. VA's regulations require VBA to schedule disability reevaluations either when VBA determines that a veteran's disability is likely to change or when mandated by the Schedule for Rating Disabilities. For cases where VA determines that a disability is likely to change, VBA staff must manually enter diary dates into VBA's claims processing system in order to ensure that a reminder is generated. The diary date is the only VBA procedural trigger that alerts regional offices that a claim needs to be reviewed. However, claims processing staff can complete a rating decision on a disability claim without entering a reevaluation diary date. To the extent that staff do not enter a diary date, a case that needs to be reevaluated may never be brought to the attention of claims processing staff.
As a result, the case will likely not be reevaluated. The VA Office of Inspector General has found some instances where this has occurred. For example, during a review at the Little Rock, Arkansas, regional office, the VA IG found that staff failed to enter required diary dates for 10 of 41 cases sampled at that office. VBA's electronic claims processing system also does not automatically set up diary dates for all disabilities for which a reevaluation is mandated by VA's Schedule for Rating Disabilities. According to VA, there are 31 disabilities for which reevaluations are required by the Schedule. VBA has automated diary dates for 14 of these disabilities. As a result, staff must manually enter diary dates into the system for the remaining 17 disabilities. VBA does not currently have a plan for expanding its automated diary date protocol to include all disabilities for which reevaluations are mandatory. VBA officials said that their first priority is to ensure that VETSNET is operational and their conversion plan is completed.

Once diary dates have been entered by RVSRs into the claims processing system, the dates are transferred to VBA's centralized data processing center in Hines, Illinois. When the diary dates mature, the data processing center prints and mails paper notices to VBA's regional offices alerting them that reevaluations are needed. However, once the centralized data processing center prints these notifications, the diary dates are erased from the centralized computer system. In addition, VBA does not track which disability cases were identified for reevaluation. Since the notices are single pieces of paper, they could be lost or misplaced. If this occurs, these disability reevaluations would likely be delayed or not performed at all. VBA plans to improve its ability to track reevaluations. According to VBA officials, VETSNET will eliminate the paper notification of a matured diary date. Instead, once a disability reevaluation diary date matures, VETSNET will automatically create an electronic record, which can be tracked. Although VA plans to process all disability compensation claims using VETSNET by May 2008, VBA officials told us that the automatically created electronic record would not be included. These officials were unable to provide us with a timetable for when such a control system would be rolled out.

Once the regional office receives the paper notice that a reevaluation is due, staff perform a preliminary assessment of the veteran's claim file to determine whether a more comprehensive reevaluation should be conducted. If staff determine during this preliminary assessment that a reevaluation is no longer needed, they can cancel the reevaluation. Regional office staff noted several reasons for cancelling reevaluations, such as when a veteran dies. Additionally, a reevaluation would be cancelled if the veteran reopens his or her claim because the disability has worsened. However, VBA does not track the number of cancellations or the reasons for them. Also, cancelled reevaluations are not subject to quality assurance reviews. VBA plans to improve its ability to track cancellations using VETSNET. According to VBA officials, when VETSNET is fully implemented for disability compensation claims in May 2008, VBA will be able to track the number of and reasons for cancelled disability reevaluations.

While completed disability reevaluations are subject to quality assurance review, very few are likely to be reviewed.
Disability reevaluations represent a small portion of the total disability claims workload that VBA reviews for quality. For example, reevaluations represented about 2 percent of the total number of disability claims decisions completed in fiscal year 2005. Since VBA randomly selects claims for review from the total number of disability decisions, it is not likely that VBA will review many reevaluations. Similarly, each regional office's quality assurance review would not likely select many reevaluation claims. Specifically, the local quality assurance guidance calls for reviewing a random selection of an average of five claims for each RVSR per month. Disability reevaluations are part of the sample, but since they are a small portion of the total caseload, they have a low likelihood of being selected. Some of the regional office quality assurance review staff we spoke with reported that, in the course of a month, they may see only a handful of disability reevaluation claims. Thus, VBA may not have a sufficient handle on the accuracy and consistency of these reevaluations agencywide.
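A rough calculation illustrates why reevaluations rarely appear in these local samples. It uses the figures cited above (reevaluations were about 2 percent of fiscal year 2005 decisions, and local reviews sample an average of five claims per RVSR per month) and assumes, for simplicity, that sampled claims are drawn independently at random.

```python
# Rough arithmetic on how often reevaluations surface in local quality assurance
# samples, assuming claims are drawn independently at random.
reeval_share = 0.02   # reevaluations as a share of all decisions (fiscal year 2005)
sample_size = 5       # claims sampled per RVSR per month in local reviews

expected_per_month = reeval_share * sample_size
prob_any_in_month = 1 - (1 - reeval_share) ** sample_size

print(f"Expected reevaluations per monthly sample: {expected_per_month:.2f}")         # about 0.10
print(f"Chance a monthly sample includes any reevaluation: {prob_any_in_month:.1%}")  # about 9.6%
```

Under these assumptions, the expected number of reevaluations in any one month's sample for a given RVSR is about 0.1, so most monthly samples contain none at all.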
Therefore, VA does not know if it is performing well in completing the claims scheduled for review. By not tracking this information, VA does not have a clear sense of the extent to which reevaluations are being cancelled (as noted) or whether some reevaluations are simply never started. According to VBA officials, VBA also does not collect data on the types of disabilities being reevaluated and how far in the future reevaluations are scheduled. Also according to VBA officials, VBA does not collect data on the outcomes of reevaluations and, as a result, does not have the benefit of historical results data that could be used to calibrate its decisions on which disabilities are likely to change and thus should be a higher priority for reevaluation. Regional office staff stated that such information on the disability reevaluation process could be useful in aiding their daily decision making on which disabilities to reevaluate and when to schedule them. Having such historical data could also aid VBA in workload management decisions. For example, in January 2002, as a temporary effort to free up staff for processing its backlog of disability compensation and pension claims, VBA postponed most of its currently scheduled reevaluations for 2 years. VBA made this decision without historical data on the extent to which reevaluations affect the benefit levels of disabilities and lost an opportunity to target only those cases likely to result in a change in status. As such, VBA did not know the potential number of veterans it could be over- or under-compensating for the 2 years the reevaluations were postponed. If VBA had a better data-driven feedback component, it could have avoided wholesale postponement of reviews for 2 years. Figure 2 summarizes the disability reevaluation process with an added data-driven feedback loop.

It is important that veterans have confidence in the system designed to compensate them for their service-connected disabilities and that taxpayers have faith in VBA's stewardship of the disability compensation program. Inadequate management controls could result in some veterans being under-compensated for conditions that have worsened or over-compensated for conditions that have improved. VBA is improving some of its operational controls over reevaluations. For example, through its VETSNET system VBA plans to track the number and reasons for cancellations. However, without a system to remind staff to schedule disability reevaluation diary dates or a system that automatically schedules diary dates for all claims that require reevaluation, staff could inadvertently fail to enter diary dates, and reevaluations may not be scheduled and performed as needed.

Meanwhile, measuring regional office performance requires reliable performance data. VBA cannot adequately measure how long it actually takes regional offices to complete disability reevaluations since offices use different starting points for measuring timeliness. For offices that start measuring their timeliness after the claim review has been started, the measurement can result in undercounting the total amount of time to complete a disability reevaluation. Also, without reliable performance data, VBA cannot accurately evaluate regional office timeliness or compare regional offices' performance. Therefore, VBA cannot reward good performance and take actions to improve lagging performance.
In addition, without data on the results of reevaluations, VBA cannot ensure that it is prioritizing its resources to reevaluate those veterans whose disabilities are likely to change, and that it is reevaluating those disabilities at the appropriate point in time. Moving in this direction becomes increasingly important given Operation Enduring Freedom and Operation Iraqi Freedom. Outcome data on the reevaluation process could be used to target certain disabilities in the future. For example, if VBA found that reevaluating a certain disability never resulted in a change in the rating level, then it could consider not reevaluating that disability in the future. In addition, data on the timing of reevaluations could also be used strategically to refine when disabilities are reevaluated. For example, some regional offices may be scheduling reevaluations for 2 years into the future for a particular disability, whereas other regional offices may be using a 3-year time period. This information could be combined with the outcomes of such reevaluations to refine guidance and training on scheduling reevaluations.

We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to take the following five actions to enhance VBA's disability reevaluation process:

VA should modify its electronic claims processing system so that a rating decision cannot be completed without staff completing the diary date field.

VA should modify its electronic claims processing system to ensure that a diary date is automatically generated by the system for all disabilities where a reevaluation is required by VA's Schedule for Rating Disabilities.

VBA should include cancelled reevaluations in its quality assurance reviews and should evaluate the feasibility of periodically sampling a larger number of completed disability reevaluations for quality assurance review.

VBA should clarify its guidance so that all regional offices consistently use the date they are notified of a matured diary date as the starting point for measuring timeliness.

VBA should develop a plan to collect and analyze data on the results of disability reevaluations. To the extent necessary, this information could be used to refine guidance on the selection and timing of future disability reevaluations.

In its written comments on a draft of this report (see app. II), VA generally agreed with our conclusions and concurred with our recommendations.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until two weeks after the date of this report. At that time, we will send copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. The report will also be available at GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix III.

To develop the information for this report, we analyzed Veterans Benefits Administration (VBA) workload and timeliness data on disability reevaluations.
We found that VBA workload reports, which detail the length of time it takes regional offices to complete disability reevaluations, are not reliable because VBA guidance allows regional offices to begin measuring how long disability reevaluations take at different points in time. Because VBA does not routinely collect and analyze data on the time allowed prior to reevaluating disabilities or the results of reevaluations, we requested a VBA analysis of claims-level data. In November 2006, VBA agreed to develop a one-time analysis of reevaluations completed in 2006. However, because of difficulties in developing the data, VBA was unable to provide the analysis in time for us to incorporate the results into this report.

We also reviewed federal regulations on disability reevaluations, VBA's written guidance and training materials on reevaluations, and VBA's procedures for conducting reevaluations. We discussed the procedures for ensuring that reevaluations are conducted and the information used to manage the reevaluation program with VBA headquarters and regional office officials and observed control procedures at 5 of VBA's 57 regional offices. Specifically, we visited VA's regional offices in Chicago, Illinois; Columbia, South Carolina; Muskogee, Oklahoma; Nashville, Tennessee; and Seattle, Washington. We selected the Columbia, Muskogee, and Nashville regional offices based on fiscal year 2005 VBA data that showed they completed reevaluations faster than the national average. Chicago and Seattle took longer than the national average. All five offices also completed a greater-than-average number of reevaluations. We also selected these five offices based on their geographic dispersion. During our site visits, we toured the regional offices' facilities and interviewed regional office management, 30 staff involved in regional office claims processing, 6 staff tasked with quality assurance, and other staff. We did not perform a case file review during our visits. The VA Office of Inspector General had performed a limited case file review and found that in some instances reevaluations were not scheduled where required. We built on the Inspector General's work by looking at VBA's processes for ensuring that reevaluations are scheduled when required.

The following individuals made important contributions to the report: Brett Fallavollita, Assistant Director; Martin Scire; David Forgosh; Susannah Compton; James Rebbe; Christine San; and Walter Vance.
To help ensure that veterans are properly compensated for disabilities, VA is required to perform disability reevaluations for specific disabilities. VA also performs reevaluations whenever it determines there is a need to verify either the continued existence or current severity of veterans' disabilities. VBA completed about 17,700 reevaluations in fiscal year 2005.

GAO was asked to review the Veterans Benefits Administration's (VBA) disability reevaluation program. This report assesses (1) the operational controls VA uses to ensure the effectiveness of the disability reevaluation process and (2) the management information VA collects and uses to manage the disability reevaluation process. To conduct this study, GAO analyzed VBA data, reviewed federal regulations and VBA procedures, conducted site visits, and interviewed VBA officials.

VBA's operational controls do not adequately ensure that staff schedule or conduct disability reevaluations as necessary; however, VBA is planning to improve some of the controls. VBA claims processing software does not automatically establish or prompt regional office staff to schedule a time, known as a diary date, to determine whether a disability reevaluation should proceed. Consequently, some cases that require a reevaluation may never receive one. After the diary date matures, staff perform a preliminary review of a veteran's claim file to determine if a more comprehensive reevaluation should be conducted. If staff determine during this review that a reevaluation is no longer needed, the reevaluation is cancelled. However, cancellations are not tracked or subject to quality assurance reviews to ensure adherence to program policies and procedures. VBA plans to improve some of its control mechanisms through its new claims management system, the Veterans Service Network (VETSNET), including developing the ability to track cancellations. However, VBA has no plans to include a prompt for scheduling reevaluation diary dates in VETSNET.

VBA cannot effectively manage the disability reevaluation process because some of the data it collects are inconsistent and it does not systematically collect and analyze key management data. While VBA collects data on the amount of time regional offices take to conduct disability reevaluations, these data are not consistent because regional offices use different starting points for measuring timeliness. Also, VBA does not know the types of disabilities being reevaluated, the length of time before reevaluations are conducted, or the results of the reevaluations. As a result, VBA cannot ensure that it is effectively and appropriately using its resources.
VA provides tax-free compensation to veterans who have service-connected disabilities. The payment amount is based on a disability rating scale that begins at 0 for the lowest severity and increases in 10-percent increments to 100 percent for the highest severity. More than half of initial applicants claim multiple disabilities, and veterans who believe their disabilities have worsened can reapply for higher ratings and more compensation. For veterans who claim more than one disability, VA rates each claim separately and then combines them into a single rating. About two-thirds of compensated veterans receive payments based on a rating of 30 percent or less. At the base compensation level, these payments range from $98 per month at 10-percent disability to $288 per month at 30-percent disability. Base compensation for veterans with a 100-percent disability rating is significantly higher—$2,036 per month in 2000. Disability ratings are also used to determine eligibility for certain other VA benefits. For example, veterans with a 30-percent disability rating are entitled to an additional allowance for dependents, and those with higher ratings can become eligible for free VA nursing home care and grants to adapt housing for their needs. In addition, priority for VA health care is partly tied to disability ratings.

VA has had long-standing difficulties in keeping up with its claims processing workload, resulting in increasing backlogs of pending claims. In fiscal year 1999, VA received approximately 468,000 compensation claims—about 345,000 of which were repeat claims. More than 207,000 claims were still pending at the end of fiscal year 1999—an increase of nearly 50 percent from the end of fiscal year 1996—and the average processing time was 205 days. Of the 207,000 pending claims, about 69,000 were initial claims, and about 138,000 were repeat claims.

In its 1996 report, the Veterans' Claims Adjudication Commission observed that 56 percent of veterans with pending repeat claims were rated as 30 percent or less disabled. Questioning whether VA should expend a significant share of its resources processing claims for veterans who are already compensated and have relatively minor disabilities, the Commission raised the possibility of offering lump sum payments to veterans with minimal disabilities.

Other federal agencies have established this type of payment program. For example, under DOD's disability program, mandatory lump sum payments are given to separating military personnel with less than 20 years of service and a disability rating of less than 30 percent, and the Department of Labor allows injured civilian federal employees to request lump sum payments for bodily loss or impairment instead of the scheduled duration of weekly payments.

Six countries—Australia, Canada, Germany, Great Britain, Israel, and Japan—provide lump sum payments to at least some of their disabled veterans. Britain, Canada, Israel, and Japan make these payments to veterans with minor disabilities, while Germany supplements veterans' pensions with a lump sum payment for those whose ability to work has been severely restricted. For peacetime service, Australia pays lump sum compensation for noneconomic losses from permanent impairments; it also provides a lump sum payment for a reduced capacity to work, if the incapacity is likely to be stable and would otherwise entitle the veteran to only a relatively small weekly pension.
Veterans' views captured through our survey and focus groups were based on the following features of both the lump sum and monthly payment options:

Both types of payment—monthly and lump sum—would be tax-free.

Under both types of payment, veterans would continue to be entitled to VA medical and other current benefits.

Under the monthly payment system, veterans could reapply for increased payments for a worsening disability; under the lump sum system, veterans could not reapply for additional payments for a worsening disability for which a lump sum had been received.

When the lump sum recipient dies, the surviving family would not have to repay any portion of the lump sum.

Reactions to this hypothetical framework yielded no clear consensus among compensated veterans about whether a choice between monthly payments and a lump sum should be offered to newly compensated veterans. Among compensated veterans, 49 percent said they would definitely or probably support a lump sum option for newly compensated veterans, 43 percent said they would definitely or probably not support it, and 8 percent were unsure. Respondents whose views were "definite" were also about equally split—about 24 percent definitely supported offering a choice, and about 28 percent definitely opposed it (see fig. 1).

Veterans' responses indicate that experience could influence interest in taking a lump sum payment. Among all veterans, 32 percent reported they would have been interested in taking a lump sum payment when first compensated had such an option been available. Half as many—16 percent—reported that, knowing what they know today, it would have been a good choice for them. This ratio was borne out among supporters of offering a lump sum choice—56 percent indicated they would have been interested in a lump sum payment, and 28 percent said it would have been a good choice for them.

Age and severity of disability also seemed to influence the degree of interest in taking a lump sum payment. For example, among veterans aged 43 or younger, 46 percent reported they definitely or probably would have been interested in taking a lump sum payment, compared to 21 percent of veterans aged 61 or older. Similarly, among veterans whose current disability rating is 10 percent or less, 39 percent reported definite or probable interest in a lump sum, compared to 22 percent with disability ratings of 40 percent or more (see fig. 2). Younger, more recently rated, and less severely disabled veterans—groups that expressed greater interest—could be a better gauge of newly compensated veterans' interest in taking a lump sum payment because they may be more similar to potential recipients than are other veterans. Thus, if future newly compensated veterans are offered a lump sum option, the actual percentage of those interested in it could exceed the 32 percent found among current veterans.

Although our results indicate some receptivity to a lump sum option, interest and support would likely depend on the specific design of the payment program. For example, among military retirees with 20 years of service—whose compensation is now a tax-free portion of their retirement pay—interest in a lump sum payment increased from 29 percent to 66 percent after they learned in the survey that the lump sum might be offered in addition to their full retirement pay. Veterans and military personnel in our focus groups expressed considerable interest in knowing additional details about the proposed lump sum option—particularly about the lump sum payment amount.
Others asked for clarifications about the program, such as whether there could be circumstances under which lump sum recipients could reapply for additional compensation. In reacting to the option, some indicated that they had made assumptions about the amount. Others felt they could not give an informed opinion or make a decision without more information—or the "fine print," as one individual put it. Some were skeptical and suggested that the lump sum option was a way for the government to cut VA benefits and reduce its obligations to those whose disabilities may get worse.

Through our focus group sessions and discussions with veteran and military organizations, we found that veterans and military personnel perceive advantages and disadvantages of offering a lump sum option. However, information on the actual effects of lump sum payments on veterans' financial well-being is limited. While some studies have examined how recipients use lump sum payments, they do not address how likely lump sum recipients are to be financially advantaged or disadvantaged as a result of receiving a lump sum payment rather than monthly payments.

Veterans and military personnel identified several advantages and disadvantages associated with a lump sum payment option (see table 1). These advantages and disadvantages generally weigh the benefit of financial flexibility against the risk of financial loss. Veterans and military personnel who said the lump sum payment would put recipients at risk of being less well off or unable to pay for basic necessities such as food and housing provided several reasons to support their perception. Some reported that most lump sum recipients—particularly younger veterans and those already in financial need—would not have adequate money management skills. For example, some said that recipients may squander the one-time payment before reaching old age. They also said that more lump sum recipients would spend rather than invest the money, and those who did invest would be at risk of making poor investments. These veterans and military personnel also expressed concern that the lump sum amounts would be inadequate to protect recipients from financial setbacks that could result from a progressive disability and the inability to reapply for a higher disability rating. Some were similarly concerned that the initial rating could be inaccurate or unfairly low or that the average life span on which the lump sum was calculated would be insufficient to support recipients who outlived this average. Finally, veterans and military personnel said that choice creates risk because information may be incomplete or biased, individual judgment may be poor, or both. Some said a lump sum option would actually lead to more poor judgments because people would find a large sum of money so immediately attractive that they would not adequately consider the long-term financial consequences of taking it.

On the other hand, others said that there would be benefits to a lump sum payment option. For example, some said lump sums could be used to make investments or large purchases, such as a house or an education; settle debts; or start a business. In addition, veterans and military personnel said that the benefit of providing a choice outweighed any risks. This high value placed on choice seems to underlie much of the option's support, since our survey indicated that, among veterans who supported the lump sum option, only 28 percent thought in hindsight that a lump sum would have been the better choice for them.
As one veteran said, "I don't believe that the lump sum option is a good idea, but it's America and veterans should have a choice." Another supporter of choice argued that, while a lump sum payment invested in stocks could be substantially reduced if the market falls, monthly payments could be routinely squandered. It was also pointed out that while a veteran who opted for a lump sum could outlive the average age used to calculate the payment, a veteran who chose monthly payments could die relatively young and therefore receive less total compensation. Moreover, focus group participants said that veterans who were fully informed about their options would have to take responsibility for the consequences of their choice.

Little definitive information is available to validate perceptions about the potential financial effects on veterans taking a lump sum payment. Our review of the literature and inquiries about lump sum provisions for disabled veterans in several countries yielded very few studies on veterans receiving lump sum payments, and none addressing the long-term financial effects of such payments. We did find two qualitative accounts, provided to us by British and Australian officials, which told of financial difficulties among foreign disabled veterans who received lump sum compensation before World War II. In 1939, the British Ministry of Pensions stopped allowing veterans to convert their disability pensions into lump sum payments because it found that some recipients had sustained serious financial losses, particularly through business ventures. Allowing conversions of pensions to a lump sum has never been reinstated under the British War Pensions Scheme, but lump sums are paid for lower-rated disabilities. In Australia, a lump sum provision was discontinued when some impoverished World War I veterans returned for pension benefits after exhausting their lump sum payments. While Australia's act covering service during armed conflicts still does not provide for lump sum disability compensation, a separate act directs lump sum compensation for certain disabilities incurred during peacetime service, on essentially the same basis as for other government employees.

Although not addressing long-term financial effects or disabled veterans, certain studies examine recipients' use of lump sum payments from other sources, indicating different ways recipients would typically manage a lump sum. In general, studies of retirement distributions suggest that many factors affect how individuals use lump sum payments. For example, one recent study of lump sum retirement distributions reported that recipients under age 25 spent almost half of their money on everyday expenses and consumer items, compared to older age groups who spent 22 percent or less. Another study reported that the recipient's age, education and income level, and the payment amount are influential factors, but together these factors explain less than 20 percent of the variation in saving behavior among lump sum recipients. However, findings from these studies depend on the definitions of savings, investment, and spending used, and may have less relevance for different populations and lump sum programs.

Some veterans and active-duty personnel we spoke with suggested certain strategies—some of which have been used in other lump sum payment programs—to minimize the potential risks associated with receiving a one-time payment. However, others had concerns about whether they would be effective, feasible, or fair.
To help ensure that beneficiaries make a wise choice, some veterans and military personnel suggested that VA develop an information and education plan—one that would fully inform beneficiaries of the benefits and risks of the two payment types and project for individual beneficiaries the likely effects of each. They further suggested that such information and education be provided well before the time the choice would be made to allow beneficiaries sufficient time to consider their options. To ensure unbiased information, it was also suggested that independent counseling on the payment choices be encouraged, as well as a second medical opinion on the disability. However, some expressed concern about VA's ability to develop an effective information and education strategy. This skepticism was based on their perceptions that the government's past efforts to inform and educate veterans about benefits were inadequate and that a lump sum decision would involve complex assessments of future disability, individual financial situations, and investment risks.

Veterans and military personnel also suggested strategies that they believed would limit the risk of forgone compensation or other benefits if a veteran's disability were to progress. For example, one strategy would be to delay veterans' choice of a lump sum until they are comfortable with the stability of their condition. Others said that the progression—or stability—of an individual's disability could not be predicted accurately enough to allow fully informed choice. According to VA and medical experts in disability evaluation, definitive medical knowledge is often insufficient to fully inform veterans of whether their disabling condition would progress or remain the same. The course of disability is highly individualized and can be complicated by multiple impairments. The limited historical data from our survey suggest that while some veterans get higher ratings over time for worsening disabilities, others get lower ratings for improved disabilities. For example, among veterans who received their first ratings before 1970, about 21 percent reported higher current ratings than initial ratings, and almost 17 percent had lower current ratings.

Another strategy veterans and military personnel suggested would be to estimate veterans' lifetime disabilities and use these estimates in calculating their lump sum payments. The VA Inspector General has similarly proposed that VA revise its disability rating criteria to reflect expected lifetime impairment. While projecting the progression of an individual's disability over his or her lifetime would prove difficult, determining average progression factors using VA historical data may be possible.

Another suggested strategy would be to allow reevaluations of lump sum recipients' disability ratings—not for the purpose of providing additional payment but to determine their eligibility, and that of their dependents, for other VA benefits that are tied to disability ratings, such as medical care or survivor benefits. Some participants suggested, however, that disabled veterans should also be able to seek reevaluation for additional compensation payments if their disability progresses.

Other strategies for reducing the financial risk associated with a lump sum payment were aimed at encouraging responsible financial management.
For example, focus group respondents recommended financial counseling and education; investment options, such as in the federal government's Thrift Savings Plan; or payment allocations, such as paying lump sums in allotments or initially putting the money into a trustee account. It was also suggested that returns on investments could be tax-free. However, concerns were also raised about these strategies, including perceptions that the government would not be able to successfully instruct people on how to manage their money and that these strategies would increase government bureaucracy.

Some of the suggested strategies were aimed at protecting vulnerable populations from financial risk. Specifically, some suggested that a lump sum payment option not be offered to those who would be least able to manage the money well—such as those who have been declared incompetent or have a history of significant psychological disabilities—or that the lump sum payment be assigned to someone who could manage the money for the payee. One concern that was raised with this type of strategy was that there would not be enough time to declare a newly compensated veteran incompetent or in need of a representative before the veteran was offered a choice. A similar strategy suggested by veterans and military personnel was to limit the lump sum option to the least financially vulnerable—that is, veterans who would not be likely to suffer great economic hardship if they were to lose the lump sum payment. These veterans would include those who would receive small monthly compensation payments, have stable or less severe disabilities, or have alternative income sources. However, respondents raised concerns that any safeguard restricting who would be offered the lump sum option could be viewed as unfair.

In its written comments, VA highlighted our points that veterans had mixed views about offering this hypothetical lump sum program and that further development of program details could affect veterans' views. (The full text of VA's comments is presented in app. II.)

We are sending copies of this report to the Honorable Hershel W. Gober, Acting Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions concerning this report, please contact me at (202) 512-7101 or one of the GAO contacts listed in appendix III. Other key contributors to this report are also listed in this appendix.

To gain an understanding of support for and interest in lump sum disability payments as a potential option for veterans, we surveyed and met with a variety of interested parties, including veterans currently receiving VA disability payments, active-duty service members, and military and veteran service organizations. For our survey, we mailed a questionnaire asking for views about a possible lump sum option to a representative sample of 2,481 veterans who currently receive disability compensation and reside at a domestic address. During pretests of the survey questionnaire with over 30 veterans, we discussed the perceived advantages and disadvantages of a lump sum option and what might be done to mitigate the disadvantages. We also discussed reactions to the option in focus groups of veterans and active-duty service members in the Air Force, Army, Marines, and Navy.
To determine what is known about the impact on recipients of receiving a lump sum, we reviewed relevant literature on lump sum payments and communicated with representatives from other federal agencies and foreign countries that provide some form of lump sum payment to civilian and military beneficiaries. We performed our evaluation from August 1999 through November 2000 in accordance with generally accepted government auditing standards.

The objective of our survey was to learn the views of veterans with service-connected disabilities about lump sum payments as a compensation option. To elicit their views, we asked veterans to react to a hypothetical program—offering newly compensated veterans a choice between monthly payments and a lump sum payment—with the following features: (1) Monthly payments and the lump sum payment would both be tax-free. (2) Regardless of the type of payment chosen, veterans would continue to be entitled to VA medical and other current benefits. (3) Under the monthly payment system, veterans could reapply for increased payments for a worsening disability, but under the lump sum system, veterans could not reapply for additional payments for a worsening disability for which a lump sum had been received. (4) When the lump sum recipient dies, the surviving family would not have to repay any portion of the lump sum.

The survey questions asked veterans whether VA should offer veterans a choice between monthly payments and a lump sum when they are first granted compensation and whether they would have been personally interested in a lump sum had it been available at that time. They were also asked, with the advantage of hindsight, which option would have been better for them. We pretested questions in group and individual discussions with veterans in Denver and Littleton, Colorado; Baltimore, Maryland; Washington, D.C.; and Fairfax and Fredericksburg, Virginia. These sites were chosen because of their proximity to our staff.

A sample of 2,484 veterans was drawn from VA's Compensation and Pension file as of October 23, 1999. Our population of interest was veterans currently receiving compensation for a service-connected disability who resided at domestic addresses. To minimize the probability of sending the questionnaire to veterans unable or incompetent to participate in the survey, we excluded veterans from the population with two or more psychological disabilities or a single psychological disability rated 60 percent or more, those whose records indicated incompetence, and those residing in nursing homes. After these exclusions, and also excluding those with nondomestic addresses, our sampled population covered about 94 percent of all compensated veterans in VA's file.

In addition to determining the level of support among compensated veterans for a lump sum option, we also wanted to learn from our survey what the interest in a lump sum might be if such a choice were offered. We wanted to be able to estimate the level of interest for specific categories of veterans. Therefore, we oversampled various groups to ensure that we could construct these estimates of interest within an acceptable margin of error. The population was stratified by the characteristics in table 2. Before surveying, we checked our sample file against VA records to identify veterans who had been terminated from the compensation rolls subsequent to our sample draw.
We also visually inspected the addresses and dropped from the sample three veterans whose mailing address indicated that they should have been excluded—that is, the address suggested the likelihood that the veteran was incapable of responding. A total of 2,481 questionnaires were mailed for our survey.

Our survey response rate is based on the proportion of questionnaires that were returned with usable information. We mailed our questionnaire in January 2000. A second mailing to nonrespondents occurred approximately a month later. We accepted returned questionnaires through April 26, 2000. Of the sample, 1,921 usable questionnaires were returned, for an overall response rate of 78 percent. For 16 of the sampled veterans, we received notification that the veteran had died or was ineligible for the survey. These cases were removed from the sample. Table 3 details the final disposition of the questionnaires mailed. Response rates were above 65 percent for each stratification level in the sample. To produce our estimates of responses in the population from which we sampled, we weighted each respondent's answers based on our stratification scheme.

All sample surveys are subject to sampling error, that is, the extent to which the survey results differ from what would have been obtained if the whole population had received and returned the questionnaire. Measures of sampling error are defined by two elements: the width of the confidence interval around the estimate (sometimes called the precision of the estimate) and the confidence level at which the interval is computed. The confidence interval refers to the fact that estimates actually encompass a range of possible values, not just a single point. This interval is often expressed as a point, plus or minus some value (the precision level). For example, an estimate of 75 percent plus or minus 2 percentage points means that the true population value is estimated to lie between 73 percent and 77 percent, at some specified level of confidence. The confidence level of the estimate is a measure of the certainty that the true value lies within the range of the confidence interval. We calculated the sampling error for each statistical estimate in this report at the 95-percent confidence level. This means, for example, that if we repeatedly sampled veterans from the same population and performed the analyses again, 95 percent of the resulting confidence intervals would be expected to contain the true population value. Sampling errors in this report range from 1 to 7 (plus or minus) percentage points, with most being less than 5 percentage points.
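As a rough illustration of how precision figures of this kind arise, the sketch below computes an approximate 95-percent confidence interval for a single survey proportion. It is a simplified, hypothetical calculation: the 49 percent estimate and the count of 1,921 respondents are drawn from the figures reported above, but the formula ignores the stratification weights, oversampling, and other design features that the actual survey estimates account for, so it is not the estimation method used for this report.

# Illustrative sketch only (not the survey's actual estimation code).
# Computes a simple 95-percent confidence interval for one proportion,
# ignoring the stratified design and nonresponse weighting described above.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95-percent confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

estimate = 0.49        # hypothetical example: 49 percent of respondents support offering a choice
respondents = 1921     # usable questionnaires returned

moe = margin_of_error(estimate, respondents)
low, high = estimate - moe, estimate + moe
print(f"{estimate:.0%} plus or minus {moe:.1%} "
      f"(95-percent confidence interval: {low:.1%} to {high:.1%})")

Under these simplifying assumptions, the example yields a margin of error of roughly plus or minus 2 percentage points, which falls within the 1 to 7 percentage point range of sampling errors noted above; the report's actual errors vary with the estimate and the stratum sizes involved.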
In addition to sampling errors, surveys can also be subject to other types of nonsystematic (noise) or systematic (bias) error that can affect results, such as differences in interpretation of the question or respondents' inability or unwillingness to provide correct information. Unlike sampling errors, the magnitude of the effect of nonsampling errors is not normally known; however, steps can be taken to minimize their impact. One potential source of nonsampling error that may be especially important in this survey is questionnaire construction. Our early pretests revealed that compensation benefits can be an emotion-laden subject for veterans. Some veterans had strong, unanticipated reactions to language used to phrase the question about offering a choice of payments. To ensure that the question was as clear and unbiased as possible, we did extensive pretesting of the questionnaire, making modifications based on veterans' comments. We also consulted with an outside expert in questionnaire design, who reviewed our survey instrument and provided recommendations. In addition, veterans found it difficult to respond to questions about a lump sum choice without details about what that choice might entail, especially the amount of the lump sum payment. It may be that, given a more detailed and specific lump sum option, a larger or smaller proportion of veterans would support VA's offering veterans a choice. The magnitude of the effect of these potential biases, if any, on survey results is unknown.

To more fully understand why surveyed veterans supported or opposed a lump sum option, we conducted focus groups with veterans receiving disability compensation at VA Medical Centers in Cheyenne, Wyoming, and Grand Junction, Colorado. To gauge the opinions of people who could be affected by such a change in policy, we also conducted focus groups with active-duty military members in all four services. We spoke with members of the Air Force at Buckley Air National Guard Base, Colorado; Army at Fort Carson, Colorado; Marine Corps at Quantico, Virginia; and Navy at Norfolk, Virginia. Since compensated veterans in our survey pretests had also discussed their reasons for support or opposition, we considered their input in our analysis. The sites for veterans' focus groups were in smaller cities, in part because we had already gathered reactions of veterans in some large metropolitan areas during our pretests. Regardless, findings from focus groups and pretest respondents cannot be generalized to larger populations.

To obtain information on the experiences of other government agencies offering lump sum payments to the disabled, we contacted officials administering the Department of Labor's Federal Employees Compensation, State Employment Compensation, Black Lung, and Longshore and Harbor Worker's programs. We also obtained information from the Social Security Administration about its Disability Insurance program and from DOD about its disability separation and retirement benefits. To capture the experiences of foreign governments with this type of payment, we reviewed the compensation programs for disabled veterans in Australia, Canada, Israel, Germany, Great Britain, and Japan. We selected these countries because they were the focus of lump sum discussions in the 1999 Report of the Congressional Commission on Servicemembers and Veterans Transition Assistance. We contacted officials from these countries directly or through the Department of State. Both Germany and Japan provided information about their programs in their native languages. We used translators from the Department of State to translate their responses into English.

In addition to those named above, the following staff made key contributions to this report: Sandra Davis, Linda Diggs, Deborah Edwards, Susan Lawes, Karen Sloan, Vanessa Taylor, Joan Vogel, and Greg Whitney.
Currently, veterans who are disabled while serving their country are compensated for the average reduction in earning capacity. Monthly compensation is based on the severity of a veteran's disability. After an initial rating for compensation has been determined, veterans who believe their condition has worsened may file a claim with the Department of Veterans Affairs (VA) to reevaluate their disability rating. These repeat claims outnumbered initial disability applications by nearly three to one in fiscal year 1999, dominating VA's workload. To help reduce the volume of repeat claims, the Veterans' Claims Adjudication Commission asked Congress to consider paying less severely disabled veterans compensation in a lump sum.

GAO surveyed veterans who are now receiving compensation about their reaction to a lump sum option. Veterans had mixed views. Many veterans and military personnel could see advantages and disadvantages to this new option. They also suggested some strategies that they believed could minimize the financial risks a lump sum payment option might introduce.
Strategic planning provides an important mechanism for HHS to establish long-term goals and strategies to improve the performance of its many health care workforce programs. HHS, like all executive branch agencies, is required by GPRAMA to engage in performance management tasks, such as setting goals, measuring results, and reporting progress toward these goals. As one of its GPRAMA responsibilities, HHS issues a strategic plan at least every 4 years. HHS's most recent plan covers fiscal year 2014 through fiscal year 2018 and describes four broad strategic goals:

1. Strengthen health care.
2. Advance scientific knowledge and innovation.
3. Advance the health, safety, and well-being of the American people.
4. Ensure efficiency, transparency, accountability, and effectiveness of HHS programs.

Within each strategic goal, the plan presents specific objectives that are linked to a set of strategies for accomplishing that objective, as well as a representative set of performance goals or measures for assessing progress. (See fig. 1.)

Annually, HHS releases a performance report that presents progress on the performance measures that contribute to its strategic plan. According to HHS officials, the performance measures included in this performance report are a small but representative subset of HHS's work as a whole and of the performance measures that HHS tracks annually and publishes through other means.

HHS's strategic planning efforts are led by staff offices that report directly to the Secretary of Health and Human Services. Specifically, the office of the Assistant Secretary for Planning and Evaluation (ASPE) is responsible for the strategic plan, and the office of the Assistant Secretary for Financial Resources (ASFR) is responsible for monitoring performance of HHS's various efforts and programs. ASPE and ASFR are to coordinate with HHS agencies to facilitate department-wide strategic planning and performance measurement efforts looking across all of HHS's programs and initiatives, including health workforce-related efforts. These agencies include HRSA, CMS, ACF, the Indian Health Service (IHS), and the Substance Abuse and Mental Health Services Administration (SAMHSA). (See fig. 2.) In addition, HHS has several advisory bodies that make recommendations to the Secretary about several topics. For example, the Council on Graduate Medical Education (COGME), supported by HRSA, provides an ongoing assessment of physician workforce needs, training issues, and financing policies and recommends appropriate federal and private sector efforts to address identified needs.

In fiscal year 2014, HHS obligated about $14 billion to 72 health care workforce education, training, and payment programs administered primarily through five of its agencies. HRSA manages the most programs related to health care workforce development, while CMS, ACF, IHS, and SAMHSA oversee other such programs. HRSA managed 49 of the 72 HHS workforce programs in fiscal year 2014. These programs generally provide financial assistance to students and institutions—in the form of scholarships, loan repayments, or grants—to encourage students to train and work in needed professions and regions. These programs accounted for about 8 percent of HHS's $14 billion in obligations for workforce development programs. In contrast, CMS managed 3 GME payment programs that together accounted for about 89 percent of this funding.
These payments reimburse hospitals for the cost of training medical residents and are calculated, in part, based on the number of residents at the hospital. Some of these payments are included as part of each payment that CMS makes to reimburse hospitals that train residents for care provided to eligible patients. CMS also manages 1 additional payment program that supports the training of nurses and allied health professionals, which accounts for about 2 percent of total HHS workforce funding. The remaining agencies collectively managed 19 programs, accounting for about 1 percent of total HHS workforce funding. (See fig. 3.)

In addition to funding health care workforce programs, HHS examines information about the future demand for health care services, as well as the related supply and distribution of health professionals. Specifically, HRSA periodically publishes projections regarding the supply of and demand for the health care workforce. We previously found that the agency's projections had not been updated and recommended that the agency develop a strategy and time frames to update national health care workforce projections regularly. Following that, HRSA awarded a contract to develop a new model, which, according to officials, has enabled more accurate workforce projections as well as projections for a wider array of health professions. The agency subsequently issued many of those projections later in 2014 and in 2015.

HHS's current strategic plan lacks specificity regarding how health care workforce programs contribute to its strategic plan goals. Additionally, the performance targets that HHS publicly reports do not comprehensively assess the department's progress towards achieving its broader strategic plan goals regarding the health care workforce. Finally, the department engages in some coordinated planning, but lacks comprehensive planning and oversight to ensure that its many workforce efforts address identified national needs.

HHS's strategic plan includes broad strategies to which the department's health care workforce efforts relate, but these strategies do not focus specifically on workforce issues. For example, the current 2014-2018 HHS strategic plan does not have a goal or objective specifically dedicated to the health care workforce. Instead, HHS officials stated that workforce development efforts are distributed across various broad access and quality objectives within the plan's goal of strengthening health care. Specifically, the plan includes seven strategies that contain a health care workforce component and that are distributed among three broad objectives. These strategies generally do not explicitly reference health care workforce training or education, but instead use broad statements that concurrently encompass numerous different components and methods for improving access to and quality of health care. For example, as part of one strategy, HHS seeks to improve access to comprehensive primary and preventive medical services in historically underserved areas and to support federally funded health centers. While not explicit in the plan, HHS officials indicated that developing the health care workforce is one element that contributes to these strategies. (See app. II for more details about the strategic plan.)

Past HHS strategic plans included more specific strategies and objectives related to the department's workforce programs.
However, HHS officials told us that the department decided to remove most of the workforce-specific language when developing the current plan. The prior HHS strategic plan (2010-2015) included a dedicated goal related to workforce development, with one objective and six corresponding strategies that were specific to health care workforce planning. Further, while an earlier HHS strategic plan (2007-2012) did not have a workforce-specific strategic goal, it also had one objective and corresponding strategies on health care workforce planning, similar to the detailed strategies in the 2010-2015 plan. However, HHS officials told us that in developing the current strategic plan, the department decided to remove most of the workforce-specific language because it determined that workforce development was an intermediate step in achieving the department's broader strategic goals of improving access to and quality of health care. They also noted that, due to the size of HHS—for fiscal year 2015, HHS spent over $1 trillion through its operating divisions—its strategic plan has to be high level and not too specific about any one topic.

Rather than include specificity in HHS's strategic plan, the department expects that the strategic plans of its agencies will contain additional detail about their health care workforce efforts, as is consistent with their missions, budgets, and authorities. However, we found that these agency plans did not always contain this additional detail and were not always updated in a timely manner. For example, while HRSA's 2016-2018 strategic plan contains a dedicated health care workforce goal with three related objectives and eleven strategies, CMS's current 2013-2017 plan did not elaborate on any health care workforce issues presented in the HHS strategic plan, nor did it contain other related strategies. Moreover, IHS's strategic plan was last updated in 2006, and agency officials stated that the agency was not working to update the plan. (See app. III for more details about agency strategic plans.) Consistent with GPRAMA leading practices, for strategic planning to achieve meaningful results on a crosscutting topic like workforce development, it is important that planning efforts across these agencies be coordinated with those of the department.

In addition to developing its quadrennial strategic plans, HHS has previously developed other planning documents and workforce development efforts. For example, HHS has published a series of strategic initiatives outlining the Secretary's priorities. In 2014, one of these priorities was developing the health care workforce. This document identified specific strategies for developing the health care workforce, such as targeting resources to areas of high need and strengthening the primary care workforce. The paper also discussed the need for reform to payment policy to better support emerging health care delivery models. However, as of November 2015, HHS had not released an update to this series of initiatives since the new Secretary assumed office in June 2014.

In addition, previous presidential annual budgets have included proposals for improving HHS's health care workforce programs. Moreover, in conjunction with the President's fiscal year 2015 budget, HHS released a report in February 2014 providing additional information about the fiscal year 2015 proposals.
The report described the challenges to ensuring more diversity among, and a better distribution of, health care professionals and explained the costs and benefits associated with each of the proposals.

HHS has identified a subset of performance measures that are intended to represent the effect that all existing health care workforce programs have on the department's broader goal of strengthening health care by improving access and quality. Among other places, HHS reports these workforce performance measures in its annual performance plan and report, which indicate progress towards achieving HHS strategic planning goals. From fiscal year 2014 to 2016, the number of health care workforce performance measures tracked by HHS within its annual performance report dropped from five to two, and the remaining measures focus on a small percentage of programs. Specifically, for fiscal year 2016, the two measures assess progress related to several HRSA programs—4 National Health Service Corps (NHSC) programs and 14 Bureau of Health Workforce primary care training programs—that combined represent about 3 percent of the overall funding for all HHS health care workforce programs in fiscal year 2014. According to HHS officials, the department chooses not to include as part of the annual performance report all of the performance measures tracked by HHS agencies. Officials said that these two measures were chosen to be tracked in the strategic plan and annual report because HHS identified them as part of a subset of measures that are representative of the department's overall programming efforts. However, these measures are specific to the 18 programs and do not fully assess the adequacy of the department's broader workforce efforts.

Moreover, HHS has no stated targets for assessing the effect of existing health care workforce programs on achieving the department's broader goal of strengthening health care by improving access and quality. As part of a larger measurement project, HHS tracks workforce data that it indicates are closely related to the supply of trained health care providers. Specifically, HHS tracks (1) the percentage of individuals that have a usual source of medical care and (2) the number of primary care practitioners (such as physicians, nurse practitioners, and physician assistants). However, neither of these broad workforce indicators has a stated target related to necessary provider levels. GPRAMA leading practices indicate that for performance measures to be effective, agencies need to set specific targets. While it is important to have specific performance measures that assess individual programs, the health care workforce is an issue that straddles many different programs and HHS agencies, and therefore it is also important to have broader measures that assess how these individual programs contribute to the department's overall effectiveness in developing the health care workforce to improve access to care. In the absence of these broader measures, HHS lacks information to help it comprehensively determine the extent to which programs are meeting the department's goal of strengthening health care or identify gaps between current health care workforce programs and unmet health care needs.

According to HHS, over three-quarters of the 72 health care workforce programs had performance measures tracked by the relevant HHS agency. HHS indicated that its smaller health care workforce programs generally had performance measures and targets.
For example, HRSA’s 7 largest health care workforce programs, which accounted for about 5 percent of HHS’s health care workforce obligations in fiscal year 2014, each had related performance measures. For most of these 7 programs, HRSA reported meeting or exceeding its performance targets in fiscal year 2014. For example, according to HRSA, the NHSC and Advanced Nursing Education programs exceeded their performance targets for fiscal year 2014, and the Children’s Hospital GME program met or exceeded its fiscal year 2014 performance targets. (See app. I for more information about the performance measures of the 12 largest health care workforce programs.) However, HHS lacks performance measures related to workforce development for its largest programs. Specifically, HHS’s 2 largest health care workforce programs—the Medicare GME programs run by CMS, accounting for 77 percent of obligations in fiscal year 2014—did not have performance measures directly aligned with areas of health care workforce needs identified in HRSA’s workforce projection reports. According to HHS officials, the GME programs are not aligned with the workforce objectives in HHS’s strategic plan. Leading practices identified in prior GAO work show that for individual programs to address strategic goals and objectives, it is important that the programs be aligned with these goals, and that these goals influence the daily operation of the programs. HHS does not have a consistent and ongoing effort to coordinate all of the workforce planning efforts and resources that are distributed across the department’s various offices and agencies. According to HHS officials, HHS delegates responsibility for many of its health care workforce planning efforts to its component agencies, based on each agency’s mission and expertise. These agencies also collaborate with each other occasionally on various health care workforce development efforts, such as projection reports and workforce programs. Officials said that HHS’s coordination of these efforts generally occurs during the department’s larger planning and budget process. Within the HHS Office of the Secretary, ASFR coordinates the annual budget development process and ASPE coordinates the quadrennial strategic planning process; in doing so, both lead activities to include workforce development in these efforts. In addition, ASPE occasionally collaborates with HHS advisory bodies, such as COGME, supports research on the health care workforce, and periodically creates interagency workgroups to discuss specific priorities that support the development of its budget proposals. However, outside of the context of these strategic planning and budgetary processes, ASPE does not have an ongoing formal effort to coordinate the workforce planning efforts of HHS agencies. Additionally, it does not regularly monitor or facilitate HHS interagency collaborations on workforce efforts or fully communicate identified gaps to stakeholders. A separate entity within the Office of the Secretary, ASFR, coordinates the monitoring of those performance measures for health care workforce programs that the department tracks in its annual performance plan and report. Similar to ASPE, according to officials, ASFR engages in these coordinating and monitoring efforts primarily within the context of developing the annual performance report. To achieve meaningful results, GPRAMA leading practices emphasize the importance of coordinating planning efforts across agencies and departments.
Leading practices state that when multiple federal programs that address a similar issue are spread over many agencies and departments, a coordinated planning approach is important to ensuring that efforts across multiple agencies are aligned. Specifically, a coordinated planning approach is essential to (1) setting targets for these workforce programs and other efforts; (2) identifying and communicating whether there are gaps between existing workforce programs and future needs; and (3) determining whether agencies have the necessary information to assess the reach and effectiveness of their programs. While coordination at the program level is important, it does not take the place of, or achieve the level of, overall department coordination, which we have previously found to be key to success. Recently, multiple stakeholders reported that a more coordinated federal effort—possibly managed at the department level—could help to ensure a more adequate supply and distribution of the health care workforce, especially given changes in the delivery of care. For example, in examining federal GME funding, IOM and COGME each stated that the GME program lacks the oversight and infrastructure to track outcomes, reward performance, and respond to emerging workforce challenges. IOM’s recommendations for reforming GME include developing a strategic plan for oversight of GME funding, as well as taking steps to modernize GME payment methods based on performance to ensure program oversight and accountability. Both entities suggested that GME payments are neither sustainable in the long run nor the most effective method for developing the health care workforce to meet future projected health care needs. In each case, the organization recommended the creation of a new entity to provide oversight of national workforce efforts. Moreover, Congress recognized the need for greater coordinated attention to workforce planning when it authorized the creation of the National Health Care Workforce Commission. Although the Commission has not received an appropriation, it was authorized to provide advice to HHS and Congress about workforce supply and demand and to study various mechanisms to finance education and training. According to HHS officials, to the extent possible, the department has been able to complete work related to a number of the Commission’s planned functions, but certain critical functions remain unaddressed. For example, the Commission would have been required to submit an annual report to Congress and the administration that would include, among other things, information on implications of current federal policies affecting the health care workforce and on workforce needs of underserved populations. Without a comprehensive and consistently coordinated approach, it will be difficult for HHS to ensure that workforce funding and other resources are aligned with future health needs and to provide effective oversight of this funding. Some of HHS’s largest health care workforce programs do not target areas of identified workforce needs. While HHS’s ability to adjust existing programs to target areas of emerging needs is subject to certain statutory limitations, the department has taken some steps and proposed new authorities that would allow it to better align certain programs to areas of national need. However, the proposed authorities may not fully address the alignment of HHS’s largest workforce programs with national needs.
Our review showed that although HHS’s health care workforce programs support education and training for multiple professions, the largest programs do not specifically target areas of workforce need. The two CMS Medicare GME programs, which accounted for 77 percent of HHS’s fiscal year 2014 obligations for health care workforce development, support hospital-based training of the many different types of physician specialties. However, HHS cannot specifically target existing Medicare GME program funds to projected workforce needs—such as primary care and rural areas—because the disbursement of these funds is governed by requirements unrelated to workforce shortages. As a result, the majority of Medicare GME funding is disbursed based on historical patterns, and the residency slots supported by this funding are most highly concentrated in northeastern states. However, the areas of emerging health workforce need identified by HRSA in its health care workforce projection reports include the supply of primary care physicians, as well as various physician and non-physician providers in rural communities and in ambulatory settings across the country. According to the RAND Corporation, between 1996 and 2011, the number of primary care residents increased by 8.4 percent, while residents in other specialties increased by 10.3 percent and residents in subspecialties, such as cardiology, increased by 61.1 percent. In contrast, HHS’s smaller health care workforce programs typically target emerging health care workforce needs. For example, HRSA’s five largest health care workforce education and training programs, which accounted for about 4 percent of HHS’s workforce obligations in fiscal year 2014, targeted the health professions identified as areas of need, such as primary care physicians and nurse professions, including registered nurses. According to HHS officials, the department generally has limited authority to better target workforce programs to address projected health care workforce needs. Our review showed that funding for many of the largest HHS health care workforce programs—for example, CMS’s Medicare GME program—is governed by statutory requirements unrelated to workforce needs. Specifically, the funding formulas for 5 of the 12 largest programs that we reviewed do not provide HHS with authority to realign funding for these specific programs to address projected workforce needs, while the remaining 7 programs provide the department with some flexibility to realign funding. For all 7 of these programs, HRSA officials reported that the agency has been able to exercise some flexibility in reallocating resources within a program or between similar programs, subject to existing law. HHS officials stated that although they have no authority to realign funding for programs governed by formula, the department has utilized other authorities to better align resources with health care needs. According to these officials, the agency has used demonstration authorities to test new payment models for the Medicaid GME program. HHS officials identified several demonstration projects approved by CMS for states and residency programs using their GME funds.
For example, the State Innovation Model demonstration at the Center for Medicare & Medicaid Innovation is providing financial and technical support to states for the development and testing of state-led, multi-payer health care payment and service delivery models that are intended to improve health system performance, increase quality of care, and decrease costs for Medicare, Medicaid, and Children’s Health Insurance Program beneficiaries, and for residents of participating states. According to HHS, a number of states are using the opportunity of participating in the demonstration to reform GME and make investments in physician training more accountable. Both Vermont and Connecticut have been awarded funding as “model testing states” based on their state innovation proposals. HHS indicated that these states proposed innovative GME mechanisms, outside of those traditionally supported by CMS, to help better meet their health workforce needs. HHS officials also reported that the Center for Medicare & Medicaid Innovation conducted demonstrations to test new approaches to paying providers. For example, in 2012, the center initiated the graduate nurse education demonstration to increase the number of graduate nursing students enrolled in advanced practice registered nurse training programs. The demonstration increased reimbursement for their clinical training by $200 million over 4 years. HHS has proposed additional authorities intended to help address changing health care workforce needs, although they may not fully align the department’s programs with national needs. In both fiscal years 2015 and 2016, the President’s budget proposed to reduce a portion of Medicare’s GME payments made to hospitals by 10 percent. It also proposed investing in a new program to provide additional GME funding for primary care and rural communities. The 10-year, $5.3 billion “targeted support” grant to be run by HRSA would build upon the work of HRSA’s Teaching Health Centers Graduate Medical Education Payment Program and would train 13,000 physicians in primary care or other high-need specialties in teaching hospitals and other community-based health care facilities, with a focus on ambulatory and preventive care. According to HHS officials, these proposals are intended to serve as a first step to improve the alignment of GME funding with health care needs. The fiscal year 2016 budget also proposed continuation of programs, authorized under PPACA and the Health Care and Education Reconciliation Act of 2010 (HCERA), to provide primary care providers with higher Medicare and Medicaid payments as a way to incentivize health care providers to offer primary care services to Medicare and Medicaid beneficiaries. HHS officials indicated that the implementation of these proposals would help officials recognize any successes and gaps, and, if necessary, they could then develop any additional proposals to supplement the programs. However, while implementing these proposed programs would provide greater funding to some areas of need, HHS officials stated that they do not know the full extent to which these proposals are sufficient to address identified health care workforce needs. While HHS officials indicated that the department planned to determine their sufficiency and identify remaining gaps after these proposals were enacted, the department does not have a comprehensive plan from which to evaluate the impact of the new programs or make a complete assessment of any gaps.
External stakeholders—such as IOM, the Medicare Payment Advisory Commission (MedPAC), and COGME—have identified various reforms to HHS’s largest health care workforce programs to better target these programs to emerging areas of health care workforce need. HHS officials told us that some of the reforms proposed by these stakeholders were the basis for some of the department’s past budget and legislative proposals. For example, IOM convened a panel of experts that recommended restructuring Medicare GME payments to help align the programs to the health needs of the nation. In its 2014 report, IOM proposed, among other things, (a) developing a new center at CMS to administer GME program payment reform and manage demonstrations of new GME payment models and (b) creating a transformation fund within the GME programs to finance payment demonstrations. In a report released in 2014, COGME concurred with IOM that there is a need to reform GME payments and, among other things, recommended expanding the GME program’s clinical training environment into the ambulatory and community setting. While an advocate for teaching hospitals—the Association of American Medical Colleges (AAMC)—has cautioned against reductions in GME funding, it has also proposed reforms to the GME program and Medicare payment policy to bolster primary care training and reduce geographic disparities. While HHS maintains that developing an adequate supply and distribution of the health care workforce is a priority for the department, it has removed explicit language about goals and objectives related to workforce issues from its current strategic plan, which is the primary planning mechanism to address this issue. HHS’s lack of specific planning goals for the health workforce in its current strategic plan makes it challenging for the department to plan and to maintain accountability. Moreover, the department does not currently have a comprehensive set of performance measures and targets to assess whether its workforce efforts and the specific individual workforce programs managed by its agencies are collectively meeting the department’s broader strategic goal of strengthening health care by improving access and quality. Because the responsibilities for HHS’s workforce efforts, programs, and resources are dispersed among many agencies, it is important that HHS have a department-wide approach regarding its strategies and the actions needed to ensure an adequate supply and distribution of the nation’s health care workforce. For example, while HRSA manages the largest number of workforce programs and the development of workforce projections, the vast majority of workforce development funds are administered by CMS—for which workforce planning is not a key mission. It is also important for HHS to comprehensively assess the extent to which its many workforce programs, collectively, are adequate to address changing health care workforce needs. Multiple stakeholders have made recommendations to improve these programs. However, because the majority of workforce funds must be disbursed based on statutory requirements unrelated to projected workforce needs, HHS has limited options to retarget them. HHS has proposed additional authorities in the past, but these have not been enacted, and HHS officials acknowledge that these additional authorities may not be sufficient to fully address the existing program limitations identified by stakeholders.
Without a comprehensive and coordinated approach to program planning, HHS cannot fully identify the gaps between existing programs and national needs, identify actions needed to address these gaps, or determine whether additional legislative proposals are needed to ensure that its programs fully meet workforce needs. To ensure that HHS workforce efforts meet national needs, we recommend that the Secretary of Health and Human Services develop a comprehensive and coordinated planning approach to guide HHS’s health care workforce development programs—including education, training, and payment programs—that includes performance measures to more clearly determine the extent to which these programs are meeting the department’s strategic goal of strengthening health care; identifies and communicates to stakeholders any gaps between existing programs and future health care workforce needs identified in HRSA’s workforce projection reports; identifies actions needed to address identified gaps; and identifies and communicates to Congress the legislative authority, if any, the Department needs to implement the identified actions. We provided a draft copy of this report to HHS for its review and HHS provided written comments, which are reprinted in appendix IV. In commenting on this draft, HHS concurred with our recommendation that it is important that the department have a comprehensive and coordinated approach to guide its health care workforce development programs. HHS identified areas where comprehensive and coordinated planning efforts are already underway and where additional efforts are needed. HHS identified several health care workforce planning efforts related to the elements of our recommendation, many of which we described in the draft report. For example, HHS noted that it coordinates workforce planning efforts through the HHS department-level and agency-specific budget, legislative development, health policy research and innovation work, performance management, and strategic planning. HHS also indicated that the National Health Care Workforce Commission, created under PPACA, has the potential to enhance HHS’s ability to implement a more comprehensive and coordinated planning approach, but that the commission has not received federal appropriations. In the absence of appropriations for this commission, HHS stated that it has undertaken some of the commission’s health care workforce planning and coordination activities, to the extent possible. In response to our recommendation, HHS indicated that it could convene an interagency group to assess (a) existing workforce programs, (b) performance measurement, (c) budgetary and other proposals, (d) gaps in workforce programs, and (e) potential requests to the Congress for modified or expanded legislative authority. We agree that a regular and ongoing initiative focused on the coordination of health care workforce programs could provide an important first step toward ensuring a more comprehensive and coordinated planning approach. HHS also reiterated that its health care workforce programs contribute to its broad goals of access and quality from its strategic plan, as was described in our draft report. However, it indicated that, in response to our recommendation, the department plans to add two new workforce-specific strategies to its strategic plan when it next updates the plan. HHS also provided technical comments, which we incorporated as appropriate. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at kingk@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix V.

Related Objectives

Objective E: Ensure access to quality, culturally competent care, including long-term services and support, for vulnerable populations (1 of 6 Objectives in Goal 1)

Field strength of the National Health Service Corps through scholarship and loan repayment agreements.
Percentage of individuals supported by Bureau of Health Workforce Programs who completed a primary care training program and are currently employed in underserved areas.
(2 of 13 Metrics in Objective E)

across population groups, and work with federal, state, local, tribal, urban Indian, and nongovernmental actors to address observed disparities and to encourage and facilitate consultation and collaboration among them.
Evaluate the impact of the Affordable Care Act provisions on access to and quality of care for vulnerable populations, as well as on disparities in access and quality.
Promote access to primary oral health care services and oral disease preventive services in settings including federally funded health centers, school-based health centers, and Indian Health Services-funded programs that have comprehensive primary oral health care services, and state and community-based programs that improve oral health, especially for children, pregnant women, older adults, and people with disabilities.
Help eliminate disparities in health care by educating and training physicians, nurses, and allied health professionals on disparities and cultural competency, while increasing workforce diversity in medical and allied health care professions.
Improve access to comprehensive primary and preventive medical services to historically underserved areas and support federally funded health centers, the range of services offered by these centers, and increased coordination with partners including the Aging Services Network.
(5 of 18 Strategies in Objective E)

Objective F: Improve health care and population health through meaningful use of health information technology (1 of 6 Objectives in Goal 1)

Expand the adoption of telemedicine technologies, including more remote patient monitoring, electronic intensive care units, home health, and telemedicine networks, to increase access to health care services for people living in tribal, rural, and other underserved communities, and other vulnerable and hard-to-reach populations.
(1 of 18 Strategies in Objective F)

Related Objectives

Objective C: Invest in the HHS workforce to help meet America’s health and human services needs (1 of 4 Objectives in Goal 4)

Promote the Commissioned Corps as a health resource to provide public health services in hard-to-fill assignments as well as to respond to public health emergencies.
(1 of 13 Strategies in Objective C)

Related Strategies

We will promote efforts to increase family economic security and stability by supporting our state, tribal, and community grantee partners in designing and implementing programs that focus simultaneously on parental employment and child and family well-being, including drawing from promising models in health and career pathways demonstrations.
Support curriculum development and the training of health professionals to ensure the learning, enhancement, and updating of essential knowledge and skills.
Support training and other activities that enhance the health workforce’s competency in providing culturally and linguistically appropriate care.
Expand the number and type of training and technical assistance opportunities that educate students and providers to work in interprofessional teams and participate in practice transformations.
Support technical assistance, training, and other opportunities to help safety-net providers expand, coordinate, and effectively use health information technology to support service delivery and quality improvement.
Provide information and technical assistance to ensure that HRSA-supported safety-net providers know and use current treatment guidelines, appropriate promising practices, and evidence-based models of care.
Facilitate and support the recruitment, placement, and retention of primary care and other providers in underserved communities in order to address shortages and improve the distribution of the health workforce.
Support outreach and other activities to increase the recruitment, training, placement, and retention of under-represented groups in the health workforce.
Field strength of the National Health Service Corps through scholarship and loan repayment agreements.
Percentage of individuals supported by the Bureau of Health Workforce who completed a primary care training program and are currently employed in underserved areas.
Percentage of trainees in Bureau of Health Workforce-supported health professions training programs who receive training in medically underserved communities.
Percentage of trainees in Bureau of Health Workforce programs who are underrepresented minorities and/or from disadvantaged backgrounds.
Support pre-entry academic advising, mentoring, and enrichment activities for underrepresented groups in order to promote successful health professions training and career development.
Promote training opportunities within community-based settings for health professions students and residents by enhancing partnerships with organizations serving the underserved.
Develop and employ approaches to monitoring, forecasting, and meeting long-term health workforce needs.
Provide policy makers, researchers, and the public with information on health workforce trends, supply, demand, and policy issues.

2015 Goal Six: Workforce

Develop and disseminate workforce training and education tools and core competencies to address behavioral health issues.
Develop and support deployment of peer providers in all public health and health care delivery settings.
Develop consistent data collection methods to identify and track behavioral health care workforce needs.
Increase the number of behavioral health providers (professional, paraprofessional, and peers) addressing children, adolescents, and transitional-age youth.
Increase the number of individuals trained as behavioral health peer providers.

In addition to the contact named above, William Hadley, Assistant Director; N.
Rotimi Adebonojo; Arushi Kumar; Jennifer Whitworth; and Beth Morrison made key contributions to this report.
An adequate, well-trained, and diverse health care workforce is essential for providing access to quality health care services. The federal government—largely through HHS—funds programs to help ensure a sufficient supply and distribution of health care professionals. Some experts suggest that maintaining access to care could require an increase in the supply of providers, while others suggest access can be maintained by, among other things, greater use of technology. GAO was asked to review HHS's workforce efforts. In this report, GAO examines (1) HHS's planning efforts for ensuring an adequate supply and distribution of the nation's health care workforce and (2) the extent to which individual HHS health care workforce programs contribute to meeting national needs. GAO reviewed strategic planning documents, workforce projection reports, and other related documents obtained from HHS agencies; interviewed HHS officials; and analyzed performance measures for the largest health care workforce programs operated by HHS. The Department of Health and Human Services (HHS) engages in some planning for the 72 health care workforce programs administered by its agencies, but lacks comprehensive planning and oversight to ensure that these efforts meet national health care workforce needs. HHS's current strategic plan includes broad strategies—such as improving access to comprehensive primary and preventive medical services in historically underserved areas and supporting federally funded health centers—to which department officials said the health care workforce programs relate. However, these strategies do not explicitly reference workforce issues or specify how these programs contribute towards HHS's current strategic goals and performance targets. The health care workforce performance measures tracked by HHS and its agencies are specific to individual workforce programs and do not fully assess the overall adequacy of the department's workforce efforts. The Office of the Secretary leads workforce planning efforts, but it does not have an ongoing formal effort to ensure that the workforce programs distributed across its different agencies are aligned with national needs. Multiple external stakeholders, such as the Institute of Medicine and the Council on Graduate Medical Education, have reported that graduate medical education (GME) funding lacks the oversight and infrastructure to track outcomes, reward performance, and respond to emerging workforce challenges and that a more coordinated effort could help to ensure an adequate supply and distribution of the health care workforce. Consistent with leading practices, a coordinated department-wide planning effort is important to ensure that these efforts are aligned and managed effectively to meet workforce needs. While HHS's workforce programs support education and training for multiple health professions, its largest programs do not specifically target areas of workforce need, such as for primary care and rural providers. For example, its two Medicare GME programs accounted for about three-quarters of HHS's fiscal year 2014 obligations for health care workforce development. However, HHS cannot target existing Medicare GME program funds to projected workforce shortage areas because the programs were established by statute and funds are disbursed based on a statutory formula that is unrelated to projected workforce needs. 
HHS has limited legal authority to target certain existing programs to areas of emerging needs and has taken steps to do so within its existing authorities, such as the approval of certain demonstration projects to test new payment models for Medicaid GME funds. Further, the President's budget has proposed additional authorities that would allow HHS to implement new education and training programs and payment reforms intended to support primary care providers, but these authorities have not been enacted and officials did not know the extent to which they would be sufficient to address identified needs. External stakeholders have recommended additional reforms that would allow these programs to better target areas of need. Without a comprehensive and coordinated planning approach, HHS cannot fully identify gaps and actions to address those gaps, including determining whether additional legislative proposals are needed to ensure that its programs fully meet workforce needs. GAO recommends that HHS develop a comprehensive and coordinated planning approach that includes performance measures, identifies any gaps between its workforce programs and national needs, and identifies actions to close these gaps. HHS concurred with GAO's recommendation and provided technical comments, which GAO incorporated as appropriate.
The President’s fiscal year 2014 budget request included plans for the federal government to spend over $82 billion on IT. The stated goal of the President’s IT budget request is to support making federal agencies more efficient and effective for the American people; it also states that the strategic use of IT is critical to success in achieving this goal. Of the $82 billion budgeted for IT, the budget provides that 26 key agencies plan to spend the bulk of it, approximately $76 billion. Further, of the $76 billion, over $59 billion is to be spent on O&M investments, with the remainder ($17 billion) being budgeted for development of new capabilities. As shown in figure 1, the $59 billion represents a significant majority (i.e., 77 percent) of total budgeted spending for these agencies ($76 billion). Although O&M spending by these agencies is about 77 percent of total IT spending, the amount spent by each agency varies from a high of 98 percent to a low of 46 percent (as shown in the following table). Development spending, which is intended for the inclusion of new capabilities, accounts for approximately 23 percent of the total amount to be spent on IT in fiscal year 2014 by these agencies. However, investments in development vary greatly by agency, from a high of 54 percent at the Department of Transportation to a low of 2 percent at the National Aeronautics and Space Administration. Further, in addition to including amounts to be spent on IT development and O&M, the budget also specifies how the total $76 billion budgeted for IT is to be spent on agency IT investments by the following three categories: those solely under development ($6 billion), those involving activities and systems that are in both development and O&M—known as mixed life cycle ($40 billion), and those existing operational systems—commonly referred to by OMB as steady state investments—that are solely in O&M ($30 billion). To assist agencies in managing their investments, Congress enacted the Clinger-Cohen Act of 1996, which requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and report to Congress on the net program performance benefits achieved as a result of these investments. Further, the act places responsibility for managing investments with the heads of agencies and establishes chief information officers to advise and assist agency heads in carrying out this responsibility. In carrying out its responsibilities, OMB uses several data collection mechanisms to oversee federal IT spending during the annual budget formulation process. Specifically, OMB requires federal departments and agencies to provide information to it related to their IT investments (called exhibit 53s) and capital asset plans and business cases (called exhibit 300s). Exhibit 53. The purpose of the exhibit 53 is to identify all IT investments—both major and nonmajor—within a federal organization. Information included on agency exhibit 53s is designed, in part, to help OMB better understand what agencies are spending on IT investments. The information also supports cost analyses prescribed by the Clinger-Cohen Act. As part of the annual budget, OMB publishes a report on IT spending for the federal government representing a compilation of exhibit 53 data submitted by agencies.
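To make the arithmetic behind these budget figures concrete, the sketch below recomputes the O&M and development shares from the rounded totals cited above ($59 billion in O&M and $17 billion in development out of the $76 billion planned by the 26 key agencies), along with the split across the three investment categories. This is a minimal illustration, not an official analysis; the dollar amounts come from the text, the function and variable names are ours, and small differences from the report's percentages reflect rounding of the dollar inputs.

```python
# Minimal sketch: recompute the budget shares cited above from rounded totals.
# Dollar figures are in billions and come from the report; names are illustrative.

def share(part: float, total: float) -> int:
    """Return part as a percentage of total, rounded to the nearest percent."""
    return round(100 * part / total)

agency_total = 76.0   # planned IT spending by the 26 key agencies
om_spending = 59.0    # operations and maintenance (O&M)
dev_spending = 17.0   # development of new capabilities

print(share(om_spending, agency_total))   # 78 (the report cites 77 percent from exact figures)
print(share(dev_spending, agency_total))  # 22 (the report cites approximately 23 percent)

# Split of the same $76 billion by investment category, as described in the text.
categories = {"development only": 6.0, "mixed life cycle": 40.0, "steady state": 30.0}
for name, amount in categories.items():
    print(f"{name}: {share(amount, agency_total)} percent")
```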
According to OMB guidance, a major IT investment requires special management attention because of its importance to the mission or function of the government; significant program or policy implications; high executive visibility; high development, operating, or maintenance costs; unusual funding mechanism; or definition as major by the agency’s capital planning and investment control process. Exhibit 300. The purpose of the exhibit 300 is to provide a business case for each major IT investment and to allow OMB to monitor IT investments once they are funded. Agencies are required to provide information on each major investment’s cost, schedule, and performance. In addition, in June 2009, to further improve the transparency into and oversight of agencies’ IT investments, OMB publicly deployed a website, known as the Federal IT Dashboard (Dashboard), which replaced its Management Watch List and High-Risk List. As part of this effort, OMB issued guidance directing federal agencies to report, via the Dashboard, the performance of their IT investments. Currently, the Dashboard publicly displays information on the cost, schedule, and performance of major federal IT investments at key federal agencies. In addition, the Dashboard allows users to download exhibit 53 data, which include information on both major and nonmajor investments. According to OMB, these data are intended to provide a near real-time perspective of the performance of these investments, as well as a historical perspective. Further, the public display of these data is intended to allow OMB, other oversight bodies, and the general public to hold government agencies accountable for results and progress. Since the Dashboard was implemented, we have reported and made recommendations to improve its data accuracy and reliability. In 2010, 2011, and 2012, we reported on the progress of the Dashboard and made recommendations to further improve how it rates investments relative to current performance. OMB concurred with our recommendations and has actions planned and underway to address them. Further, OMB has developed guidance that calls for agencies to develop an OA policy for examining the ongoing performance of existing legacy IT investments to measure, among other things, whether the investment is continuing to meet business and customer needs and is contributing to meeting the agency’s strategic goals. This guidance calls for the policy to provide for an annual OA of each investment that addresses the following: cost, schedule, customer satisfaction, strategic and business results, financial goals, and innovation.
To address these areas, the guidance specifies the following 17 key factors that are to be addressed:

an assessment of current costs against life-cycle costs;
a structured schedule assessment (i.e., measuring the performance of the investment against its established schedule);
a structured assessment of performance goals (i.e., measuring the performance of the investment against established goals);
identification of whether the investment supports customer processes as designed and is delivering the goods and services it was designed to deliver;
a measure of the effect the investment has on the performing organization;
a measure of how well the investment contributes to achieving the organization’s business needs and strategic goals;
a comparison of current performance with a pre-established cost baseline;
areas for innovation in the areas of customer satisfaction, strategic and business results, and financial performance;
indication of whether the agency revisited alternative methods for achieving the same mission needs and strategic goals;
consideration of issues, such as greater utilization of technology or consolidation of investments, to better meet organizational goals;
an ongoing review of the status of the risks identified in the investment’s planning and acquisition phases;
identification of whether there is a need to redesign, modify, or terminate the investment;
an analysis of the need for improved methodology (i.e., better ways for the investment to meet cost and performance goals);
lessons learned;
cost or schedule variances;
recommendations to redesign or modify an asset in advance of potential problems; and
overlap with other investments.

With regard to overseeing the agencies’ development of OA policies and their annual performance of these analyses, OMB officials responsible for governmentwide OA policy stated that they expect agencies to perform all the steps specified in the guidance and to be prepared to show documentation as evidence of compliance with the guidance. In October 2012, we reported on five agencies’ use of OAs (during fiscal year 2011) and how they varied significantly. Specifically, of the five agencies, we found that three—namely, DOD, Treasury, and VA—did not perform analyses on their 23 major steady state investments with annual budgets totaling $2.1 billion. The other two agencies—DHS and HHS—performed analyses but did not do so for all investments. For example, DHS analyzed 16 of its 44 steady state investments, meaning 28 investments with annual budgets totaling $1 billion were not analyzed; HHS analyzed 7 of its 8 steady state investments, thus omitting a single investment totaling $77 million from being assessed. We also found that of those OAs performed by these two agencies, none fully addressed all the key factors. Specifically, our analysis showed that only about half of the key factors were addressed in these assessments. Consequently, we recommended, among other things, that the agencies conduct annual OAs and, in doing so, ensure they are performed for all investments and that all factors are fully assessed. To ensure this is done and to provide transparency into the results of these analyses, we also recommended that OMB revise its guidance to include directing agencies to post the results on the Dashboard. OMB and the five agencies agreed with our recommendations and have efforts planned and underway to address them.
In particular, OMB issued guidance (dated August 2012) to the agencies directing them to report OA results along with their fiscal year 2014 budget submission documentation (e.g., exhibit 300) to OMB. According to OMB officials, they are currently establishing a process for how agencies are to provide the information to OMB, which they plan to have in place over the next 6 months. As part of this, OMB is defining what it plans to do with the information once it is received. The 10 federal IT O&M investments with the largest budgets, identified during our review, support agencies in a variety of ways such as providing worldwide telecommunications infrastructure and information transport for DOD operations; enabling HHS to conduct research, award grants, and disseminate biomedical research and health information to the public and National Institutes of Health stakeholders; and providing SSA the capability to maintain demographic, wage, and benefit information on all American citizens, as well as ensuring the availability, changeability, stability, and security of SSA’s IT operations for the entire agency. These investments are operated by eight agencies, such as the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), and the Social Security Administration (SSA). In total, the investments accounted for about $7.9 billion in O&M spending for fiscal year 2012, which was approximately 14 percent of all such spending for federal IT O&M. The following table identifies the 10 investments and describes the agency responsible for each investment, the amount budgeted for O&M and development for fiscal year 2012, investment type, and how each investment supports the organization’s mission. Although required to do so, seven of the eight agencies did not conduct OAs on their largest O&M investments. Specifically, of the 10 O&M IT investments (with the largest budgets) we reviewed, only one agency—DHS—conducted an analysis on its investment. In doing so, the department addressed most of the required OMB factors. However, the other seven agencies—DOD, DOE, HHS, Treasury, VA, NASA, and SSA—did not conduct OAs on their O&M investments, which have combined annual O&M budgets of $7.4 billion. The following table lists the 10 investments and whether an analysis was completed for fiscal year 2012. Further, it provides the total O&M amount for the investment that had an OA and for the investments that did not have one—$529 million and $7.4 billion, respectively. With regard to the OA DHS performed on its investment (the Customs and Border Protection Infrastructure), the department addressed 14 of the 17 OMB factors. For example, in addressing the factor on assessing performance goals, DHS made efforts to consolidate software licenses and maintenance in order to eliminate redundancy and reduce costs associated with software licenses and maintenance. Although DHS addressed these 14 factors, it did not address the other 3. Specifically, the department did not (1) assess current costs against life-cycle costs, (2) perform a structured schedule assessment, and (3) compare current performance against the cost baseline and estimates developed when the investment was being planned. These factors are important because, among other things, they provide information to agency decision makers on whether an investment’s actual annual O&M costs are as they were planned to be and whether there is a need to examine more cost-effective approaches to meeting agency mission objectives.
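To illustrate how an agency or a reviewer might track coverage of the 17 OA factors discussed above, the sketch below encodes a shortened checklist and reports which factors an assessment addressed, using DHS's 14-of-17 result as the example. The factor labels are paraphrased from the OMB guidance summarized earlier; the data structure and function are illustrative only and are not an actual OMB or DHS tool.

```python
# Minimal sketch: track which OMB operational analysis (OA) factors an assessment
# addressed. Factor labels are paraphrased from the guidance summarized above;
# this is an illustrative checklist, not an official tool.

OA_FACTORS = [
    "current costs vs. life-cycle costs",
    "structured schedule assessment",
    "structured assessment of performance goals",
    "support for customer processes / delivery of intended services",
    "effect on the performing organization",
    "contribution to business needs and strategic goals",
    "comparison with pre-established cost baseline",
    "areas for innovation",
    "alternative methods revisited",
    "technology utilization or consolidation considered",
    "ongoing review of planning/acquisition risks",
    "need to redesign, modify, or terminate",
    "need for improved methodology",
    "lessons learned",
    "cost or schedule variances",
    "recommendations in advance of potential problems",
    "overlap with other investments",
]

def coverage(addressed: set[str]) -> tuple[int, list[str]]:
    """Return how many factors were addressed and which ones are missing."""
    missing = [f for f in OA_FACTORS if f not in addressed]
    return len(OA_FACTORS) - len(missing), missing

# Per the report, DHS's Customs and Border Protection Infrastructure OA
# did not address these three factors:
dhs_missing = {
    "current costs vs. life-cycle costs",
    "structured schedule assessment",
    "comparison with pre-established cost baseline",
}
addressed_count, missing = coverage(set(OA_FACTORS) - dhs_missing)
print(f"addressed {addressed_count} of {len(OA_FACTORS)} factors; missing: {missing}")
```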
Table 5 shows our analysis of DHS’s assessment of its Customs and Border Protection Infrastructure investment. With regard to why DHS’s analyses did not address all OMB factors, officials from the DHS Office of the Chief Information Officer (who are responsible for overseeing the performance of OAs departmentwide) attributed this to the department still being in the process of updating its Management Directive 102-01 and related guidance, which will provide additional instructions for completing OAs. As part of this update, department officials told us they plan to provide additional guidance on conducting OAs for programs once they have achieved full operational capability. The department expects the guidance to be completed in calendar year 2014. Further, according to DHS, once completed, this guidance will complement existing program review processes—referred to by DHS as program health assessments—that require all major IT investments in the support or mixed life-cycle phases to complete an OA every 12 months. The other seven agencies attributed not performing OAs on these investments to several factors, including relying on other management and performance reviews—such as those used as part of developing their annual exhibit 300 submissions to OMB—although OMB has stated that these reviews are not to be a substitute for conducting annual analyses. The specific reasons cited by each agency are as follows: DOD: Officials from DOD’s Defense Information Systems Agency stated that they did not conduct an OA for the Defense Information Systems Network because the investment undergoes constant oversight through weekly meetings to review issues such as the project status and accomplishments. Further, they said that the program manager exercises cost, schedule, and performance oversight using earned value management techniques. In addition, they stated that monthly reviews of actual versus planned spending are collected to flag any discrepancies from expected cost and schedule objectives. While these reviews are important steps in monitoring performance, OMB states that such ongoing efforts to manage investment performance are not a substitute for conducting an annual OA. According to the OMB guidance, OAs are to be conducted for all existing IT investments to ensure that, among other things, an investment is continuing to meet business and customer needs and is contributing to meeting the agency’s strategic goals. With regard to the Next Generation Enterprise Network, officials from the Navy who manage and oversee this investment stated that an OA was not performed because the investment is transitioning from a mature fielded system to a new service delivery model, which will become operational in 2014. Nonetheless, OMB guidance calls for agencies to also conduct annual analyses on all existing IT investments as part of ensuring that such investments continue to deliver value and support mission needs. DOE: Officials from the Office of the Chief Information Officer stated that an OA was not conducted on its Consolidated Infrastructure, Office Automation, and Telecommunications Program investment because in the summer of 2012 they began to separate it into smaller, more manageable pieces—referred to by these agency officials as deconsolidation—to better provide insight into the departmentwide infrastructure.
In addition, to gain further insight into the infrastructure spending, the DOE Chief Information Officer led an in-depth analysis in collaboration with senior IT executives, which included a commodity IT TechStat review in the fall of 2011 and a commodity IT PortfolioStat review in the fall of 2012. While these latter reviews are helpful in monitoring performance, our analysis shows that they do not fully address all 17 OMB factors. Specifically, the reviews do not address, among other things, factors in the areas of customer satisfaction, strategic and business results, and financial performance. Addressing these factors is important because it provides information to agency decision makers on whether the investment supports customer processes and is delivering the goods and services it was designed to deliver. HHS: According to officials from the department’s National Institutes of Health, the National Institutes of Health IT Infrastructure investment, which had an annual budget of $371 million for fiscal year 2012, did not undergo an OA because this investment is an aggregation of all the components’ infrastructure and not a particular system or set of systems suited for this kind of macro analysis. In addition, they noted that the National Institutes of Health does monitor the operational performance of its IT infrastructure and conducts a more strategic analysis of services within its IT infrastructure to evaluate the operational effectiveness at a strategic level. While these types of performance monitoring efforts are important, OMB guidance nonetheless calls for agencies to also conduct annual analyses on all existing IT investments as part of ensuring that such investments continue to deliver value and support mission needs. Treasury: Officials from the department’s IT Capital Planning and Investment Control branch (within the office of Treasury’s Chief Information Officer) noted that its Internal Revenue Service Main Frames and Servers Services and Support investment, which had a budget of $482 million for fiscal year 2012, was deconsolidated in fiscal year 2011 to allow for greater visibility into the infrastructure and is currently undergoing an OA; however, the officials were not able to provide documentation of this analysis at the time of our work. VA: Officials from VA’s Office of Information and Technology said an OA was not conducted on its Medical IT Support or Enterprise IT Support investments because performance is currently being reported monthly via the Federal IT Dashboard and internally through monthly performance reviews. The officials added that the department plans to develop a policy and begin conducting OAs on investments. However, VA has not yet determined when these analyses will be completed. NASA: Officials from NASA’s Office of the Chief Information Officer stated that while they did not conduct a formal OA on the NASA IT Infrastructure investment, they did review the performance of the investment using monthly performance status reviews and bimonthly service delivery transition status updates. The officials noted that these reviews address financial performance, schedule, transformation initiatives, risks, customer satisfaction, performance metrics, and business results. According to officials, the investment underwent a service delivery transition status update and a performance status review in May 2012. While these NASA reviews are essential IT management tools, they do not incorporate all 17 OMB factors.
For example, the reviews do not address, among other things, innovation and whether the investment overlaps with other systems. Fully addressing the OMB factors is essential to ensuring investments continue to deliver value and do not unnecessarily duplicate or overlap with other investments. SSA: According to officials from SSA’s Office of the Chief Information Officer, SSA’s Infrastructure Data Center investment did not undergo an analysis because it has significant development content and therefore an earned value analysis was conducted, which is called for by SSA guidance for mixed life-cycle investments. Officials stated they generally perform either an earned value analysis or an OA, as applicable to the investment. While earned value management analyses are important to evaluating investment performance, OMB guidance nonetheless calls for agencies to also conduct annual OAs on all existing IT investments as part of ensuring that such investments continue to deliver value and support mission needs. Until the agencies address these shortcomings and ensure all their O&M investments are fully assessed, there is increased risk that these agencies will not know whether these multibillion-dollar investments fully meet intended objectives, including whether there are more efficient ways to deliver their intended purpose, therefore increasing the potential for waste and duplication. For the eight selected agencies, the majority of their 401 major IT investments—totaling $29 billion—were in the mixed life-cycle phase in both spending and number of investments. Specifically, of the $29 billion, our analysis, as shown in figure 2, found that mixed life-cycle investments accounted for approximately $18 billion, or 61 percent; steady state investments accounted for approximately $8 billion, or 27 percent; and development investments accounted for approximately $3 billion, or 12 percent. With regard to the number of investments by phase, our analysis, as shown in figure 3, found that of the total 401 investments, 193, or 48 percent, were in the mixed life-cycle phase; 139, or 35 percent, were in the steady state phase; and 69, or 17 percent, were in the development phase. On an individual agency basis, table 6 provides the total amount each agency reportedly spent on IT. It also shows how each agency allocates this total among development, mixed life-cycle, and steady state investments. Further, for the mixed life-cycle investments, it shows the amounts for O&M and development. In addition, the following table provides, for each of the eight agencies, the total number of investments and, of that total, the number in development, mixed life cycle, and steady state. The implications of the above analyses—especially the results in table 6 that show mixed investments having significant amounts of funding for both development and O&M activities—are noteworthy, particularly as they relate to the oversight of such investments. More specifically, overseeing these investments will involve a set of IT management capabilities for those portions of the investment that are operational and a different set of IT management capabilities for those portions that are still under development. In the case of those portions that are operational, this will include agencies having the capability to perform thorough OAs, the importance of which is discussed earlier in this report.
For those portions still under development, OMB guidance and our best practices research and experience at federal agencies show such effective oversight will involve agencies having structures and processes—commonly referred to as IT governance and program management disciplines—that include instituting an investment review board to define and establish the management structure and processes for selecting, controlling, and evaluating IT investments; ensuring that a well-defined and disciplined process is used to select new IT proposals; and overseeing the progress of IT investments—using predefined criteria and checkpoints—in meeting cost, schedule, risk, and benefit expectations and taking corrective action when these expectations are not being met. Having these disciplines is important because they help agencies, among other things, ensure such investments are supporting strategic mission needs and meeting cost, schedule, and performance expectations. However, our experience at federal agencies has shown that agencies have not yet fully established effective governance and program management capabilities essential to managing IT investments. For example, we reported in April 2011 that many agencies did not have the mechanisms in place for investment review boards to effectively control their investments. More specifically, we reported that while these agencies largely had established IT investment management boards, these boards did not have key policies and procedures in place for ensuring that projects are meeting cost, schedule, and performance expectations. In addition, our experience at federal agencies, along with the results from this audit, has found that agencies do not consistently conduct OAs. Specifically, as noted in the background, we reported in 2012 on five agencies’ use of OAs and how they varied significantly. Of the five agencies, we found that three—namely, DOD, Treasury, and VA—did not perform analyses on 23 major steady state investments with annual budgets totaling $2.1 billion. The other two agencies—DHS and HHS—performed them but did not do so for all investments. Accordingly, we have made recommendations to these agencies to improve their use of OAs and fully implement effective governance and program management capabilities. They have in large part agreed to our recommendations and have efforts underway and planned to implement them. Most of the selected agencies reported that they did not reprogram IT O&M funds to be used on development activities, and we identified no evidence to the contrary; two agencies—Treasury and VA—reported they did so in two instances. With regard to Treasury, the department—on its CADE 2 investment, which has a total O&M budget of $40 million—reallocated a total of $10,000 to fund development activities planned for the investment. According to Treasury documentation, the cost of the investment’s operations and maintenance came in under budget by $10,000, so the department reallocated the funds to be used on new CADE 2 development efforts. Treasury reported this reallocation was discussed and approved by the Internal Revenue Service’s investment review board (the Internal Revenue Service is responsible for overseeing CADE 2) during its monthly executive steering meetings held during fiscal year 2012. With regard to VA, it reprogrammed a total of $13.3 million from O&M to development on investments within an investment category that VA referred to as a portfolio.
Specifically, during fiscal year 2012, the department reprogrammed $13.3 million from an O&M investment within its Medical Portfolio to investments under development within the portfolio requiring additional funding. This reprogramming of funds was approved by the Secretary of Veterans Affairs in June 2012. The 10 largest federal O&M IT investments represent a significant part of the federal government’s multibillion dollar commitment to operating and maintaining its IT investments. Although OMB has established that agencies are to use OAs to evaluate the performance of such investments, their use by the agencies on these investments was very limited. DHS was the only agency to perform such an assessment, and in doing so it largely addressed the required OMB factors. While Treasury and VA had planned to perform analyses, they had not done so. Further, DOD, DOE, HHS, NASA, and SSA had not intended to perform these analyses on their large O&M investments. This limited use of OAs is due in part to a number of factors, including agencies relying on other types of performance oversight reviews that can be helpful but are not intended to be a substitute for these assessments. Until these agencies address these shortcomings and ensure all their large O&M investments are fully assessed, there is increased risk that these agencies will not know whether these multibillion dollar investments fully meet intended objectives, including whether there are more efficient ways to deliver their intended purpose. To ensure that the largest IT O&M investments are being adequately analyzed, we recommend that the Secretary of Defense direct appropriate officials to perform OAs on the two investments identified in this report, including ensuring the analyses include all OMB factors; the Secretary of Energy direct appropriate officials to perform an OA on the investment identified in this report, including ensuring the analysis includes all OMB factors; the Secretary of Health and Human Services direct appropriate officials to perform an OA on the investment identified in this report, including ensuring the analysis includes all OMB factors; the Secretary of the Treasury direct appropriate officials to perform an OA on the investment identified in this report, including ensuring the analysis includes all OMB factors; the Secretary of Veterans Affairs direct appropriate officials to perform OAs on the two investments identified in this report, including ensuring the analyses include all OMB factors; the NASA Administrator direct appropriate officials to perform an OA on the investment identified in this report, including ensuring the analysis includes all OMB factors; and the Commissioner of Social Security direct appropriate officials to perform an OA on the investment identified in this report, including ensuring the analysis includes all OMB factors. In addition, we recommend that the Secretary of Homeland Security direct appropriate officials to ensure the department’s OA for the Customs and Border Protection Infrastructure is complete and assesses the missing OMB factors identified in this report. In commenting on a draft of this report, four agencies—DHS, NASA, SSA, and VA—agreed with our recommendations; two agencies—DOD and DOE—partially agreed; and two agencies—HHS and Treasury—had no comments. The specific comments from the four agencies that agreed are as follows: DHS, in its written comments, which are reprinted in appendix II, stated that it concurred with our findings and recommendation.
It also commented that DHS’s Office of the Chief Information Officer and the Office of Information Technology within Customs and Border Protection (the DHS component agency responsible for the Customs and Border Protection Infrastructure investment) are to work closely to ensure future OAs conducted on the investment fully address the OMB assessment factors. NASA, in its written comments—which are reprinted in appendix III—stated it concurred with our recommendation. NASA also stated that it planned to conduct an OA on its NASA IT Infrastructure investment in April 2014 that is to include all OMB factors. In its written comments, SSA stated it agreed with our recommendation. It also stated that since 2008, SSA has had a process to perform OAs on investments that were solely in O&M and that it recently expanded the process to include mixed life cycle IT investments that have significant systems in O&M. SSA further commented that it was in the process of performing OAs on the SSA mixed life cycle investment identified in our report and other similar agency investments, with the goal of completing these analyses by September 30, 2013. SSA’s comments are reprinted in appendix IV. VA, in its written comments, stated it agreed with our conclusions and concurred with our recommendation. It also said that it had scheduled OAs for the two investments identified in our report to begin in the second half of fiscal year 2014. VA’s comments are reprinted in appendix V. The specific comments of the two agencies that partially agreed are as follows: DOD, in its written comments, stated that it partially concurred with our recommendation. Specifically, DOD said it agreed with our recommendation that its OAs should address all OMB assessment factors and said it is establishing an OA policy in coordination with OMB. The department further agreed with our recommendation that it perform an OA on its Defense Information System Network investment. The department disagreed with our recommendation to perform an OA on its Next Generation Enterprise Network investment stating the investment is no longer in O&M and such investments, per OMB policy, do not require an OA. More specifically, as noted earlier in this report, DOD is transitioning the investment from a mature fielded system to a new service delivery model, which will become operational in 2014, and has moved the entire investment back into planning and acquisition. Nonetheless, consistent with our recommendation and as required by OMB policy, DOD plans to conduct an OA on this investment once the department begins to make it operational in 2014. DOD’s comments are reprinted in appendix VI. In its written comments—which are reprinted in appendix VII— DOE commented that it partially concurred with our recommendation. DOE stated it was not required to perform an OA on the Consolidated Infrastructure, Office Automation, and Telecommunications Program because the investment no longer exists. Specifically, DOE said it decided in 2012 to separate this large investment into smaller, more manageable pieces—referred to by DOE as deconsolidation—to better provide insight into its departmentwide infrastructure, and that since the investment no longer exists, there is no reason to perform an OA on it. Nonetheless, consistent with our recommendation, DOE added that it will ensure that OAs are conducted on the O&M components of all current major IT investments in DOE’s IT portfolio. 
DOE stated that it had already performed OAs on applicable operational components that used to comprise the Consolidated Infrastructure, Office Automation, and Telecommunications Program. For example, DOE commented that one of the investments created during deconsolidation—called Consolidated Infrastructure—had already undergone an OA, most recently in August 2013. While DOE reported this progress in its comments to us, it did not provide us with documentation demonstrating that this OA had been performed or whether it addressed all the OMB assessment factors. Consequently, we are revising our recommendation to DOE that it ensure OAs are performed on the applicable operational components that used to comprise the Consolidated Infrastructure, Office Automation, and Telecommunications Program, including the newly created Consolidated Infrastructure investment. With regard to HHS and Treasury, HHS, in comments provided via e-mail from its GAO Intake Coordinator within the Office of the Assistant Secretary for Legislation, stated that it did not have any general comments on this report, and Treasury, in its written response, said it had no comments on our report; the department’s comments are reprinted in appendix VIII. DHS and HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees; the Secretaries of the Departments of Defense, Energy, Health and Human Services, Homeland Security, Treasury, and Veterans Affairs; the Administrator of the National Aeronautics and Space Administration; and the Commissioner of the Social Security Administration. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. Our objectives were to (1) identify the federal IT O&M investments with the largest budgets, including their responsible agencies and how each investment supports its agency’s mission; (2) determine the extent to which these investments have undergone OAs; and (3) assess whether the responsible agencies’ major IT investments are in development, mixed life cycle, or steady state, and the extent to which funding for investments in O&M has been used to finance investments in development. To identify those federal IT O&M investments with the largest budgets, we used data reported to the Office of Management and Budget (OMB) as part of the budget process, and focused on the 10 largest reported budgets in O&M and the eight responsible agencies (the Departments of Defense, Energy, Homeland Security, Health and Human Services, the Treasury, and Veterans Affairs; and the National Aeronautics and Space Administration and the Social Security Administration) that operate these investments. In addition, to determine how these 10 investments support their agencies’ missions, we reviewed OMB and agency documentation (e.g., exhibit 300s, exhibit 53s) and interviewed agency officials.
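To illustrate the selection step described above, the following sketch ranks investments by reported O&M budget from exhibit 53-style records. The field names and sample values are hypothetical and are shown only to make the ranking logic concrete; they do not reflect OMB's actual data schema.

# Hypothetical sketch of the selection step: rank investments by reported O&M
# budget using exhibit 53-style records. Field names and sample values are
# illustrative only.
investments = [
    {"agency": "DOD", "name": "Investment A", "om_budget_millions": 3100.0},
    {"agency": "VA",  "name": "Investment B", "om_budget_millions": 950.0},
    {"agency": "DHS", "name": "Investment C", "om_budget_millions": 720.0},
    # ... one record per major IT investment reported to OMB
]

largest = sorted(investments, key=lambda r: r["om_budget_millions"], reverse=True)[:10]
for rank, r in enumerate(largest, start=1):
    print(f'{rank:2d}. {r["agency"]}: {r["name"]} '
          f'(${r["om_budget_millions"]:,.0f} million in reported O&M)')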
To determine the extent to which OAs were conducted to manage these investments in accordance with OMB guidance, we analyzed agency documentation and interviewed responsible agency officials to determine whether any operational analyses had been performed on these 10 investments during fiscal year 2012, because it was the last full year for completing OAs. In those cases where an OA had been performed, we compared it against OMB guidance on conducting such analyses, including the 17 factors that are to be addressed as part of such assessments, to identify any variances. Where there were variances, we reviewed agency documentation and interviewed agency officials responsible for the OA to identify the cause of their occurrence. In those instances where an analysis was not performed, we reviewed documentation and interviewed agency officials to identify why it was not done. To assess whether each of the eight agencies’ major IT investments was in development, mixed life cycle, or steady state, we analyzed agencies’ reported spending data provided to OMB as part of the budget process to determine what phase the majority of the investments were in and where the majority of funds were invested (i.e., development, mixed, or steady state). To assess the reliability of the data we analyzed, we corroborated them by interviewing investment and other agency officials to determine whether the OMB information we used was consistent with that reported by the agencies; based on this assessment, we determined the data were reliable for the purposes of this report. Further, to assess the extent to which these and other agency IT O&M investments involve development activities, we analyzed agency data and evaluated whether the eight agencies were using their O&M funds for development activities (i.e., through the reprogramming or reallocation of funds). Specifically, we compared what agencies planned to spend on development and O&M with what was reported to have been spent to identify any variances that indicated O&M funds were reprogrammed and used for development activities. In addition, we reviewed agencies’ documentation to determine if agencies had any processes in place to manage investments transitioning from development to O&M. Lastly, we reviewed agency documentation and interviewed agency IT budget and investment officials to verify whether any reprogramming occurred, its causes, and the extent to which any reprogramming was subject to management oversight. We conducted this performance audit from December 2012 to October 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact name above, individuals making contributions to this report included Gary Mountjoy (Assistant Director), Gerard Aflague, Camille Chaires, Rebecca Eyler, and Lori Martinez.
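The comparison of planned versus reported spending described in the scope and methodology above can be illustrated with a small sketch that flags investments whose O&M spending came in under plan while development spending exceeded plan, as in the CADE 2 example discussed earlier. The record structure, dollar amounts, and threshold below are assumptions for illustration, not GAO's actual analysis.

# Illustrative variance check: compare planned and reported spending to flag
# investments where O&M came in under plan while development exceeded plan,
# which could indicate funds were reallocated from O&M to development.
# Record structure and amounts (in $ millions) are illustrative only.
records = [
    {"investment": "CADE 2", "planned_om": 40.00, "reported_om": 39.99,
     "planned_dev": 10.00, "reported_dev": 10.01},
]

def possible_reprogramming(r, threshold=0.005):
    om_underrun = r["planned_om"] - r["reported_om"]
    dev_overrun = r["reported_dev"] - r["planned_dev"]
    return om_underrun > threshold and dev_overrun > threshold

flagged = [r["investment"] for r in records if possible_reprogramming(r)]
print("Investments to follow up on with agency officials:", flagged)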
Of the over $82 billion that federal agencies plan to spend on IT in fiscal year 2014, at least $59 billion is to be spent on O&M, which consists of legacy systems (i.e., steady state) and systems that are in both development and O&M (known as mixed life cycle). OMB calls for agencies to perform annual OAs, which are a key method for examining the performance of O&M investments. GAO was asked to review IT O&M investments and agency use of OAs. The objectives of this report were to, among other things, (1) identify the federal IT O&M investments with the largest budgets, including their responsible agencies and how each investment supports its agency’s mission; (2) determine the extent to which these investments have undergone OAs; and (3) assess whether the responsible agencies’ major IT investments are in development, mixed life cycle, or steady state. To do so, GAO focused on the 10 IT investments with the largest budgets in O&M and the eight responsible agencies, and assessed whether OAs were conducted on the investments. In addition, GAO evaluated what agencies spent on mixed, development, and O&M investments and whether agencies were using O&M funds for development activities. The 10 federal information technology (IT) operations and maintenance (O&M) investments with the largest budgets in fiscal year 2012—and the eight agencies that operate them—are identified by GAO in the table below. They support agencies by providing, for example, global telecommunications infrastructure and information transport services for the Department of Defense. Of the 10 investments, only the Department of Homeland Security (DHS) investment underwent an operational analysis (OA)—a key performance evaluation and oversight mechanism required by the Office of Management and Budget (OMB) to ensure O&M investments continue to meet agency needs. DHS’s OA addressed most factors that OMB calls for; it did not address three factors (e.g., comparing current cost and schedule against original estimates). DHS officials attributed the omission of these factors to the department still being in the process of implementing its new OA policy. The remaining agencies did not assess their investments, which accounted for $7.4 billion in reported O&M spending. Agency officials cited several reasons for not doing so, including relying on budget submission and related management reviews that measure performance; however, OMB has noted that these are not a substitute for OAs. Until the agencies ensure their operational investments are assessed, there is a risk that they will not know whether these multibillion dollar investments fully meet intended objectives. For the eight agencies in this review, the majority of their 401 major IT investments were mixed life cycle (i.e., having activities and systems that are in both development and O&M) with regard to total spending and number of investments. Specifically, 193 (48 percent) of the investments were mixed investments, accounting for about $18 billion (61 percent) of planned spending. As such, successful oversight of such investments should involve a combination of conducting OAs to address operational portions of an investment and establishing IT governance and program management disciplines to manage those portions under development.
GAO’s experience at the agencies and the findings of this report have identified agency inconsistencies in conducting OAs and establishing the capabilities that are key to effectively managing IT investments; accordingly, GAO has made prior recommendations to strengthen agency efforts in these areas. GAO is recommending that the seven agencies that did not perform OAs on their large IT O&M investments do so, and that DHS ensure that its OA is complete and addresses all OMB factors. Of the seven agencies, three agreed with GAO’s recommendations; two partially agreed; and two had no comments. DHS agreed with the recommendation GAO directed to it.
Nationwide, VA provides or pays for veterans’ nursing home care in three settings: CLCs, community nursing homes, and state veterans’ nursing homes. These settings vary in terms of their characteristics, as well as the cost of care that is covered for eligible veterans. VA provides nursing home care to veterans in 134 CLCs nationwide. CLCs are typically within or in close proximity to VA medical centers. VA requires CLCs to meet The Joint Commission’s long-term care standards. VA pays the full cost of care for mandatory veterans in CLCs, while discretionary veterans may be required to pay a copayment depending on their income or other factors. See Veterans Health Administration Handbook 1142.01, Criteria and Standards for VA Community Living Centers (Aug. 13, 2008). The Joint Commission is an independent organization that accredits and certifies health care organizations and programs in the United States. Medicare is the federal health insurance program for people age 65 and older, individuals under age 65 with certain disabilities, and individuals diagnosed with end-stage renal disease. Medicaid is a federal-state program that provides health care coverage to certain categories of low-income individuals. VA also pays for veterans’ care in community nursing homes, with which it contracts to provide care. In community nursing homes, VA pays for the full cost of care for mandatory veterans, while discretionary veterans may be subject to a similar copayment as in CLCs depending on their income or other factors. However, VA is generally restricted by law from paying for more than 6 months of care for discretionary veterans. These veterans must therefore have other sources of payment, and, according to VA officials, most long-stay residents may enroll in Medicaid, have private long-term care insurance, or pay for care through out-of-pocket spending. VA also pays for all or part of veterans’ care in 140 state veterans’ nursing homes nationwide. For state veterans’ nursing homes, VA pays at least a portion of the cost of providing nursing home care for eligible veterans in these homes, but does not control the admission process. Veterans are admitted based on eligibility criteria as established by state requirements. For state veterans’ nursing homes to participate in VA’s program, however, VA generally requires that at least 75 percent of the residents be veterans. In addition, VA requires state veterans’ nursing homes to be certified by VA annually, and ensures compliance with its standards through surveys and audits. Each fiscal year, VA establishes the per diem rates paid to state veterans’ nursing homes for care provided to veterans. For mandatory veterans, VA pays a higher per diem that covers the full cost of care, including medications. For discretionary veterans, VA pays the lesser of the basic per diem established by VA or one-half of the total daily cost of care. Unlike community nursing homes, there is no restriction on the number of days for which VA may pay for care for discretionary veterans in state veterans’ nursing homes. As part of VA’s support and oversight of state veterans’ nursing homes, VA medical centers of jurisdiction process and approve per diem reimbursements for the state veterans’ nursing homes located in their geographic areas. In addition to paying some or all of the cost of providing nursing home care to veterans, VA supports state veterans’ nursing homes by awarding grants to states for construction or renovation of facilities. These grants are awarded following VA’s review and approval of proposals submitted by state officials.
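As a simple illustration of the payment rule described above for discretionary veterans in state veterans' nursing homes, where VA pays the lesser of the basic per diem or one-half of the total daily cost of care, the following sketch applies that rule to hypothetical dollar amounts.

# Illustrative application of the payment rule for discretionary veterans in
# state veterans' nursing homes: VA pays the lesser of the basic per diem
# established by VA or one-half of the total daily cost of care.
# Dollar amounts are hypothetical.
def va_daily_payment_for_discretionary(basic_per_diem, total_daily_cost):
    return min(basic_per_diem, 0.5 * total_daily_cost)

# With a hypothetical basic per diem of $100 and a total daily cost of $180,
# VA would pay $90 per day; other sources cover the remainder.
print(va_daily_payment_for_discretionary(100.00, 180.00))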
In addition to per diem payments and construction grants from VA, state veterans’ nursing homes may receive payments from a number of different sources, including Medicare and Medicaid. While veterans of all ages may need VA nursing home care, the need for such care increases with age because elderly veterans are more likely to have functional or cognitive limitations that indicate a need for nursing home care. The number of elderly veterans is expected to peak in 2014 and decline thereafter (see fig. 1). However, the percentage of elderly veterans is expected to remain relatively unchanged due to a decline in the overall veteran population. Although the need for VA nursing home care remains, for over a decade VA has highlighted the potential benefits of providing veterans with alternative options for long-term care—specifically, less costly home and community-based care—in an effort to lessen the need for more costly nursing home care. For example, in its 2014 budget justification, VA proposed legislation that would authorize VA to pay for care in VA-approved medical foster homes for veterans who would otherwise need nursing home care. Functional limitations are physical problems that limit a person’s ability to perform routine daily activities, such as eating, bathing, dressing, paying bills, and preparing meals. Cognitive limitations are losses in mental acuity that may also restrict a person’s ability to perform such activities. Institutionalization in a nursing home is more common at older ages—in 2010, about 1 in 8 people age 85 or older resided in institutions, compared with 1 percent of people ages 65 to 74. See Congressional Budget Office: Rising Demand for Long-Term Services and Supports for Elderly People (Washington, D.C.: June 2013). Decisions about the nursing home setting in which a veteran will receive care are decentralized to the local level because of several factors, including variability in the choice of available settings, the nursing home care services available, and admissions policies at each type of setting. In addition, veterans’ nursing home service needs, eligibility status, and preferences about the location of care are considered in deciding the setting to choose. VA program officials told us that the decisions as to which nursing home setting would be used are decentralized to the local level because they are dependent upon several factors, including the type of setting available in each community, and availability varies considerably across locations. For example, veterans in need of nursing home care in Augusta, Maine, who wish to stay within about a 50-mile radius may have the option of receiving care from several settings, including 1 CLC, 12 community nursing homes, and 2 state veterans’ nursing homes, assuming availability of beds and resources. However, veterans in Saginaw, Michigan, wishing to stay within a similar radius may have the option of receiving care from 1 CLC and 3 community nursing homes, depending on the availability of beds and resources, since the only 2 state veterans’ nursing homes are over 100 miles away. In addition, VA officials told us that decisions about which setting is used are based upon veterans’ specific nursing home service needs, and settings varied in the type of specific services offered. For example, officials told us that in certain geographic areas CLCs provide certain services that are not available in the community, such as dementia care, behavioral health services, and care for ventilator-dependent residents.
In other areas, however, officials told us that these specialized services might not be available in a CLC and instead might be available at a community nursing home. When an individual medical center has more than one CLC in its service area, each CLC may offer a unique set of services. Therefore, according to officials, the availability of different types of services in each nursing home setting depends largely on location. VA officials further told us that admissions policies are generally the same for all CLCs, but may vary for community nursing homes and state veterans’ nursing homes. CLCs are required to provide care based on agency-wide policies; therefore, eligibility criteria and admissions policies are generally uniform across the country. An interdisciplinary team— including personnel such as a registered nurse, social worker, recreation therapist, medical provider, dietitian, and any other discipline(s) directly involved in the care of the resident—at the VA medical center of jurisdiction determines whether the veteran has a clinical need for nursing home care. This determination is to be based on a comprehensive clinical assessment of medical, nursing and therapy needs; level of functional impairment; cognitive status; rehabilitation needs; and special emphasis care needs, such as spinal cord injury or end-of-life care. Each CLC is required to use a standardized instrument that is used by all Centers for Medicare & Medicaid Services-certified nursing homes for assessment, treatment planning, and documentation and evaluation of care and services. Three key factors are considered at the time of admission: (1) the specific services to be provided; (2) whether the stay is short or long; and (3) the setting to which the resident will be discharged. Admissions policies are generally standardized for community nursing homes, but may vary for state veterans’ nursing homes based on state requirements. Community nursing homes are required to be certified for participation in the Medicare and Medicaid programs, and use the same standardized instrument for assessment, evaluation and treatment planning as CLCs. VA officials told us that community nursing homes are required to accept all eligible veterans referred by VA, subject to availability of beds and required resources. State veterans’ nursing homes are not required to provide VA with documentation of their admissions policies. Since these homes are state-owned and operated entities they are subject to admissions and eligibility criteria that vary from state to state. For example, for admission, state veterans’ nursing homes in Alabama require the veteran to have 90 days of service, at least one day of which was wartime service. In contrast, state veterans’ nursing homes in New York require the veteran to have only 30 days of active service, while homes in Florida do not require any wartime service. In addition to the type of available settings, the nursing home care services available, and admissions policies at each type of setting, VA officials told us that veterans’ eligibility status, and preferences about remaining close to home and family or willingness to travel to a nursing home setting were important considerations. For example, a discretionary veteran with a preference for staying close to home might be a candidate for admission to a community nursing home or a state veterans’ nursing home if a CLC was too far away. 
However, officials told us that because of the veteran’s discretionary status, he or she would be informed of VA’s restriction of coverage to only the first 180 days of care in the community nursing home, and staff would assist the veteran in obtaining Medicaid coverage. The veteran’s eligibility status would be less important if admission was made to the state veterans’ home since the restriction on length of coverage would not apply to this setting. However, officials emphasized that these considerations were made within the context of the availability of specific settings, specific services, and eligibility criteria and admissions policies across locations. Given the variability in these factors, veterans in two different communities with the same service needs, eligibility status and preferences might be admitted to different settings. Of the three VA nursing home settings, state veterans’ nursing homes provided care for just over half of VA’s nursing home workload in fiscal year 2012. VA’s total nursing home workload was primarily long stay that year. Most of the nursing home care that VA provided or paid for in fiscal year 2012 was for discretionary veterans and for residents ages 65 to 84 years old. In addition, veterans’ eligibility status and age varied by setting. State veterans’ nursing homes accounted for 53 percent of the workload—measured by average daily census—for which VA provided or paid for care in fiscal year 2012. CLCs provided care for 28 percent of the total workload, and community nursing homes provided care for 19 percent of the workload. (See fig. 2.) In fiscal year 2012, the most recent year for which data were available, state veterans’ nursing homes provided care to an average of 19,355 residents per day, out of the total average daily workload of 36,250 residents for whom VA provided or paid for nursing home care. This and other workload patterns we examined have been consistent in recent years based on VA data for fiscal years 2010-2012. At the network level, the proportion of nursing home workload in each setting varied widely by network, particularly the range of workload in state veterans’ nursing homes compared to the other settings. For example, state veterans’ nursing homes comprised 74 percent of total VA nursing home workload in Veterans Integrated Service Network (VISN) 16 (South Central VA Health Care Network), compared to 20 percent of the nursing home workload in VISN 21 (Sierra Pacific Network). (See app. I for more information on nursing home workload by network and setting.) Overall, long-stay care accounted for nearly 90 percent of VA’s total nursing home workload in fiscal year 2012 (31,750 of the 36,250 residents for whom VA provided or paid for care each day), and long-stay care accounted for at least three-quarters of all workload in each of VA’s three nursing home settings. (See fig. 3.) Of the three settings, state veterans’ nursing homes had the largest proportion of long-stay workload (97 percent) compared to community nursing homes (80 percent) and VA’s CLCs (76 percent). These patterns were consistent from fiscal year 2010 through fiscal year 2012. VA officials told us that they examine workload data by length of stay for planning purposes, but do not make these data available publicly. For all of the networks, the majority of workload was long-stay. 
The proportion of long-stay workload ranged from 80 percent to 95 percent, with the lowest proportion of long-stay workload in VISN 18 (VA Southwest Health Care Network), VISN 21 (Sierra Pacific Network), and VISN 22 (Desert Pacific Healthcare Network), and the highest proportion of long-stay workload in VISN 3 (VA New York/New Jersey Veterans Healthcare Network). (See app. II for information on workload by network and length of stay.) VA officials said that they thought long-stay care (measured by average daily census) accounted for a high proportion of CLC workload because CLCs provide a number of long-stay programs for veterans who are unable to access certain nursing home services in other settings. For example, according to VA officials, some CLCs offer specialized long-stay programs for residents with dementia or spinal cord injuries, and may also serve residents with mental or behavioral health conditions who are not eligible for nursing home care in other settings. Nearly two-thirds (62 percent) of VA’s nursing home care in fiscal year 2012 was provided to discretionary veterans, while just over one-third (35 percent) was provided to mandatory veterans. When examined by age group, nursing home residents 65 through 84 years of age comprised a larger proportion of the workload than other age groups, amounting to 45 percent of VA’s nursing home workload. Residents age 85 and older amounted to 37 percent, and those under 65 years of age amounted to 16 percent. (See fig. 4.) At the network level, workload by resident characteristics generally mirrored overall patterns. Workload for most networks was largely discretionary, with discretionary care comprising at least half of the workload in 20 of VA’s 21 networks. Discretionary workload ranged from 40 percent of total workload in VISN 21 (Sierra Pacific Network) to 69 percent of total workload in VISN 7 (VA Southeast Network) and VISN 3 (VA New York/New Jersey Veterans Healthcare Network). (See app. III for information on workload by network and eligibility status.) In addition, residents 65 through 84 years of age comprised similar proportions of workload in each network. Specifically, the proportion of workload for residents 65 through 84 years of age ranged from 42 percent in VISN 1 (VA New England Healthcare System), VISN 3 (VA New York/New Jersey Veterans Healthcare Network), VISN 15 (VA Heartland Network), and VISN 21 (Sierra Pacific Network) to 50 percent in VISN 6 (VA Mid- Atlantic Health Care Network) and VISN 11 (Veterans in Partnership). (See app. IV for information on workload by network and resident age.) The proportion of nursing home workload by veterans’ eligibility status— mandatory veterans compared to discretionary veterans—varied widely by setting. State veterans’ nursing homes provided the highest proportion of discretionary care compared to the other nursing home settings— 84 percent of workload in state veterans’ nursing homes was for care provided on a discretionary basis, compared to 48 percent of workload in CLCs and just 18 percent in community nursing homes. Conversely, community nursing homes provided the highest proportion of mandatory care (82 percent of workload), followed by CLCs (52 percent) and state veterans’ nursing homes (9 percent). (See table 1.) The proportion of workload by age group also varied among the three settings. Of the three settings, state veterans’ nursing homes had the highest proportion of workload for veterans age 85 and older. 
State veterans’ nursing homes also had the smallest proportion of workload for residents under age 65, who constituted less than a tenth of the workload. CLCs and community nursing homes had about the same proportions of workload for each age group, and in contrast to state veterans’ nursing homes, they had higher proportions of workload for residents under age 65 (27 and 24 percent of workload, respectively, compared to 8 percent). These patterns indicate that the characteristics of resident populations varied distinctly across settings. A higher proportion of workload in state veterans’ nursing homes was for discretionary and older veterans than in the other two settings. In addition, while workload in CLCs and community nursing homes had a similar age distribution of residents, community nursing homes had a higher proportion of workload for mandatory veterans than CLCs. VA officials told us that, at the national level, they rely on workload data for planning and budgeting purposes, especially to ensure that there are adequate resources to serve mandatory veterans. Officials said that reviewing the mix of mandatory versus discretionary veterans is particularly important since VA is required to serve the needs of mandatory veterans. VA officials told us that age data are also becoming important because VA now has a cohort of younger residents, and VA needs to be attuned to these population changes to ensure the required services are available. However, VA does not currently publish data on nursing home workload disaggregated by length of stay and resident characteristics in its budget justification. As a result, VA is not providing workload data on nursing home care provided or paid for to the maximum extent possible as encouraged by OMB guidance to justify staffing and other requirements. Congressional stakeholders therefore have incomplete information on the type of workload (long-stay or short-stay) being provided in each nursing home setting, as well as how settings differ in eligibility status and age of residents they serve. The lack of such information could hinder congressional budgeting and program decision making and oversight regarding VA’s staffing and resource requirements for providing nursing home care. Just under three-quarters of VA’s total nursing home expenditures in fiscal year 2012 were for care provided in CLCs. In addition, per diem expenditures in CLCs—i.e., the average daily cost per resident—were significantly higher than the per diem expenditures in community nursing homes and state veterans’ nursing homes. Over half of VA’s nursing home spending was for discretionary care and nearly half of spending was for veterans age 65 to 84. However, spending by eligibility status and age cohort varied by VA nursing home setting. In fiscal year 2012, VA spent more for care provided in CLCs than in the other two settings combined. Seventy-one percent ($3.5 billion) of VA’s total expenditures was spent on care provided in CLCs (see fig. 5), although CLCs accounted for 28 percent of the total workload. Conversely, VA spent 16 percent (about $800 million) for nursing home care in state veterans’ nursing homes, although state veterans’ nursing homes accounted for 53 percent of VA’s nursing home workload. The share of VA expenditures in each setting and other expenditure patterns we examine below remained relatively unchanged between fiscal years 2010-2012. Similar to workload, the proportion of VA nursing home expenditures accounted for by each setting varied widely by network. 
For example, nearly 90 percent of VA’s expenditures in VISN 5 (VA Capitol Health Care Network) were for care provided in CLCs, whereas in VISN 19 (Rocky Mountain Network) just 48 percent of expenditures were for care provided in CLCs. (See app. V for more information on nursing home expenditures by network and setting.) In addition to CLCs accounting for most of VA’s total nursing home expenditures, the per diem expenditure in CLCs was considerably higher than that for community nursing homes and state veterans’ nursing homes, as also reported in VA’s annual budget justification. Specifically, while the per diem expenditure across all settings in fiscal year 2012 was $370, the per diem expenditure in CLCs was nearly 4 times higher than for community nursing homes, and about 8 times higher than for state veterans’ nursing homes—$953 compared to $244 and $113, respectively. (See table 2.) We also found that the per diem expenditure for CLCs was substantially higher than the per diem expenditure for community nursing homes and state veterans’ nursing homes, regardless of the resident’s length of stay. Although the short-stay per diem expenditure in CLCs ($1,167) was substantially more than the per diem expenditure for long stays ($884), both were considerably higher than the per diem expenditures for community nursing homes and state veterans’ nursing homes. VA officials told us that they have not done any studies comparing the reasons for differences in per diem expenditures across settings because such expenditures are not comparable. However, VA officials provided us with a breakdown of the various components of total and per diem expenditures in CLCs. (See table 3.) VA officials indicated that “core” CLC expenditures, which account for about 40 percent of total CLC per diem expenditures, would be comparable to the care that VA pays for in community nursing homes and state veterans’ nursing homes. In addition to these core expenditures, VA’s expenditures for CLCs in fiscal year 2012 included direct care expenditures for physicians and other medical personnel staffing, indirect care expenditures for education and research, and overhead expenditures related to VA national programs, among others. In particular, VA officials noted that CLCs are often located in or within close proximity to a VA medical center, and that the facility expenditures alone for CLCs are generally higher than those of community nursing homes and state veterans’ nursing homes, which are generally stand-alone facilities. In fiscal year 2012, for example, the per diem expenditure for CLC facility costs alone was $234, roughly comparable to the entire per diem that VA paid for veterans to receive care in community nursing homes that year. VA officials also told us that while the amount of nursing staff in community nursing homes is similar to that of CLCs, the skill level may not be as high. For example, CLCs may hire more licensed and registered nurses due to the needs of the residents in CLCs. VA officials also told us that expenditures for emergency medical care would not be included in the community nursing home per diem expenditures, but that these expenditures are included for CLC residents. In addition, while the cost of routine medications is covered under the community nursing home per diem, any high-cost medications are not, although they are accounted for in the overall expenditures for operating the community nursing home program. 
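The per diem figures in table 2 reflect straightforward arithmetic: a setting's annual expenditures divided by its resident-days for the year. The minimal sketch below reproduces that arithmetic using the rounded totals cited in this report; because the inputs are rounded, the results only approximate the per diem expenditures VA reports.

# Illustrative per diem arithmetic: a setting's annual expenditures divided by
# its resident-days (average daily census x days in the fiscal year). Inputs are
# the rounded totals cited in this report, so results only approximate the per
# diem expenditures shown in table 2.
DAYS_IN_FY2012 = 366  # fiscal year 2012 included February 29, 2012

settings = {
    # setting: (annual expenditures in $ billions, average daily census)
    "CLCs":                          (3.5, 0.28 * 36250),   # about 10,150 residents per day
    "state veterans' nursing homes": (0.8, 19355),
}

for name, (spending_billions, census) in settings.items():
    per_diem = spending_billions * 1e9 / (census * DAYS_IN_FY2012)
    print(f"{name}: roughly ${per_diem:,.0f} per resident per day")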
Officials also noted that per diem expenditures for state veterans’ nursing homes only represent a portion of the total expenditures for care, with the remainder being paid for by the state and the veteran. The majority of VA’s spending for nursing home care in fiscal year 2012— $3.7 billion, or 75 percent—was on long-stay care. Long-stay care accounted for most of VA’s expenditures in each nursing home setting, and accounted for all but a small percentage of spending in state veterans’ nursing homes. (See fig. 6.) Although VA officials said they examine data on nursing home spending by length of stay for planning and budgeting purposes, VA does not include such data in its budget justification. Similarly, at the network level, the majority of expenditures for all networks were for long-stay care. The proportion of expenditures for long- stay care ranged from 62 percent for VISN 18 (VA Southwest Health Care Network) to 90 percent for VISN 3 (VA New York/New Jersey Veterans Healthcare Network). (See app. VI for information on expenditures by network and length of stay.) Overall, discretionary care accounted for just over half (52 percent or $2.5 billion) of all VA nursing home spending—a slightly lower proportion than workload, of which discretionary care comprised 62 percent. Just under half (47 percent or $2.3 billion) was spent on care for residents age 65 to 84. About one-quarter was spent on residents under age 65 and about the same percent for residents age 85 and over. (See fig. 7.) At the network level, total expenditures by eligibility status varied by network, with the proportion of spending for discretionary care ranging from 37 percent in VISN 6 (VA Mid-Atlantic Health Care Network) to 61 percent in VISN 15 (VA Heartland Network) and VISN 23 (VA Midwest Health Care Network). (See app. VII for information on expenditures by network and eligibility status.) Spending by age group did not vary as substantially between networks, however. Specifically, the proportion of spending for residents 65 through 84 ranged from 45 percent in VISN 1 (VA New England Healthcare System), VISN 15 (VA Heartland Network), and VISN 19 (Rocky Mountain Network), to 51 percent in VISN 6 (VA Mid-Atlantic Health Care Network). (See app. VIII for information on expenditures by network and resident age.) Similar to workload, spending for discretionary care varied widely by setting, with state veterans’ nursing homes having the highest proportion of their total spending (84 percent) for discretionary care compared to the other two settings (50 percent in CLCs and 18 percent in community nursing homes). (See table 4.) Also, state veterans’ nursing homes had the highest proportion (46 percent) of their total spending for residents age 85 and older, and the lowest proportion (8 percent) on residents under age 65. CLCs and community nursing homes had similar proportions of their total spending on care for residents in each age group. Similar to workload data, VA program officials told us that they rely on expenditure data by length of stay and resident characteristics for planning and budgeting purposes. This type of analysis is important given the significant differences in short- and long-stay per diem expenditures, particularly for CLCs, as well as differences in per diems that VA pays for mandatory and discretionary veterans in state veterans’ nursing homes. 
In addition, according to officials, expenditure data on community nursing homes are especially important from a program perspective since VA looks at unit costs to help in rate negotiations. However, VA does not currently include expenditure data disaggregated by length of stay and resident characteristics in its budget justification, and therefore does not provide information on unit costs to the maximum extent possible as encouraged by OMB to justify staffing and other requirements. As a result, congressional stakeholders have incomplete information on the budget that is approved for VA nursing home care, including the proportion of expenditures that is allocated for long-stay and short-stay care, as well as expenditures by resident characteristics. The lack of such information could hinder congressional budgeting and program oversight regarding VA’s staffing and resource requirements for providing nursing home care. VA now has key data on workload and expenditures for its three nursing home settings that were lacking in the past, and VA officials told us that they use these data for budgeting and planning purposes. Our analysis of these data show that the nursing home care that VA provides or pays for is primarily for long-stay care of 90 days or more for residents with chronic physical and mental limitations across all three nursing home settings, rather than short-stay care for residents with postacute care needs. Most of VA’s nursing home workload is for discretionary care, rather than mandatory care, and more care is provided for residents 65 to 84 years of age than for other age groups, though these patterns vary by setting. As VA determines budget estimates and plans for future care needs, these data provide a foundation for understanding the type of care provided, the characteristics of the residents receiving it, and differences among the three settings. We believe that having and using the key workload and expenditure data that we analyzed in this report provides VA with more complete data to better inform its budget estimates and conduct program oversight than in the past. VA is to be commended for collecting and using the information to improve its decision making. However, VA only includes data on total nursing home workload, total expenditures, and per diem expenditures by the three nursing home settings in its budget justification and does not include workload or expenditures disaggregated by length of stay or resident characteristics to the maximum extent possible to justify staffing and other requirements. As a result, Congress does not have complete nursing home data on workload and expenditures by the three settings. The lack of such information could hinder congressional decision making and oversight regarding the staffing and resource needs for VA nursing home care, which accounts for a significant portion of VA’s health care budget and serves a vulnerable population. To provide more complete data for Congress, we recommend that the Secretary of Veterans Affairs supplement nursing home workload and expenditure data currently included in VA’s budget justification with the following information: Average daily census by length of stay and resident characteristics, including veterans’ eligibility status and age. Total expenditures and per diem expenditures by length of stay and resident characteristics, including veterans’ eligibility status and age. We provided a draft of this report to VA for comment.
In its written comments—reproduced in appendix IX—VA concurred with our recommendation and stated that it will provide supplemental data on both nursing home workload and expenditures by length of stay and resident characteristics upon release of its fiscal year 2015 budget. VA stated that it would provide data for state veterans’ nursing homes to the extent the data are available. We are sending copies of this report to the Secretary of Veterans Affairs, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Vijay D’Souza at (202) 512-7114, or dsouzav@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. In addition to the contact named above, James C. Musselwhite, Assistant Director; Iola D’Souza; Linda Galib; Drew Long; and Hemi Tewarson made key contributions to this report. Veterans’ Health Care Budget: Improvements Made, but Additional Actions Needed to Address Problems Related to Estimates Supporting President’s Request, GAO-13-715 (Washington, D.C.: Aug. 8, 2013). Veterans’ Health Care: Improvements Needed to Ensure That Budget Estimates Are Reliable and That Spending for Facility Maintenance Is Consistent with Priorities, GAO-13-220 (Washington, D.C.: Feb. 22, 2013). Veterans’ Health Care Budget: Better Labeling of Services and More Detailed Information Could Improve the Congressional Budget Justification, GAO-12-908 (Washington, D.C.: Sept. 18, 2012). Veterans’ Health Care Budget Estimate: Changes Were Made in Developing the President’s Budget Request for Fiscal Years 2012 and 2013, GAO-11-622 (Washington, D.C.: Jun. 14, 2011). Veterans’ Health Care: VA Uses a Projection Model to Develop Most of Its Health Care Budget Estimate to Inform the President’s Budget Request, GAO-11-205 (Washington, D.C.: Jan. 31, 2011). VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement, GAO-09-145 (Washington, D.C.: Jan. 23, 2009). VA Long-Term Care: Data Gaps Impede Strategic Planning for and Oversight of State Veterans’ Nursing Homes, GAO-06-264 (Washington, D.C.: March 31, 2006). VA Long-Term Care: Trends and Planning Challenges in Providing Nursing Home Care to Veterans, GAO-06-333T (Washington, D.C.: Jan. 9, 2006). VA Long-Term Care: Oversight of Nursing Home Program Impeded by Data Gaps, GAO-05-65 (Washington, D.C.: Nov. 10, 2004).
In fiscal year 2012, about $4.9 billion of VA’s $54 billion health care services budget was spent on nursing home care. To inform Congress of its budgeting priorities, VA prepares a budget justification, which is reviewed by OMB, that includes data on nursing home workload and expenditures in the three settings. VA also collects data on length of stay (long- and short-stay) and resident characteristics, including eligibility status, as VA is required to pay for mandatory veterans’ nursing home care and may pay for discretionary care as resources permit. These data are important for Congress to understand how funding is allocated for long- and short-stay care and for residents in each setting. GAO was asked to examine VA’s nursing home program. Among other things, GAO examined (1) VA’s nursing home workload in each setting, by length of stay and resident characteristics; and (2) VA’s expenditures for nursing home care in each setting, by length of stay and resident characteristics. GAO analyzed VA nursing home workload and expenditure data, including fiscal year 2012, by setting, length of stay, and resident characteristics; and interviewed VA officials. In fiscal year 2012, the Department of Veterans Affairs' (VA) nursing home workload--the average number of veterans receiving nursing home care per day--was 36,250 across all of the three nursing home settings in which VA provided or paid for veterans' nursing home care. The three settings include Community Living Centers (CLCs), which are VA-owned and operated; community nursing homes with which VA contracts to provide care for veterans; and state veterans' nursing homes, which are owned and operated by states. Over half (53 percent) of this workload was provided in state veterans' nursing homes, 28 percent in CLCs, and 19 percent in community nursing homes. Nearly 90 percent of total workload was long-stay (91 days or more for residents with chronic conditions), and at least 75 percent of care provided in each of VA's three settings was long-stay. In addition, 62 percent of VA's total workload was provided to discretionary veterans (those veterans without certain levels of service-connected disabilities). In fiscal year 2012, VA spent $3.5 billion (71 percent) of its total nursing home expenditures on care provided in CLCs, 16 percent in state veterans' nursing homes and 13 percent in community nursing homes. Seventy-five percent of total spending was for long-stay care, and at least 70 percent of spending in each setting was for long-stay care. About half of total VA spending was for discretionary veterans. GAO found that VA does not provide nursing home workload and expenditure data by length of stay and resident characteristics in its budget justification, although the Office of Management and Budget (OMB) encourages agencies to provide such information to the maximum extent possible to justify staffing and other requirements and improve congressional decision making. As a result, VA does not provide complete information, which could hinder Congress' budgeting and oversight of VA's nursing home staffing and resource requirements. To enhance congressional oversight of VA's nursing home program, GAO recommends that VA supplement data currently included in its budget justification with workload and expenditures by length of stay and resident characteristics. VA concurred with GAO's recommendation and stated it will provide these data upon release of its fiscal year 2015 budget.
As I previously stated, and as we have reported for several years, DOD faces a range of challenges that are complex, long-standing, pervasive, and deeply rooted in virtually all major business operations throughout the department. As I testified last March and as discussed in our latest financial audit report, DOD’s financial management deficiencies, taken together, continue to represent the single largest obstacle to achieving an unqualified (clean) audit opinion on the U.S. government’s consolidated financial statements. While it is important to note that some DOD organizations, such as the Defense Finance and Accounting Service (DFAS), the Defense Contract Audit Agency, and the Office of the Inspector General, have clean audit opinions for fiscal year 2004, significant DOD components do not. To date, none of the military services has passed the test of an independent financial audit because of pervasive weaknesses in internal control and processes and fundamentally flawed business systems. Moreover, the lack of adequate transparency and appropriate accountability across DOD’s major business areas results in billions of dollars of wasted resources annually at a time of growing fiscal constraints. In identifying improved financial performance as one of its five governmentwide initiatives, the President’s Management Agenda recognized that obtaining an unqualified financial audit opinion is a basic prescription for any well-managed organization. At the same time, it recognized that without sound internal control and accurate and timely financial and performance information, it is not possible to accomplish the President’s agenda and secure the best performance and highest measure of accountability for the American people. The Joint Financial Management Improvement Program (JFMIP) principals have defined certain measures, in addition to receiving an unqualified financial statement audit opinion, for achieving financial management success. These additional measures include (1) being able to routinely provide timely, accurate, and useful financial and performance information, (2) having no material internal control weaknesses or material noncompliance with laws and regulations, and (3) meeting the requirements of the Federal Financial Management Improvement Act of 1996 (FFMIA). Unfortunately, DOD does not meet any of these conditions. For example, for fiscal year 2004, the DOD Inspector General issued a disclaimer of opinion on DOD’s financial statements, citing 11 material weaknesses in internal control and noncompliance with FFMIA requirements. Recent audits and investigations by GAO and DOD auditors continue to confirm the existence of pervasive weaknesses in DOD’s financial management and related business processes and systems. These problems have (1) resulted in a lack of reliable information needed to make sound decisions and report on the status of DOD activities, including accountability of assets, through financial and other reports to Congress and DOD decision makers, (2) hindered its operational efficiency, (3) adversely affected mission performance, and (4) left the department vulnerable to fraud, waste, and abuse, of which I have a few examples. 782 of the 829 mobilized Army National Guard and Reserve soldiers from 14 case study units we reviewed had at least one pay problem—including overpayments, underpayments, and late payments—associated with their mobilization.
- Of the 829 mobilized Army National Guard and Reserve soldiers from 14 case study units we reviewed, 782 had at least one pay problem—including overpayments, underpayments, and late payments—associated with their mobilization. DOD's inability to provide timely and accurate payments to these soldiers, many of whom risked their lives in dangerous combat missions in Iraq or Afghanistan, distracted them from their missions, imposed financial hardships on the soldiers and their families, and negatively affected retention. (GAO-04-89, Nov. 13, 2003, and GAO-04-911, Aug. 20, 2004)

- DOD incurred substantial logistical support problems as a result of weak distribution and accountability processes and controls over supplies and equipment shipments in support of Operation Iraqi Freedom, similar to those encountered during the prior Gulf War. These weaknesses resulted in (1) supply shortages, (2) backlogs of materials delivered in-theater but not delivered to the requesting activity, (3) a discrepancy of $1.2 billion between the amount of materiel shipped and that acknowledged by the activity as received, (4) cannibalization of vehicles, and (5) duplicate supply requisitions. (GAO-04-305R, Dec. 18, 2003)

- Inadequate asset accountability also resulted in DOD's inability to locate and remove from its inventory over 250,000 defective chemical and biological protective garments known as Battle Dress Overgarments (BDOs)—the predecessor of the new Joint Service Lightweight Integrated Suit Technology (JSLIST). Subsequently, we found that DOD had sold many of these defective suits to the public, including 379 that we purchased in an undercover operation. In addition, DOD may have issued over 4,700 of the defective BDO suits to local law enforcement agencies. Although local law enforcement agencies are most likely to be the first responders to a terrorist attack, DOD failed to inform these agencies that using these BDO suits could result in death or serious injury. (GAO-04-15NI, Nov. 19, 2003)

- Ineffective controls over Navy foreign military sales using blanket purchase orders placed classified and controlled spare parts at risk of being shipped to foreign countries that may not be eligible to receive them. For example, we identified instances in which Navy country managers (1) overrode the system to release classified parts under blanket purchase orders without filing required documentation justifying the release and (2) substituted classified parts for parts ordered under blanket purchase orders, bypassing the control-edit function of the system designed to check a country's eligibility to receive the parts. (GAO-04-507, June 25, 2004)

- DOD and congressional decision makers lack reliable data upon which to base sourcing decisions due to recurring weaknesses in DOD data-gathering, reporting, and financial systems. As in the past, we have identified significant errors and omissions in the data submitted to Congress on the amount of each military service's depot maintenance work outsourced or performed in-house. As a result, both DOD and Congress lack assurances that the dollar amounts of public-private sector workloads reported by the military services are reliable. (GAO-04-871, Sept. 29, 2004)
- Ineffective controls over DOD's centrally billed travel accounts led to millions of dollars wasted on unused airline tickets, reimbursements to travelers for improper and potentially fraudulent airline ticket claims, and issuance of airline tickets based on invalid travel orders. For example, we identified 58,000 airline tickets—primarily purchased in fiscal years 2001 and 2002—with a residual value of more than $21 million that were unused and not refunded as of October 2003. On the basis of limited airline data, we determined that since 1997, the potential magnitude of DOD's unused tickets could be at least $115 million. (GAO-04-825T, June 9, 2004, and GAO-04-398, Mar. 31, 2004)

- The Navy's lack of detailed cost information hinders its ability to monitor programs and analyze the cost of its activities. For example, we found that the Navy lacked the detailed cost and inventory data needed to assess its needs, evaluate spending patterns, and leverage its telecommunications buying power. As a result, we found that at the sites reviewed, the Navy paid for telecommunications services it no longer required, paid too much for services it used, and paid for potentially fraudulent or abusive long-distance charges. For instance, we found that DOD paid over $5,000 in charges for one card that was used to place 189 calls in one 24-hour period from 12 different cities to 12 different countries. (GAO-04-671, June 14, 2004)

- DOD continues to use overly optimistic planning assumptions to estimate its annual budget request. These assumptions are reflected in its Future Years Defense Program (FYDP), which reports projected spending for the current budget year and at least 4 succeeding years. Such overly optimistic assumptions limit the visibility of costs projected throughout the FYDP period and beyond. As a result, DOD has too many programs for the available dollars, which often leads to program instability, costly program stretch-outs, and program termination. For example, in January 2003, we reported that the estimated costs of developing eight major weapons systems had increased from about $47 billion in fiscal year 1998 to about $72 billion by fiscal year 2003. In addition, in September 2004 the Congressional Budget Office projected that if the costs of weapons programs and certain other activities continued to grow as they have historically rather than as DOD currently projects, executing today's defense plans would require spending an average of $498 billion a year through 2009. Without realistic projections, Congress and DOD will not have visibility over the full range of budget options available to achieve defense goals. (GAO-03-98, Jan. 2003, and GAO-04-514, May 7, 2004)

- DOD did not know the size of its security clearance backlog at the end of September 2003 and had not estimated this backlog since January 2000. Using September 2003 data, we estimated that DOD had a backlog of roughly 360,000 investigative and adjudicative cases, but the actual backlog size is uncertain. DOD's failure to eliminate and accurately assess the size of its backlog may have adverse effects. For example, delays in updating overdue clearances for personnel doing classified work may increase national security risks, and slowness in issuing new clearances can increase the costs of doing classified government work. (GAO-04-344, Feb. 9, 2004)

These examples clearly demonstrate not only the severity of DOD's current problems, but also the importance of reforming the department's business operations to more effectively support DOD's core mission, to improve the economy and efficiency of its operations, and to provide for transparency and accountability to Congress and American taxpayers. The underlying causes of DOD's financial management and related business process and system weaknesses are generally the same ones I have outlined in my prior testimonies before this Subcommittee over the last 3 years.
Unfortunately, DOD has made little progress in addressing these fundamental issues and thus is at high risk that its current major reform initiatives will fail. For each of the problems I cited previously, we found that one or more of these long-standing causes were contributing factors. Over the years, the department has undertaken many well-intended initiatives to transform business operations departmentwide and improve the reliability of information for decision making and reporting. However, many of these efforts resulted in costly failures because the department did not fully address the following four underlying causes of transformation challenges.

DOD has not routinely assigned accountability for performance to specific organizations or individuals who have sufficient authority, resource control, and continuity in their position to accomplish desired goals. In addition, top management has not had a proactive, consistent, and continuing role in integrating daily operations with business transformation-related performance goals. It is imperative that major improvement initiatives have the direct, active support and involvement of the Secretary and Deputy Secretary of Defense to ensure that daily activities throughout the department remain focused on achieving shared, agencywide outcomes and success. However, sustaining top management continuity and commitment to performance goals, long-term planning, and follow-through that will necessarily span several years is particularly challenging for DOD. For example, in fiscal year 2004, DOD's Comptroller, Deputy Under Secretary of Defense for Management Reform, and Deputy Chief Financial Officer—to whom the Secretary delegated the leadership role for key transformation initiatives—all resigned from the department within a 5-month period. Moreover, the department's primary transformation program—the Business Management Modernization Program (BMMP)—has had three different directors responsible for leading the program since Secretary Rumsfeld initiated it a little over 3 years ago. Given the importance of DOD's business transformation effort, it is imperative that it receive the sustained, focused departmentwide leadership needed to improve the economy, efficiency, and effectiveness of DOD's business operations. As I will discuss in more detail later, we continue to advocate the establishment of a new executive position to provide strong and sustained leadership to the entire spectrum of DOD business transformation initiatives.

The department has acknowledged that it confronts decades-old problems deeply grounded in the bureaucratic history and operating practices of a complex, multifaceted organization. Many of DOD's current operating practices and systems were developed piecemeal to accommodate different organizations, each with its own policies and procedures. As we have reported over the last 3 years, DOD has continued to use a stovepiped approach to develop and fund its business system investments. The existing systems environment evolved over time as DOD components—each of which receives its own system funding and follows decentralized acquisition and investment practices—developed narrowly focused, parochial solutions to their business problems. While the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 more clearly defines the roles and responsibilities of business system investment approval authorities, control over the budgeting for and execution of funding for system investment activities remains at the component level.
As I will discuss later, unless business systems modernization money is appropriated to those who are responsible and accountable for reform, DOD is at risk of continuing its current stovepiped approach to developing and funding system investments and failing to fundamentally improve its business operations. DOD's ability to address its current "business-as-usual" approach to business system investments is further hampered by its lack of an effective methodology and process for obtaining a complete picture of its current business systems environment—a condition we first highlighted in 1997. In September 2004, DOD reported that the department had identified over 4,000 business systems—up from the 1,731 the department reported in October 2002. Unfortunately, due to its lack of an effective methodology and process for identifying business systems, including a clear definition of what constitutes a business system, DOD continues to lack assurance that its systems inventory is reliable. This lack of visibility over business systems in use throughout the department hinders DOD's ability to identify and eliminate duplicate and nonintegrated systems and transition to an integrated systems environment.

At a programmatic level, the lack of clear, comprehensive, and integrated performance goals and measures has handicapped DOD's past reform efforts. As a result, DOD managers lacked straightforward roadmaps showing how their work contributed to attaining the department's strategic goals, and they risked operating autonomously rather than collectively. As of March 2004, DOD had formulated departmentwide performance goals and measures and continues to refine and align them with the outcomes described in its strategic plan—the September 2001 Quadrennial Defense Review (QDR). The QDR outlined a new risk management framework consisting of four dimensions of risk—force management, operational, future challenges, and institutional—to use in considering trade-offs among defense objectives and resource constraints. According to DOD's Fiscal Year 2003 Annual Report to the President and the Congress, these risk areas are to form the basis for DOD's annual performance goals. They will be used to track performance results and will be linked to planning and resource decisions. As of October 2004, the department was still in the process of implementing this approach departmentwide. However, it remains unclear how DOD will use this approach to measure progress in achieving business reform.

As we reported in May 2004, DOD had yet to establish measurable, results-oriented goals for BMMP. BMMP is the department's major business transformation initiative encompassing defense policies, processes, people, and systems that guide, perform, or support all aspects of business management, including development and implementation of the business enterprise architecture (BEA). A key element of any major program is its ability to establish clearly defined goals and performance measures to monitor and report its progress to management. The lack of BMMP performance measures, such as explicitly defined measures for evaluating the quality, content, and utility of the architecture and its subsequent major updates, has made it difficult to evaluate and track specific program progress, outcomes, and results. Given that DOD had reported total obligations for BMMP of over $203 million since architecture development began 3 years ago, with few tangible improvements in DOD operations, this is a serious performance management weakness.
Further, DOD has not established measurable criteria that decision makers must consider for its revised weapons system acquisition policy, issued in May 2003. The revisions make major improvements to DOD acquisition policy by adopting knowledge-based, evolutionary practices used by successful commercial companies. However, DOD has not provided the necessary controls to ensure such an approach is followed. For example, the policy does not establish measures to gauge design and manufacturing knowledge at critical junctures in the product development process, allowing significant unknowns to be judged as acceptable risks. Without controls in the form of measurable criteria that decision makers must consider, DOD runs the risk of making decisions based on overly optimistic assumptions. The final underlying cause of the department’s long-standing inability to carry out needed fundamental reform has been the lack of a clear linkage of institutional, unit, and individual results-oriented goals, performance measures, and reward mechanisms for making more than incremental changes to existing “business-as-usual” operations, systems, and organizational structures. Traditionally, DOD has focused on justifying its need for more funding rather than on the outcomes its programs have produced. DOD has historically measured its performance by resource components, such as the amount of money spent, people employed, or number of tasks completed. Incentives for its decision makers to implement behavioral changes have been minimal or nonexistent. The lack of incentives to change is evident in the business systems modernization area. We have identified numerous business system modernization efforts that were not economically justified on the basis of cost, benefits, and risk; took years longer than planned; and fell far short of delivering planned or needed capabilities. Despite this track record, DOD continues to invest billions in business systems while at the same time it lacks the effective management and oversight needed to achieve real results. Without appropriate incentives and accountability mechanisms, as well as more centralized control of systems modernization funding, DOD components will continue to develop duplicative and nonintegrated systems that are inconsistent with the Secretary’s vision for reform. To effect real change, actions are needed to (1) develop a well-defined blueprint for change, such as an enterprise architecture, that provides a common framework of reference for making informed system investment decisions, (2) adopt an investment decision-making model that uses the architecture to break down parochialism and reward behaviors that meet DOD-wide goals, (3) establish incentives that motivate decision makers to initiate and implement efforts that are consistent with better architecture and program outcomes, including saying “no” or pulling the plug early on a system or program that is failing, (4) address human capital issues, such as the adequacy of staffing level, skills, and experience available to achieve the institutional, unit, and individual objectives and expectations, and (5) facilitate a congressional focus on results-oriented management, particularly with respect to resource allocation decisions. The success of DOD’s current broad-based business reform initiatives is threatened, as prior initiatives were, by DOD’s continued failure to incorporate key elements that are critical to achieve successful reform. 
Any efforts at reform must include (1) a comprehensive, integrated business transformation plan, (2) personnel with the necessary skills, experience, responsibility, and authority to implement the plan, (3) effective processes and related tools, such as a BEA and business system investment decision-making controls, and (4) results-oriented performance measures that link institutional, unit, and individual personnel goals, measures, and expectations. Today, I would like to discuss three of those broad-based initiatives. In addition, I will briefly highlight some of the several smaller, more narrowly focused initiatives DOD has started in recent years that, through incorporation of many of the key elements, have been successful in making tangible improvements in DOD operations. Furthermore, I would like to reiterate two suggestions for legislative consideration that I believe are essential in order for DOD to be successful in its overall business transformation effort.

Keys to Successful Reform

As I have previously testified, and as illustrated by the success of the more narrowly defined DOD initiatives I will discuss later, there are several key elements that collectively would enable the department to effectively address the underlying causes of its long-standing business management problems. These elements, which we believe are key to any successful approach to transforming the department's business operations, include the following:

- addressing the department's financial management and related business operational challenges as part of a comprehensive, integrated, DOD-wide strategic plan for business reform;
- providing for sustained, committed, and focused leadership by top management, including but not limited to the Secretary of Defense;
- establishing resource control over business systems investments;
- establishing clear lines of responsibility, authority, and accountability;
- incorporating results-oriented performance measures that link key institutional, unit, and individual personnel transformation objectives and expectations, and monitoring progress;
- addressing human capital issues, such as the adequacy of staff levels, skills, and experience available to achieve the institutional, unit, and individual personnel performance goals and expectations;
- providing appropriate incentives or consequences for action or inaction;
- establishing an enterprise architecture to guide and direct business systems modernization investments; and
- ensuring effective oversight and monitoring.

These elements, which should not be viewed as independent actions but rather as a set of interrelated and interdependent actions, are reflected in the recommendations we have made to DOD over the last 3 years and are consistent with those actions discussed in the department's April 2001 financial management transformation report. The degree to which DOD incorporates them into its current reform efforts—both long and short term—will be a deciding factor in whether these efforts are successful. Thus far, the department's progress in implementing our recommendations pertaining to its broad-based initiatives has been slow. Further, while the new legislation on business systems oversight directs DOD to take action on some of these elements, we have not yet seen a comprehensive, cohesive, and integrated strategy that details how some of the ongoing efforts are being integrated. For example, we have not seen how the department plans to integrate its objective of obtaining an unqualified audit opinion in fiscal year 2007 with the BMMP.
It appears as if these two efforts are being conducted without the degree of coordination that would generally be expected between efforts that share similar objectives. The first broad-based administrative initiative is effective implementation of the National Security Personnel System (NSPS). In November 2003, Congress authorized the Secretary of Defense to establish a new human capital management system—NSPS—for its civilian employees, which is modern, flexible, and consistent with the merit principles outlined by the act. This legislation requires DOD to develop a personnel system that is consistent with many of the practices that we have identified as elements of an effective human capital management system, including a modern and results-oriented performance management system. For several years, we have reported that many of DOD’s business process and control weaknesses were attributable in part to human capital issues. For example, GAO audits of DOD’s Army Reserve and National Guard payroll and the centrally billed travel card programs further highlight the adverse impact that outdated and inadequate human capital practices, such as insufficient staffing, training, and monitoring of performance, continue to have on DOD business operations. If properly developed and implemented, NSPS could result in significant improvements to DOD’s business operations. I strongly support the need for modernizing federal human capital policies both within DOD and for the entire federal government. Since April 2003 I have testified on four different occasions, including before this Subcommittee, on NSPS and related DOD human capital issues. In the near future, we will issue a summary of the forum GAO and the National Commission on the Public Service Implementation Initiative cohosted to advance the discussion of how human capital reform should proceed. Participants discussed whether there should be an overall governmentwide framework for human capital reform and, if yes, what such a framework should include. While the forum neither sought nor achieved consensus on all of the issues identified in the discussion, there was broad agreement that there should be a governmentwide framework to guide human capital reform built on a set of timeless beliefs and boundaries. Beliefs entail the fundamental principles that should govern all approaches to human capital reform and should not be altered or waived by agencies seeking human capital authorities. Boundaries include the criteria and processes that establish the checks and limitations when agencies seek and implement human capital authorities. A modern, effective, credible, and integrated performance management system can help improve DOD's business operations. Specifically, such a performance management system aligns individual performance expectations with organizational goals and thus defines responsibility and assures accountability for achieving them. In addition, a performance management system can help manage and direct a transformation process by linking performance expectations to an employee’s role in the process. Individual performance and contributions are evaluated on competencies such as change management. Leaders, managers, and employees who demonstrate these competencies are rewarded for their success in contributing to the achievement of the transformation process. There are significant opportunities to use the performance management system to explicitly link senior executive expectations for performance to results-oriented goals. 
There is a need to hold senior executives accountable for demonstrating competencies in leading and facilitating change and fostering collaboration both within and across organizational boundaries to achieve results. Setting and meeting expectations such as these will be critical to achieving needed transformation changes. Recently, Congress established a new performance-based pay system for members of the Senior Executive Service (SES) that is designed to provide a clear and direct link between SES performance and pay. An agency can raise the pay cap for its senior executives if the agency's performance management system makes meaningful distinctions based on relative performance. This visible step in linking pay to the achievement of measurable performance goals within a context of a credible human capital system that includes adequate safeguards is helpful in constructing a results-oriented culture.

In my March 2004 testimony on DOD's financial management and related business management transformation efforts, I stated that as DOD develops regulations to implement its new human capital management system, the department needs to do the following:

- Ensure the active involvement of the Office of Personnel Management in the development process, given the significant implications that changes in DOD regulations may have on governmentwide human capital policies. In this regard, the Office of Personnel Management has assigned a senior representative to support and advise DOD on the development of jointly prescribed NSPS regulations and the implementation of NSPS.

- Ensure the involvement of civilian employees and unions in the design and development of a new personnel system. The law calls for DOD to involve employees, especially in the design of its new performance management system. Involving employees in planning helps to develop agency goals and objectives that incorporate insights about operations from a front-line perspective. It can also serve to increase employees' understanding and acceptance of organizational goals and improve motivation and morale. In this regard, DOD has launched a new Web site to educate its employees about the new National Security Personnel System. In addition, DOD leadership has indicated that it has sought input from civilian employees through town hall meetings, focus groups, and discussions with union leaders.

- Use a phased approach to implement the system, recognizing that different parts of the organization will have different levels of readiness and different capabilities to implement new authorities. A phased approach allows for learning so that appropriate adjustments and midcourse corrections can be made before the regulations are fully implemented departmentwide. In this regard, DOD had initially indicated that it planned to implement its new human capital system for 300,000 civilian employees by October 1, 2004. DOD has since indicated that it has adjusted its timelines to reflect a more cautious, deliberative approach involving more stakeholders. DOD has now indicated that it plans to phase in its new human capital system beginning in July 2005.

We are currently evaluating DOD's NSPS design process and look forward to sharing our findings with Congress upon completion of our review.
While BMMP is vital to the department's efforts to transform its business operations, DOD has not effectively addressed many of the impediments to successful reform that I mentioned earlier, including (1) a lack of sustained, effective, and focused leadership, (2) a lack of results-oriented goals and performance measures, and (3) long-standing cultural resistance and parochialism. As a result, the program has yielded few, if any, tangible improvements in DOD's business operations. We have made numerous recommendations to DOD that center on the need to incorporate the key elements to successful reform, which I discussed previously, into the program. In May 2004 we reported that no significant changes had been made to the architecture since the initial version was released. Further, we reported that DOD had not yet adopted key architecture management best practices, such as assigning accountability and responsibility for directing, overseeing, and approving the architecture and explicitly defining performance metrics to evaluate the architecture's quality, content, and utility. For these and other reasons, DOD's verification and validation contractor concluded that this latest version of the architecture retained most of the critical problems of the initial version, such as failing to define how the architecture should be used by the military services and other DOD components in making acquisition and portfolio investment decisions. I will now expand on the problems facing BMMP.

The purpose of BMMP is to provide world-class mission support to the war fighter through transformation of DOD's business processes and systems. A key element of BMMP is the development and implementation of a well-defined BEA. Properly developed and implemented, a BEA can provide assurance that the department invests in integrated enterprisewide business solutions and, along with effective project management and resource controls, it can be instrumental in developing corporatewide solutions and moving resources away from nonintegrated business system development efforts. As we reported in July 2003, DOD had developed an initial version of the BEA and had expended tremendous effort and resources in doing so. However, we also reported that substantial work remained before the architecture would be sufficiently defined to have a tangible impact on improving DOD's overall business operations. In May 2004, we reported that after about 3 years of effort and over $203 million in reported obligations for BMMP operations, the BEA's content and DOD's approach to investing billions of dollars annually in existing and new systems had not changed significantly.

Under a provision in the recently enacted Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, DOD must develop an enterprise architecture to cover all defense business systems and related business functions and activities that is sufficiently defined to effectively guide, constrain, and permit implementation of a corporatewide solution and is consistent with the policies and procedures established by the Office of Management and Budget. Further, the act requires the development of a transition plan that includes not only an acquisition strategy for new systems, but also a listing of the termination dates of current legacy systems that will not be part of the corporatewide solution, as well as a listing of legacy systems that will be modified to become part of the corporatewide solution for addressing DOD's business management deficiencies.
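The transition plan described above is, in effect, a catalog of systems and their dispositions: new acquisitions, legacy systems to be terminated (with termination dates), and legacy systems to be modified. The sketch below is a minimal illustration only; the field names and entries are hypothetical and are not the act's or DOD's actual format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransitionPlanEntry:
    system_name: str
    disposition: str                        # "new acquisition", "terminate", or "modify"
    termination_date: Optional[str] = None  # populated when the disposition is "terminate"

# Hypothetical entries covering the three dispositions the act describes.
plan = [
    TransitionPlanEntry("legacy payroll system", "terminate", termination_date="2007-09-30"),
    TransitionPlanEntry("legacy supply system", "modify"),
    TransitionPlanEntry("corporatewide accounting solution", "new acquisition"),
]

# A simple query: which legacy systems are slated for termination, and when?
for entry in plan:
    if entry.disposition == "terminate":
        print(entry.system_name, entry.termination_date)
```

Even a simple catalog of this kind makes the oversight question concrete: every funded system should appear in it with an explicit disposition.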
Transforming DOD's business operations and making them more efficient through the elimination of nonintegrated and noncompliant legacy systems would free up resources that could be used to support the department's core mission, enhance readiness, and improve the quality of life for our troops and their families. I cannot overemphasize the degree of difficulty DOD faces in developing and implementing a well-defined architecture to provide the foundation that will guide its overall business transformation. The department's business transformation depends on its ability to develop and implement business systems that provide corporate solutions. Successful implementation of corporate solutions through adherence to a well-defined enterprise architecture and effective project management and fund control would go a long way toward precluding the continued proliferation of duplicative, stovepiped systems and reduce spending on multiple systems that are supposed to perform the same function. Without such an architecture and controls, we have continued to see DOD develop systems that are not designed to solve corporatewide problems.

For example, the Defense Logistics Agency's (DLA) Business Systems Modernization (BSM) and the Army's Logistics Modernization Program (LMP), both of which were initiated prior to commencement of the BEA effort, were not directed toward a corporate solution to the department's long-standing weaknesses in inventory and logistics management, such as the lack of total asset visibility. Rather, both projects focused on their respective entity's inventory and logistics management operations. As a result, neither project will provide asset visibility beyond the stovepiped operation for which it was designed. For example, BSM is only designed to provide visibility over the items within the DLA environment—something DLA has stated already exists within its current system environment. As a result, DOD continues to lack the capability to identify the exact location of items, such as defective chemical and biological protective suits, that were distributed to end users, such as the military services, or sold to the public. The department would have to resort to inefficient and ineffective data calls, as it has done in the past, to identify and withdraw defective items from use.

Another major impediment to the successful transformation of DOD's business systems is funds control. DOD invests billions of dollars annually to operate, maintain, and modernize its business systems. For fiscal year 2004, the department requested approximately $28 billion in IT funding to support a wide range of military operations as well as DOD business systems operations. Of this amount, DOD reported that approximately $18.8 billion—$5.8 billion for business systems and $13 billion for business systems infrastructure—relates to the operation, maintenance, and modernization of the department's reported thousands of business systems. The $18.8 billion is spread across the military services and defense agencies, with each receiving and controlling its own funding for IT investments. Although the recently enacted Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 more clearly defines the roles and responsibilities of business system investment approval authorities, control over the budgeting for and execution of funding for system investment activities remains at the component level.
Under a provision in the act, effective October 1, 2005, DOD must identify each defense system for which funding is proposed in its budget, including the identification of all funds, by appropriation, for current services (to operate and maintain the system) and modernization. Further, DOD may not obligate funds for a defense business system modernization that will have a total cost in excess of $1 million unless specific conditions called for in the act are met. The Defense Business Systems Management Committee, also required by the act to be established, must then approve the designated approval authorities’ certification before funds can be obligated. Further, obligation of funds for modernization programs without certification and approval by the Defense Business Systems Management Committee is deemed a violation of the Anti-Deficiency Act. Although proper implementation of this legislation should strengthen oversight of DOD’s systems modernization efforts, it is questionable whether DOD has developed or improved its processes and procedures to identify and control system investments occurring at the component level. Unless DOD establishes effective processes and controls to identify and control system investments occurring within DOD components and overcome parochial interests when corporatewide solutions are more appropriate, it will lack the ability to ensure compliance with the act. We fully recognize that developing and implementing an enterprise architecture for an organization as large and complex as DOD is a formidable challenge. Nevertheless, a well-defined architecture is essential to enabling some of the elements for successful reform that I discussed earlier. Accordingly, we remain supportive of the need for BMMP, but are deeply concerned about the program’s lack of meaningful progress and inability to address management challenges. Accordingly, we plan to continue working constructively with the department to strengthen the program and will report to this Subcommittee on DOD’s progress and challenges in the spring of 2005. While DOD’s former Comptroller started the financial improvement initiative with the goal of obtaining an unqualified audit opinion for fiscal year 2007 on its departmentwide financial statements, we found that the initiative was simply a goal that lacked a clearly defined, well-documented, and realistic plan to make the stated goal a reality. In September 2004 we reported that DOD’s financial improvement initiative lacked several of the key elements critical to success, including (1) a comprehensive, integrated plan, (2) results-oriented goals and performance measures, and (3) effective oversight and monitoring. Specifically, we found that DOD had not established a framework to integrate the improvement efforts planned by DOD components with broad-based DOD initiatives such as human capital and BMMP. Rather, DOD intended to rely upon the collective efforts of DOD components, as shown in their discrete plans, to address its financial management deficiencies while at the same time continuing its broad-based initiatives. However, the component plans we reviewed did not consistently identify whether a proposed corrective action included a manual work-around or business system enhancement or replacement. Further, the component plans lacked sufficient information regarding human capital needs, such as the staffing level and skills required to implement and sustain the plans. 
In addition, as we have previously reported, the department currently lacks a mechanism to effectively identify, monitor, and oversee business system investments, including enhancements, occurring within the department. Because of this lack of visibility over how DOD components plan to advance their financial management functionality, the DOD Comptroller and BMMP may not have sufficient information to assess the feasibility of a work-around or to review and approve all modifications to existing legacy business systems to ensure that they (1) are sound investments, (2) optimize mission performance and accountability, and (3) are consistent with applicable requirements and key architectural elements in DOD’s business enterprise architecture. In addition, our review of key individual component plans revealed that the plans varied in levels of detail, completeness, and scope, such that it will be difficult for DOD Comptroller staff to use the departmental database of component plans it was developing to oversee and monitor component efforts. We found that the component plans did not consistently identify how staff (human capital), processes, or business systems would be changed to implement corrective actions. Such changes are key elements in assessing the adequacy of a component’s plan and in monitoring progress and sustainability. Further, DOD lacked effective oversight and accountability mechanisms to ensure that the plans are implemented and corrective actions are sustainable. The database the department is currently using was not integrated electronically with subordinate component plans and the milestone dates identified in the component plans were generally based on assertion dates prescribed by the DOD Comptroller and not on actual estimates of effort required. Furthermore, task dependencies were not clearly identified, including critical corrective tasks that would need to be completed in order for the fiscal year 2007 audit opinion to be achieved. On the positive side, DOD had developed business rules, which if implemented as planned, should clearly establish a process for ensuring that corrective actions, as described in the component plans, are implemented and validated in order to minimize the department’s risk of unsupported claims by DOD components that reported financial information is auditable. Further, the business rules clearly recognize that management, not the auditor, is responsible for documenting business processes, systems, and internal control for collecting and maintaining transaction data. In addition, DOD’s involvement of its components in developing and implementing solutions to long-standing deficiencies in their business operations under this initiative is a critical and positive step toward obtaining the commitment and buy-in that has not been readily apparent in BMMP. Further, the recently enacted Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 has placed a limitation on continued preparation or implementation of DOD’s financial improvement initiative pending a report to congressional defense committees containing the following: (1) a determination that BEA and the transition plan have been developed, as required by section 332 of the act, (2) an explanation of the manner in which fiscal year 2005 operation and maintenance funds will be used by DOD components to prepare or implement the midrange financial improvement plan, and (3) an estimate of future year costs for each DOD component to implement the plan. 
DOD Comptroller staff acknowledged that their goal was ambitious but stated that they were laying a framework, which they believe will address the issues we have raised, to facilitate movement toward sustainable financial management improvements and eventually obtain an unqualified audit opinion.

In contrast to its broad-based initiatives, DOD has incorporated many of the key elements for successful reform in its interim initiatives. As the following examples demonstrate, leadership, real incentives, accountability, and oversight and monitoring were clearly key elements in DOD's efforts to improve its operations. For example, the former DOD Comptroller developed a Financial Management Balanced Scorecard that is intended to align the financial community's strategy, goals, objectives, and related performance measures with the departmentwide risk management framework established as part of DOD's Quadrennial Defense Review, and with the President's Management Agenda. To effectively implement the balanced scorecard, the DOD Comptroller has cascaded the performance measures down to the military services and defense agency financial communities, along with certain specific reporting requirements. At the departmentwide level, certain financial metrics are selected, consolidated, and reported to the top levels of DOD management for evaluation and comparison. These "dashboard" metrics are intended to provide key decision makers, including Congress, with critical performance information at a glance, in a consistent and easily understandable format. DFAS has been reporting the metrics cited below for several years, and under the leadership of DFAS's Director and DOD's Comptroller, these metrics have shown improvements, including the following:

- From April 2001 to September 2004, DOD reduced its commercial pay backlogs (payment delinquencies) by 72 percent.
- From March 2001 to September 2004, DOD reduced its payment recording errors by 77 percent.
- From September 2001 to September 2004, DOD reduced its delinquency rate for individually billed travel cards from 9.4 percent to 4.3 percent.

Using DFAS's metrics, management can quickly see when and where problems are arising and can focus additional attention on those areas. While these metrics show significant improvements from 2001 to today, our report last year on DOD's metrics program included a caution that, without modern integrated systems and the streamlined processes they engender, reported progress may not be sustainable if workload is increased.

DOD and the military services have also acted to improve their oversight and monitoring of the department's purchase card program and have taken actions that, when fully implemented, should effectively address all of our 109 recommendations. For example, they issued policy guidance on monitoring charge card activity and disciplinary actions that will be taken against civilian or military employees who engage in improper, fraudulent, abusive, or negligent use of a government charge card. In addition, they substantially reduced the number of purchase cards issued. According to General Services Administration records, DOD had reduced the total number of purchase cards from about 239,000 in March 2001 to about 131,875 in June 2004. These reductions have the potential to significantly improve the management of this program.
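The improvements cited above, from the dashboard metrics to the reduction in purchase cards, are simple relative-change calculations. As an illustration only, with a hypothetical function name rather than DFAS's or GSA's actual reporting code, the sketch below works the arithmetic for two of the figures: the drop from about 239,000 to about 131,875 purchase cards is roughly a 45 percent decline, and the travel card delinquency drop from 9.4 percent to 4.3 percent is roughly a 54 percent relative reduction.

```python
def relative_reduction(baseline: float, current: float) -> float:
    """Percentage decline from a baseline value to a current value."""
    if baseline == 0:
        raise ValueError("baseline must be nonzero")
    return (baseline - current) / baseline * 100.0

# Figures cited in the testimony.
print(f"Purchase cards: {relative_reduction(239_000, 131_875):.0f}% fewer")       # roughly 45%
print(f"Travel card delinquency: {relative_reduction(9.4, 4.3):.0f}% reduction")  # roughly 54%
```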
Further, the DOD IG and the Navy have prototyped and are now expanding a data-mining capability to screen for and identify high-risk transactions (such as potentially fraudulent, improper, and abusive use of purchase cards) for subsequent investigation. On April 28, 2004, the DOD IG testified on ways the department could save money through the prudent use of government purchase cards. The testimony highlighted improvements made in the management of the department’s purchase card program and areas for which additional improvements are needed. Specifically, the testimony identified actions the DOD IG had taken to partner with the DOD purchase card program management offices so that DOD could more proactively identify and prevent potential fraud, waste, and mismanagement. However, more still needs to be done because the testimony also discussed more than $12 million in fraudulent, wasteful, or abusive purchases identified by the DOD IG. In addition to the oversight and monitoring performed by DOD over these business areas, we believe that consistent congressional oversight played a major role in bringing about these improvements in DOD’s purchase and travel card programs. From 2001 through 2004, 10 separate congressional hearings were held on DOD’s purchase and travel card programs. Numerous legislative initiatives aimed at improving DOD’s management and oversight of these programs also had a positive impact. Most recently, the fiscal year 2005 Defense Appropriations Act reduced DOD’s appropriation by $100 million to “limit excessive growth” in DOD’s travel expenses. Another important initiative under way at the department pertains to the quarterly financial statement review sessions held by the DOD Comptroller, which have led to the discovery and correction of numerous recording and reporting errors. Under the leadership of DOD’s former Comptroller, and continuing under its new leadership, DOD is working to instill discipline into its financial reporting processes to improve the reliability of the department’s financial data. Specifically, the DOD Comptroller requires DOD’s major components to prepare quarterly financial statements along with extensive footnotes that explain any improper balances or significant variances from previous year quarterly statements. All of the statements and footnotes are analyzed by Comptroller office staff and reviewed by the Comptroller. In addition, the midyear and end-of-year financial statements must be briefed to the DOD Comptroller by the military service Assistant Secretary for Financial Management or the head of the defense agency. Under DOD’s former Comptroller, GAO and the DOD IG were invited to observe several of these briefings and noted that the practice of preparing and explaining interim financial statements has improved the reliability of reported information through more timely discovery and correction of numerous recording and reporting errors. Although these meetings are continuing under the current Comptroller, GAO and the DOD IG have not been invited to attend. I would like to reiterate two suggestions for legislative consideration that I discussed in my testimony last March, which I believe could further improve the likelihood of successful business transformation at DOD. Most of the key elements necessary for successful transformation could be achieved under the current legislative framework; however, addressing sustained and focused leadership for DOD business transformation and funding control will require additional legislation. 
These suggestions include the creation of a chief management official and the appropriation of business system investment funding to the approval authorities responsible and accountable for business system investments under provisions enacted by the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005.

While the Secretary and other key DOD leaders have demonstrated their commitment to the current business transformation efforts, in our view, the complexity and long-term nature of these efforts require the development of an executive position capable of providing strong and sustained executive leadership—over a number of years and various administrations. The day-to-day demands placed on the Secretary, the Deputy Secretary, and others make it difficult for these leaders to maintain the oversight, focus, and momentum needed to resolve the weaknesses in DOD's overall business operations. This is particularly evident given the demands that the Iraq and Afghanistan postwar reconstruction activities and the continuing war on terrorism have placed on current leaders. Likewise, the breadth and complexity of the problems preclude the under secretaries, such as the DOD Comptroller, from asserting the necessary authority over selected players and business areas while continuing to fulfill their other responsibilities. While sound strategic planning is the foundation upon which to build, sustained and focused leadership is needed for reform to succeed.

One way to ensure sustained leadership over DOD's business transformation efforts would be to create a full-time Executive Level II position for a chief operating officer or chief management official (COO/CMO), who would serve as the Principal Under Secretary of Defense for Management. This position would elevate, integrate, and institutionalize the attention essential for addressing key stewardship responsibilities, such as strategic planning, human capital management, performance and financial management, acquisition and contract management, and business systems modernization, while facilitating the overall business transformation operations within DOD. The COO/CMO concept is consistent with the commonly agreed-upon governance principle that there needs to be a single point within agencies with the perspective and responsibility—as well as authority—to ensure the successful implementation of functional management and transformation efforts. Governments around the world, such as those of the United Kingdom and Ireland, have established term-appointed positions, similar to the COO/CMO concept we propose, that are responsible for advancing and continuously improving agency operations. The DOD COO/CMO position could be filled by an individual, appointed by the President and confirmed by the Senate, for a set term of 7 years with the potential for reappointment. Articulating the roles and responsibilities of the position in statute helps to create unambiguous expectations and underscores Congress' desire to follow a professional, nonpartisan approach to the position. In that regard, such an individual should have a proven track record as a business process change agent in large, complex, and diverse organizations—experience necessary to spearhead business process transformation across the department and serve as an integrator for the needed business transformation efforts.
In addition, this individual would enter into an annual performance agreement with the Secretary that sets forth measurable individual goals linked to overall organizational goals in connection with the department's business transformation efforts. Measurable progress toward achieving agreed-upon goals would be a basis for determining the level of compensation earned, including any related bonus. In addition, this individual's achievements and compensation would be reported to Congress each year.

DOD's current systems investment process, in which system funding is controlled by DOD components, has contributed to the evolution of an overly complex and error-prone information technology environment containing duplicative, nonintegrated, and stovepiped systems. We have made numerous recommendations to DOD intended to improve the management oversight and control of its business systems modernization investments. However, as previously mentioned, progress in achieving this control has been slow. Recent legislation, consistent with the suggestion I made in my prior testimony, established specific management oversight and accountability with the "owners" of the various functional areas or domains. The legislation defined the scope of the various business areas (e.g., acquisition, logistics, finance and accounting) and established functional approval authority and responsibility for management of the portfolio of business systems with the relevant under secretary of defense for the departmental domains and the Assistant Secretary of Defense for Networks and Information Integration (information technology infrastructure). For example, the Under Secretary of Defense for Acquisition, Technology, and Logistics is now responsible and accountable for any defense business system intended to support acquisition activities, logistics activities, or installations and environment activities for DOD. The legislation also requires that the responsible approval authorities establish a hierarchy of investment review boards with DOD-wide representation, including the military services and defense agencies. The boards are responsible for reviewing and approving investments to develop, operate, maintain, and modernize business systems for their business area portfolio, including ensuring that investments are consistent with DOD's BEA.

Although the new legislation clearly assigns responsibility and accountability for system modernization to designated approval authorities, control over system investment funding remains at the DOD component level. As a result, DOD continues to have little or no assurance that its business systems modernization investment money is being spent in an economical, efficient, and effective manner. Given that DOD spends billions on business systems and related infrastructure each year, we believe it is critical that funds for DOD business systems be appropriated to those responsible and accountable for business system improvements. However, implementation may require review of the various statutory authorities for the military services and other DOD components. Control over the funds would improve the capacity of DOD's designated approval authorities to fulfill their responsibilities, increase transparency over DOD investments, and minimize the parochial approach to systems development that exists today.
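As described earlier in this statement, the act's investment controls amount to a simple gate: a business system modernization with a total cost above $1 million requires certification by the designated approval authority and approval by the Defense Business Systems Management Committee before funds may be obligated. The sketch below is illustrative only; the data model, field names, and two-flag check are hypothetical simplifications of the statutory conditions, not DOD's actual review process.

```python
from dataclasses import dataclass

CERTIFICATION_THRESHOLD = 1_000_000  # modernization cost threshold cited in the act, in dollars

@dataclass
class ModernizationInvestment:
    system_name: str
    total_modernization_cost: float
    certified_by_approval_authority: bool  # e.g., certification by the cognizant under secretary
    approved_by_dbsmc: bool                # Defense Business Systems Management Committee approval

def may_obligate_funds(investment: ModernizationInvestment) -> bool:
    """Apply the simplified gate: below the threshold, the certification requirement
    does not apply; above it, both certification and committee approval are needed."""
    if investment.total_modernization_cost <= CERTIFICATION_THRESHOLD:
        return True
    return investment.certified_by_approval_authority and investment.approved_by_dbsmc

# Example: a hypothetical $3.2 million modernization lacking committee approval would be blocked.
example = ModernizationInvestment("hypothetical logistics system", 3_200_000, True, False)
print(may_obligate_funds(example))  # False
```

In practice the statutory conditions are more detailed than two flags, but the portfolio-level idea is the same: obligations above the threshold are held until the responsible authority has signed off.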
In addition, to improve coordination and integration activities, we suggest that all approval authorities coordinate their business system modernization efforts with the chief management official, who would chair the Defense Business Systems Management Committee. Cognizant business area approval authorities would also be required to report to Congress, through the chief management official and the Secretary of Defense, on applicable business systems that are not compliant with review requirements and to include a summary justification for noncompliance.

The United States is facing large and growing long-term fiscal pressures created by the impending retirement of the baby boom generation, rising health care costs, increased homeland security and defense commitments, and a reduction in federal revenues. These pressures not only sharpen the need to look at competing claims on existing federal budgetary resources and emerging new priorities but also underscore the need for transparent and reliable information upon which to base decisions at all levels within the federal government. This includes timely, useful, and reliable financial and management information that demonstrates what results are being achieved and what risks are being incurred by various government programs, functions, and activities. As I have discussed, DOD lacks the efficient and effective financial management and related business operations, including processes and systems, to support the war fighter, DOD management, and Congress. With a large and growing fiscal imbalance facing our nation, achieving tens of billions of dollars of annual savings through successful DOD transformation is increasingly important.

DOD's senior leaders have demonstrated a commitment to transforming the department and improving its business operations. Recent legislation pertaining to defense business systems, enterprise architecture, accountability, and modernization, if properly implemented, should improve oversight and control over DOD's significant system investment activities. However, DOD's transformation efforts and legislation to date have not adequately addressed key underlying causes of past reform failures. Successful transformation will require an effective transformation plan; adequate human capital; effective processes and transformation tools, such as a BEA; and results-oriented performance measures that link institutional, unit, and individual personnel goals and expectations.

Reforming DOD's business operations is a monumental challenge, and many well-intentioned efforts have failed over the last several decades. Lessons learned from these previous reform attempts include the need for sustained and focused leadership at the highest level, with appropriate authority over all of DOD's business operations, as well as centralized control of all business transformation-related funding with the designated approval authorities assigned responsibility for transformation activities within their specific business process areas. This leadership could be provided through the establishment of a Chief Operating Officer/Chief Management Official. Absent this leadership, authority, and control of funding, the current transformation efforts are likely to fail. I commend the Subcommittee for holding this hearing, and I encourage you to use this vehicle, on an annual basis, as a catalyst for long overdue business transformation at DOD. Mr. Chairman, this concludes my statement.
I would be pleased to answer any questions you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov, Randolph C. Hite at (202) 512-3439 or hiter@gao.gov, Sharon Pickup at (202) 512-9619 or pickups@gao.gov, or Evelyn Logue at (202) 512-3881 or loguee@gao.gov. Other key contributors to this testimony include Catherine Baltzell, Sandra Bell, Molly Boyle, Peter Del Toro, Francine DelVecchio, Bill Doherty, Abe Dymond, Cynthia Jackson, John Kelly, Neelaxi Lakhmani, Elizabeth Mead, Chris Mihm, Mai Nguyen, John Ryan, Lisa Shames, Darby Smith, and Marilyn Wasleski. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In March 2004, GAO testified before the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services on the impact and causes of financial and related business weaknesses on the Department of Defense's (DOD) operations and the status of DOD reform efforts. GAO's reports continue to show that fundamental problems with DOD's financial management and related business operations result in substantial waste and inefficiency, adversely impact mission performance, and result in a lack of adequate transparency and appropriate accountability across all major business areas. Over the years, DOD leaders have initiated a number of efforts to address these weaknesses and transform the department. For years, GAO has reported that DOD is challenged in its efforts to effect fundamental financial and business management reform, and GAO's ongoing work continues to raise serious questions about DOD's chances of success. The Subcommittee asked GAO to provide a current status report on DOD's progress to date and suggestions for improvement. Specifically, GAO was asked to provide (1) an overview of the impact and causes of weaknesses in DOD's business operations, (2) the status of DOD reform efforts, (3) the impact of recent legislation pertaining to DOD's transformation and financial improvement initiatives, and (4) suggestions for improving DOD's efforts to improve the reliability of its financial information. Although senior DOD leaders have shown commitment to transformation as evidenced by key initiatives such as human capital reform, the Business Management Modernization Program, and the Financial Improvement Initiative, little tangible evidence of improvement has been seen in DOD's business operations. Overhauling the business operations of one of the largest and most complex organizations in the world represents a huge management challenge, especially given the increased demands on our military forces. However, this challenge can be met if DOD employs key elements, such as a comprehensive and integrated business transformation plan. Six DOD program areas are on GAO's high-risk list, and the department shares responsibility for three other governmentwide high-risk areas. Substantial weaknesses in DOD business operations adversely affect its ability to provide timely, reliable management information for DOD and Congress to use in making informed decisions. Further, the lack of adequate transparency and appropriate accountability across all of DOD's major business areas results in billions of dollars annually in wasted resources in a time of increasing fiscal challenges. Four underlying causes impede reform: (1) lack of clear and sustained leadership for overall business transformation efforts, (2) cultural resistance to change, (3) lack of meaningful metrics and ongoing monitoring, and (4) inadequate incentives and accountability mechanisms. To address these issues, GAO reiterates the key elements to successful reform that are embodied in our prior recommendations and two suggestions for legislative action. First, GAO suggests that a senior management position be established to provide strong and sustained leadership over all major transformation efforts. Second, GAO proposes that business systems modernization money be appropriated to designated approval authorities responsible and accountable for system investments within DOD business areas. Absent this unified responsibility, authority, accountability, and control of funding, the current transformation efforts are likely to fail.
The 50 states, the District of Columbia, and the 5 U.S. territories (hereafter referred to collectively as "states") each administer a state-based Medicaid program. Federal laws authorize both federal and state entities to protect the program from fraud, waste, and abuse. Specifically, various provisions of federal law give CMS the authority to oversee Medicaid program integrity and to set requirements with which state Medicaid programs must comply. CMS oversees the states' Medicaid programs by providing administrators with guidance related to statutory and regulatory requirements, as well as technical assistance on specific program integrity activities, such as the implementation of supporting information systems. Further, the Deficit Reduction Act of 2005 established the Medicaid Integrity Program within CMS to support and oversee state program integrity efforts. To carry out its oversight responsibilities, CMS established within that program the Medicaid Integrity Group, which was responsible for conducting comprehensive reviews of states' Medicaid program integrity activities to assess their compliance with federal program integrity laws and regulations. Administrators of the 56 state-based programs are responsible for the day-to-day operations, including program integrity activities, of Medicaid. State Medicaid administrators employ the expertise of program integrity analysts to screen providers and determine whether the providers are eligible to enroll in the program. These analysts are also responsible for reviewing claims filed for services before they are paid, and for reviewing claims after they have been paid. Provider enrollment: When enrolling providers to participate in the program, states are to first verify the providers' eligibility. As part of the enrollment screening process, state program integrity analysts collect certain information about the providers, which may include the results of any criminal background checks and whether they are identified on lists that exclude or bar them from participating in other states' Medicaid programs or the federal Medicare program. Any providers who are determined to be ineligible as a result of information obtained through the screening process are excluded from participating in Medicaid. Prepayment claims review: The states also conduct reviews of claims data submitted by providers prior to payment in attempts to ensure that the claims were filed properly. For example, program integrity analysts conduct reviews to identify errors in individual claims, such as incorrect medical codes, and return claims that are found to have errors to the providers, thus preventing payment of such claims until the errors are corrected. The analysts may also compare claims data to prior incidents of known fraudulent behavior in their efforts to identify providers for further investigation. Post-payment claims review: Medicaid administrators also are to take steps to identify payments that were made to providers for improperly filed claims. In this regard, states' program integrity analysts may compare data from multiple paid claims to related provider records as they attempt to identify behaviors consistent with fraudulent activity that had been identified previously. Providers demonstrating such behaviors would then be subjected to additional review by states' auditors and investigators, who are tasked to take actions intended to recover the amounts reimbursed for improper or fraudulent claims. 
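To make the prepayment review step concrete, the sketch below shows, in Python, the general shape of an edit that checks an individual claim for coding errors before payment is approved. The claim layout, the sample procedure codes, and the code-pair rule are illustrative assumptions, not any state's actual edit logic.

```python
# Minimal sketch of a prepayment edit, assuming a simplified claim record.
# Real MMIS edits are coded into the claims processing subsystem and compare
# claims data to program requirements before payment is approved.

# Hypothetical pairs of procedure codes that should not appear on the same claim.
MUTUALLY_EXCLUSIVE_PAIRS = {("99213", "99214")}  # illustrative codes only

def prepayment_edit(claim):
    """Return a list of error messages; an empty list means the claim passes."""
    errors = []
    codes = claim["procedure_codes"]

    # Edit 1: the same procedure code billed twice for the same beneficiary
    # on the same date of service.
    if len(codes) != len(set(codes)):
        errors.append("duplicate procedure code on a single date of service")

    # Edit 2: pairs of codes that program rules say cannot be billed together.
    for a, b in MUTUALLY_EXCLUSIVE_PAIRS:
        if a in codes and b in codes:
            errors.append(f"codes {a} and {b} may not be billed together")

    return errors

# Example claim; a claim with errors would be denied or returned for correction.
claim = {"beneficiary_id": "B001", "date_of_service": "2015-06-01",
         "procedure_codes": ["99213", "99213"]}
print(prepayment_edit(claim))  # ['duplicate procedure code on a single date of service']
```

A claim that returns any errors would be denied or sent back to the provider for correction, consistent with the prepayment review process described above.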
Figure 1 presents a simplified illustration of the provider enrollment, prepayment review, and post-payment review activities. The implementation of information systems is integral to states' efforts to conduct the program integrity activities covering provider enrollment through post-payment claims review. In this regard, the Social Security Act, as amended, provides that, to receive federal funds for Medicaid, every state must implement a claims processing and information retrieval system to support the administration of the program. For the Medicaid program, this system is the Medicaid Management Information System (MMIS). In accordance with the act, as amended, and relevant regulations, CMS further defined criteria that states must meet to be approved to receive federal funds, including the implementation of system functionality that supports key Medicaid business areas. Such areas would include program performance management, business relationships, and operations management. Program integrity is a component of the performance management business area. CMS also defined requirements for implementing MMISs, including various subsystems that support program integrity activities, such as provider screening, claims processing, and utilization reviews. The MMIS provider subsystem is to be used to enroll and maintain a state's network of providers for serving the Medicaid beneficiary population. Among other things, this subsystem is to include functionality needed to determine the eligibility of the providers participating in Medicaid. For example, the system is to give Medicaid program administrators the ability to cross-reference license and sanction information with other states and federal agencies in order to identify providers who may not be eligible to enroll. State Medicaid programs are to define and implement functionality within this subsystem to validate providers' enrollment based on state-specific criteria, such as license and permit expiration dates. The MMIS claims processing subsystem is to be used to review data from claims filed by providers before they are paid and is to provide functionality needed to prevent improper payments of claims. For example, when analyzing claims data prior to payment, this subsystem is to be used by Medicaid administrators to identify improperly filed claims through the implementation of prepayment edits—i.e., instructions that system developers code into the subsystem to electronically compare claims data to program requirements in order to ensure that claims are filed properly before they are approved for payment. Any claims that do not pass such edits are denied for payment or flagged for additional review by program integrity analysts. The MMIS surveillance and utilization review subsystem (SURS) is to be used by program integrity analysts when they conduct post-payment reviews of claims in an attempt to detect any that were paid improperly. Specifically, the subsystem provides functionality to analyze data supporting the denial or payment of multiple claims submitted by a particular provider to help identify patterns that may indicate inappropriate provider behavior and, therefore, detect improper payments of claims. For example, payments made to a provider for an unusually large number of services for an uncommon type of procedure over a relatively short period of time could indicate fraudulent behavior on that provider's part and, therefore, warrant additional review or investigation of the provider's practices. 
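As a rough illustration of the provider subsystem's eligibility checks described above, the following minimal sketch screens a hypothetical enrollment application against an exclusion list and a state-specific license expiration rule. The data structures, field names, and identifiers are assumptions made for the example and do not reflect the MMIS interfaces themselves.

```python
from datetime import date

# Hypothetical exclusion list, e.g., providers terminated by other state
# Medicaid programs or barred from Medicare.
EXCLUDED_PROVIDER_IDS = {"NPI-1003", "NPI-2077"}

def screen_provider(application, today=date(2015, 1, 1)):
    """Return (eligible, reasons) for a hypothetical enrollment application."""
    reasons = []

    # Cross-reference sanction and exclusion information.
    if application["npi"] in EXCLUDED_PROVIDER_IDS:
        reasons.append("provider appears on an exclusion or termination list")

    # State-specific criterion: the professional license must not be expired.
    if application["license_expiration"] < today:
        reasons.append("professional license has expired")

    return (len(reasons) == 0, reasons)

application = {"npi": "NPI-1003", "license_expiration": date(2016, 3, 31)}
print(screen_provider(application))
# (False, ['provider appears on an exclusion or termination list'])
```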
Additionally, within their MMIS IT environment, states may implement other components, such as databases and data warehouses, to store the beneficiary, claims, and provider data that are collected for processing and analysis by the system and its subsystems. Further, in accordance with the Patient Protection and Affordable Care Act (PPACA), CMS identified certain prepayment edits and required state Medicaid administrators to incorporate these edits into their MMIS claims processing subsystems. Specifically, states are to implement functionality for identifying incorrect coding on Medicaid claims that, if undetected, could lead to improper payments for ambulatory surgical center services, outpatient hospital services, and durable medical equipment. Prepayment edits that provide such functionality were developed through efforts of the National Correct Coding Initiative (NCCI)—a program implemented by CMS in 1996 for the Medicare fee-for-service program. Through this initiative, CMS defined more than a million standard claims processing prepayment edits to identify coding errors that are applicable to state programs. For example, some of the edits can identify pairs of medical billing codes that indicate to program integrity analysts any services that should not be reported together, such as two codes for the same service for the same beneficiary on the same date. In such cases, the first code would be eligible for payment but the second code would be denied. Other NCCI edits required for Medicaid programs are designed to identify procedures that could not be performed during a patient's visit because they would not be feasible based on anatomic or gender considerations. For example, processing claims data against the edits may identify services such as prenatal treatment for a male patient that would not be a likely or feasible medical service. In addition to the NCCI edits, states may design and implement prepayment edits based on their own program experiences and needs to identify improperly filed claims and prevent payment of such claims. Provider enrollment and pre- and post-payment claims data review activities, and the MMIS subsystems that support them, were designed primarily to address program integrity goals of states' delivery of fee-for-service health care to Medicaid beneficiaries. In fee-for-service plans, providers are paid for each service that is delivered; they file claims for reimbursement from Medicaid that include detailed data specific to the service delivered during a patient's visit. However, as we noted in May 2014, over the past 15 years, states have more frequently implemented managed care delivery systems for providing health care services for Medicaid beneficiaries. With managed care delivery, beneficiaries obtain some or all of their medical services from organizations of providers that are under contract with the state to provide Medicaid benefits in exchange for a monthly payment. The payments to these managed care organizations are typically made by the state Medicaid programs on a predetermined, per-person basis. While the individual managed care providers do not file claims for reimbursement by Medicaid, the managed care organizations are expected to report data to state Medicaid programs that allow the Medicaid administrators to track the services received by beneficiaries enrolled in managed care. 
These data are referred to as encounter data and are obtained from the claims for reimbursement that providers submit to their managed care organizations for services delivered. Encounter data are similar to the fee-for-service claims data, but they typically do not include the same level of detail, and specific encounter data elements may be defined differently than they are for claims data. For example, encounter data generally would not include a Medicaid-billed amount for a particular beneficiary's visit to a provider because the state does not pay the provider directly. In contrast, the data included on a Medicaid fee-for-service claim would include a specific amount for services delivered to a beneficiary during a visit since providers in fee-for-service plans bill and are reimbursed on a service-by-service basis. Thus, all the data needed for analyses by MMISs and other systems that were designed to process fee-for-service claims data will not always be consistent or available from the encounter data that managed care organizations collect and report to state Medicaid program administrators. In contrast to the program integrity reviews conducted when administering fee-for-service plans, which are largely based on pre- and post-payment review of claims data, states' oversight of managed care organizations often occurs through contracts and reporting requirements. All 10 of the states in our study had implemented MMIS subsystems to support their program integrity efforts. Three states reported that they were operating MMISs that were implemented more than 20 years ago, while 7 states had upgraded their subsystems in the past 13 years, and 2 of those reported having done so in the past 2 years. Further, 7 states had, in the past 10 years, implemented other new and more advanced systems, in addition to their MMISs, to meet specific needs related to enrolling providers and processing claims data. Medicaid administrators in the 9 states that administer fee-for-service plans described a number of ways that they use their various systems to help improve the outcomes of their program integrity efforts, and 4 states reported that they had implemented specific functionality needed to support program integrity activities for their managed care plans. Consistent with the requirements defined by CMS, the selected state Medicaid programs use the MMIS provider and claims processing subsystems to perform program integrity activities related to provider enrollment and prepayment review. For example, all 10 of the states have incorporated NCCI edits into their claims processing subsystems, as required by CMS, to help identify and prevent potential improper payments. Six of the states also had developed and implemented prepayment edits other than the required NCCI edits that incorporate additional criteria for conducting prepayment reviews of claims data to help prevent improper payments. Likewise, nine of the selected states use SURS to help detect payments that may have been made to providers improperly. Medicaid administrators in these states told us that they use this subsystem to identify suspicious patterns of provider behavior that are not evident during the prepayment claims data review. For example, SURS can be used to analyze post-payment data for multiple claims at a time in order to identify suspicious provider billing patterns that are not detectable by the claims processing subsystem, which is used to process one claim at a time. 
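The post-payment pattern analysis attributed to SURS might resemble the following simplified sketch, which totals paid claims by provider and month and flags months that spike well above a provider's typical billing. The data layout, the spike threshold, and the statistic used are illustrative assumptions rather than the actual SURS algorithms.

```python
from collections import defaultdict
from statistics import mean

def flag_billing_spikes(paid_claims, spike_factor=3.0):
    """Flag providers whose monthly paid total jumps well above their usual level.

    paid_claims: iterable of (provider_id, 'YYYY-MM', paid_amount) tuples.
    Returns a list of (provider_id, month, monthly_total) suspected spikes.
    """
    # Sum paid amounts by provider and month.
    monthly = defaultdict(float)
    for provider_id, month, amount in paid_claims:
        monthly[(provider_id, month)] += amount

    by_provider = defaultdict(dict)
    for (provider_id, month), total in monthly.items():
        by_provider[provider_id][month] = total

    flags = []
    for provider_id, months in by_provider.items():
        for month, total in sorted(months.items()):
            # Compare each month to the average of the provider's other months.
            others = [v for m, v in months.items() if m != month]
            if not others:
                continue
            if total > spike_factor * mean(others):
                flags.append((provider_id, month, total))
    return flags

claims = [("P1", "2014-01", 900.0), ("P1", "2014-02", 950.0),
          ("P1", "2014-03", 12000.0),  # a sudden spike worth reviewing
          ("P2", "2014-01", 500.0), ("P2", "2014-02", 520.0)]
print(flag_billing_spikes(claims))  # [('P1', '2014-03', 12000.0)]
```

In practice, a provider flagged this way would be referred to program integrity analysts for further review rather than automatically treated as having billed improperly.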
The following examples describe ways that the selected states have implemented functionality into the required MMIS provider subsystem, claims processing subsystem, and SURS to support their program integrity activities. California, which administers its Medicaid program as both fee-for-service and managed care, implemented its MMIS claims processing subsystem in the 1980s. The subsystem includes approximately 1,500 prepayment edits that were developed and implemented by the state and that are applied in addition to, and after claims data pass through, the NCCI edits. These additional edits are applied to claims data prior to payment and are designed to help prevent claims from being paid improperly. According to the state's Medicaid administrators, these additional prepayment edits were developed based on previous improper provider billing activity identified by the state, and may be used to identify claims for services that exceed limitations, such as for drug costs and uses. Beyond this subsystem, administrators also reported that they use SURS to query post-payment fee-for-service data for claims that were submitted over a period of time to identify suspicious activity and trends, such as spikes in payments to providers for a certain type of service. In such cases, the providers identified by SURS may be subjected to further review by program integrity analysts. For example, the analysts may analyze additional data, such as data on prior paid claims, to determine whether the payments were made improperly or whether the activities could indicate potential fraud. Maryland's MMIS claims processing subsystem was implemented in 1984 to analyze fee-for-service claims data and identify errors in claims that could lead to improper payments to providers. Program administrators had also integrated managed care organizations' encounter data into their SURS so that the data would be available for post-payment reviews of payments made to providers within the managed care organizations. Mississippi implemented its MMIS in 2003. The state requires its managed care organizations to report the same data that fee-for-service providers report on claims; thus, the program integrity functionality implemented in the state's MMIS subsystems could be used for both types of plans. Among other uses, the state relies on SURS to examine multiple claims submitted by a provider to identify those whom it suspects are submitting claims improperly. For example, the system can be used to identify patterns of providers' billing practices that may indicate that they are submitting claims to Medicaid for mental health day treatment when they are actually providing day care services, which are not billable to Medicaid. North Carolina's Medicaid administrators implemented their MMIS in 2013. The system includes an automated provider credentialing and enrollment function, along with claims processing functionality that integrates pre-payment edits, business rules, program logic, and other user-defined criteria to help identify potential improper payments in the state's fee-for-service plan. Program integrity analysts who use the claims processing subsystem are able to select multiple provider- or claims-based criteria for suspending claims so that they can be reviewed prior to payment. 
Tennessee implemented its MMIS in 2004 to support the administration of the state's managed care Medicaid program. According to the program administrator, the MMIS provider subsystem, claims processing subsystem, and SURS are used to collect and process all the data created by the state's managed care organizations, including provider enrollment and claims data for individual providers. Program integrity staff rely on the claims processing subsystem as they review all providers' claims data submitted by the managed care organizations, and the subsystem incorporates algorithms and NCCI prepayment edits to identify potential payment of improper claims filed by providers with managed care organizations. By requiring managed care organizations to report detailed claims data, Tennessee administrators are able to use their systems to support program integrity activities as if the state were operating a fee-for-service model, unlike other managed care plans that only collect encounter data. Texas, which administers its Medicaid program as both fee-for-service and managed care, implemented its MMIS, including the claims processing subsystem and the current SURS, in 2009 to process and screen fee-for-service claims data. The Texas MMIS includes thousands of prepayment edits in addition to the required NCCI edits. Further, the state uses SURS to query post-payment claims data to help identify suspicious activity and trends, such as spikes in payments to providers. In these cases, the providers identified by SURS would be subjected to further review by program integrity analysts or the state's investigators to determine whether the targeted providers had improperly billed Medicaid. U.S. Virgin Islands, which administers Medicaid as fee-for-service, implemented its MMIS in 2013 to automate its manual program administration processes. In addition to the required NCCI edits, the territory has incorporated unique prepayment edits in the claims processing subsystem. According to the administrator for the program, the territory also uses SURS to conduct post-payment reviews of claims to detect payments to providers that may have been made improperly. Virginia, which administers Medicaid as a combination of fee-for-service and managed care, implemented its MMIS in 2003. The provider subsystem includes functionality that can automatically identify providers that have been excluded from other Medicaid programs, Medicare, and other federal programs. The system automatically identifies providers that are required to be revalidated before they are eligible to submit claims and be reimbursed for services covered by Medicaid. (CMS's provider screening and enrollment regulations at 42 C.F.R. § 455.414 require states, beginning March 25, 2011, to complete revalidation of enrollment for all providers, regardless of provider type, at least every 5 years.) Further, the claims processing subsystem includes prepayment edits in addition to the NCCI edits. The state also has implemented a commercial software package that edits fee-for-service claims data after they have been processed by the MMIS but before providers are paid. According to the program administrator, these edits are applied to provide additional assurance that billing codes and other data on the claims are accurate. Beyond using the required MMIS provider subsystem, claims processing subsystem, and SURS, Medicaid administrators in 7 of the 10 selected states have implemented additional systems and functionality. 
These include data analytics, claims data verification systems, and an in-home care monitoring system, intended to enhance the outcomes of efforts to prevent and detect improper payments to providers. Specifically, among these 7 states: California implemented a separate decision support system and data warehouse in 2008 to assist with identifying overpayments or erroneous payments for both fee-for-service claims and managed care encounters that were not detected by the state's MMIS SURS. For example, state administrators said that the results of the decision support system's automated analyses are used to identify irregular provider behavior indicated by spikes in payments to providers, which may lead to further analysis by program integrity analysts to identify patterns of fraud, waste, and abuse and, consequently, the detection of improper payments. The warehouse stores historical data on providers and claims that were collected over time from the MMIS databases. Maryland implemented a new system in 2013 that provides additional information about providers' behavior to enhance the state's ability to prevent improper payments to in-home care providers. Specifically, the system allows automated monitoring of the individuals who provide in-home services within the fee-for-service program and is used before claims for the services are submitted to and processed by the MMIS. The state requires these providers to use the system to check in and out via phone when they visit a participant's home. The care provider can either use a land line at the participant's house, or, if a land line is not available, a passcode along with a password device, which is issued to the patient and must be kept at the patient's home. According to the state's Medicaid administrator, the use of the in-home care system helps program integrity analysts verify that the personal care provider actually visited the patient. Specifically, when a provider checks in at a participant's home, the system records and integrates data into the provider's records, which are accessible to the MMIS claims processing subsystem. The claims processing subsystem then automatically compares the provider's records, which indicate when they visited patients, to their claims data to verify that in-home visits were actually made at the times for which claims were filed. Thus, the in-home care systems can be used to identify any providers that filed claims but did not check in using the system, and help the Medicaid administrators prevent improper payments for claims filed for services that were not delivered. North Carolina built an additional system of data marts in 2013, which, according to the state's administrators, is used after MMIS processing has been completed to analyze both paid and denied fee-for-service claims data needed to help detect improper payments. Consequently, the results of the system's analysis can be used to identify repeated provider billing patterns that were determined to be improper and denied in the past. Texas implemented an additional data analytics system in 2013 to mine and analyze fee-for-service claims data collected by its MMIS claims processing subsystem and stored in an MMIS database. 
According to the state’s Medicaid administrators, the system retrieves the data from the MMIS database and provides data warehousing and mining capabilities that allow investigators to query the data in a way that reveals patterns and relationships between data on beneficiaries, providers, and locations and dates of service. The technology is used to establish not simply what happens, but also the relationships that explain why things happen—information that is not provided by the analyses conducted by MMIS subsystems. Kentucky implemented a commercial off-the-shelf data analytics system in April 2014 that is used to conduct additional analysis of both fee-for-service claims and managed care encounter data after the data have been analyzed by the MMIS subsystems. The system is used to determine, for example, whether a significant increase in claims or encounters is the result of an increase in a provider's office size or an indicator of improper billing by the provider. According to Kentucky's administrators, the enhanced functionality available through this system provides a broader overview of data than the MMIS and enables program integrity analysts to detect interconnections between providers to identify and prevent payments of claims filed as a result of fraudulent activities, such as kickback fees paid from one provider to another for a fake referral. Mississippi implemented a new data verification system in 2007, which retrieves and processes claims and encounter data that were reported to and stored in the state's MMIS database. The system can be used in addition to the MMIS to detect improper payments in both the program's fee-for-service and managed care plans. The system produces reports for different purposes, such as for post-payment claims reviews and provider audits. For example, a report may identify patterns of mental health providers submitting claims for services not billable to Medicaid, which in turn may raise questions about those providers' billing patterns and warrant further review by program integrity analysts to determine whether the claims were paid improperly. Tennessee relies on a data warehouse and analytic capabilities implemented in about 2004 to detect improper payments for services provided through managed care organizations. The state uses the system to conduct analyses of data reported by managed care organizations and stored by the MMIS. The warehouse maintains 5 years of encounter data collected from the program's managed care organizations, and retrieves current encounter data that were collected and stored using the state's MMIS claims processing subsystem. The warehouse is mined for data to be used in analyses that could lead to additional audit reviews or investigations. For example, the capabilities are used to identify patterns in providers' billing practices, based on historical data, that support preliminary analyses of provider referrals received from managed care organizations, as well as referrals developed internally through data mining. The results of the analyses help the state's Medicaid administrators determine whether to investigate providers for whom suspicious behavior is detected. Figure 2 illustrates the selected states' program integrity activities and how the MMIS subsystems and other implemented systems have been integrated to support Medicaid provider enrollment, claims processing prepayment review, and post-payment review. 
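As one simplified illustration of the in-home visit verification that Maryland's administrators described, the sketch below compares hypothetical check-in records against claims for personal care visits and flags any claim with no matching visit record. The record formats and the matching rule are assumptions made for the example, not the state's actual system.

```python
def flag_unverified_visits(claims, checkins):
    """Return claims for in-home visits that have no matching check-in record.

    claims:   list of dicts with provider_id, beneficiary_id, date_of_service
    checkins: list of dicts with provider_id, beneficiary_id, visit_date
              (recorded when the provider checks in at the participant's home)
    """
    verified = {(c["provider_id"], c["beneficiary_id"], c["visit_date"])
                for c in checkins}
    return [claim for claim in claims
            if (claim["provider_id"], claim["beneficiary_id"],
                claim["date_of_service"]) not in verified]

claims = [
    {"provider_id": "P10", "beneficiary_id": "B55", "date_of_service": "2014-09-03"},
    {"provider_id": "P10", "beneficiary_id": "B55", "date_of_service": "2014-09-04"},
]
checkins = [
    {"provider_id": "P10", "beneficiary_id": "B55", "visit_date": "2014-09-03"},
]
# The second claim has no check-in record and would be held for review
# before payment rather than paid automatically.
print(flag_unverified_visits(claims, checkins))
```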
In accordance with federal laws and agency program integrity plans, CMS takes steps to support states' efforts to implement information systems that help prevent and detect improper payments in the Medicaid program. In particular, the agency provides states access to various sources of data that it maintains for its own use in administering the Medicare and Medicaid programs, along with technical guidance and training offered through the Medicaid Integrity Institute and other agency components. CMS also reviews and approves states' requests for federal financial assistance offered through a matching funds program that supports the development, operations, and maintenance of information systems used for Medicaid administration, including program integrity. While the states in our study found these resources useful for improving the outcomes of systems they used to help prevent and detect improper payments, only 3 of the 10 states quantified and measured financial benefits achieved as a result of using the systems. As a result, CMS and the selected states do not have the information needed to determine the effectiveness of the systems. The Patient Protection and Affordable Care Act requires that CMS establish a process to make available to state agencies information about individuals and entities terminated from participating in Medicare, Medicaid, or the Children's Health Insurance Program. Access to these data is intended to assist states in their efforts to determine whether providers are eligible to participate in Medicaid. To respond to the requirements of the act, CMS defined an objective in its Comprehensive Medicaid Integrity Plan for fiscal years 2014-2018 to increase state Medicaid agency access to Medicare program integrity data. Further, in accordance with its plans, CMS provides states access to data that it maintains about Medicare and Medicaid providers. These data are intended to help Medicaid programs screen providers seeking to participate in Medicaid and to identify potential improper payments during post-payment reviews of claims. The 10 states described data that CMS currently makes accessible to Medicaid administrators through four systems that it operates and maintains to support the Medicare and Medicaid programs: the Termination Notification Server, Provider Enrollment Chain and Ownership System (PECOS), Medicare Exclusion Database, and Fraud Investigation Database. States may use their own systems to manually log in and connect with CMS's systems to conduct online queries of the databases. In these cases, the data received in response to the queries are not automatically integrated into the states' systems. Alternatively, Medicaid staff may program their MMISs and other systems to automatically access and query CMS's databases, and then download and integrate response data into their systems for use in automated processes, such as provider screening and prepayment claims review. In particular, CMS allows states access to the Termination Notification Server, which it established and maintains for sharing information with all state Medicaid programs about the Medicaid providers reported to have been excluded or terminated by any state. Medicaid administrators in six of the selected states told us they use data from the server to support their efforts to screen providers seeking to participate in their Medicaid programs and identify those who may have been terminated from other Medicaid programs for causes such as fraudulent activity. 
The states are also given access to PECOS—a system to which Medicare providers submit and update their enrollment data. This Internet-based system is used by the states to obtain information on providers eligible to participate in Medicare and Medicaid. In particular, while the data stored in the system are specific to Medicare providers, they are nonetheless useful to state Medicaid programs because many providers participate in both programs. For example, states may use the data when screening providers during enrollment processes to determine whether a provider has ever been excluded from participation in Medicare and, thus, whether the provider should be allowed to participate in Medicaid. They also use PECOS data during provider screening to determine whether a Medicare screening has already taken place, thus eliminating the need to screen further for Medicaid participation. Eight of the 10 states reported using PECOS. Further, states are allowed to access provider termination data via the Medicare Exclusion Database. This database is accessed by users who may download files of monthly provider sanctions and reinstatement data and perform inquiries on excluded providers. Three of the selected states told us that they access data from the Medicare Exclusion Database to help program staff screen providers. Finally, according to agency officials with the Data & Systems Group for Medicaid, states may also access the Fraud Investigation Database—a nationwide data entry and reporting system that the agency established to monitor fraudulent activity and payment suspensions related to Medicare and Medicaid providers. The database was designed to capture data from the point when potential fraud is substantiated to the final resolution of a case. CMS updates the database with information regarding fraud investigations in the Medicare program, and state administrators enter data regarding their own investigations of potential fraud in their state-based Medicaid programs. Administrators in one state told us that they use information obtained from the Fraud Investigation Database. The Center for Medicaid and CHIP Services Data & Systems Group is responsible for overseeing the collection of information from the states that is necessary for effective administration of the Medicaid program and for ensuring program integrity. In carrying out responsibilities under PPACA, CMS provided technical guidance and support to help state Medicaid programs implement information systems. Accordingly, CMS's Comprehensive Medicaid Integrity Plan defines the agency's objectives to expand training and other technical support activities offered through its Medicaid Integrity Institute for administrators of state programs. To address the objectives of the plan, CMS provides states with various types of guidance and training opportunities related to technologies that could be implemented to help identify improperly filed claims. For example, CMS provides technical guidance to states on incorporating NCCI edits into their MMIS claims processing systems. The agency describes specifications and instructions for state Medicaid programs to incorporate new or modified NCCI edits into their MMISs on a quarterly basis. CMS also provides states with files that include functionality for performing the edits. 
According to officials with CMS’s Data & Systems Group, some states have updated their MMISs to incorporate certain capabilities that enable them to download the files from a CMS website and integrate them directly into their IT environment, thus reducing the amount of effort needed to implement the edits into their MMIS. However, states that have not updated their legacy MMISs to enable this capability have to make programming changes in order to implement the edits each quarter of the year. The updates are provided 15 days prior to the first day of the next quarter, and the states are to implement the updated edits within the next 4 weeks. CMS also offers training and other technical assistance to help states integrate and analyze managed care encounter data. According to CMS officials with the Medicaid Integrity Institute, the agency determines state Medicaid administrators' needs for continuing education based on information collected by surveys administered during training sessions. These opportunities are fully supported with federal funds at no cost to the states. State Medicaid programs described various ways that their staff had participated in training activities and collaborations with other states that were conducted by the Medicaid Integrity Institute. For example, they attended sessions in which Medicaid data experts gathered to exchange ideas and develop best practices on topics such as integrating data from various sources, predictive analytics, and working with algorithms to analyze both fee-for-service and managed care data. In particular, Vermont Medicaid staff participate in a CMS Fraud Technical Advisory Group that meets on a regular basis to share information related to, among other things, data sources and ways to access data that could be used to help identify improperly paid claims or aberrant provider behavior. Likewise, Kentucky, Maryland, Tennessee, Texas, and Virginia staff had attended a Data Expert Symposium conducted by the institute in 2014, and North Carolina Medicaid program integrity staff had attended training related to the use of PECOS data in provider enrollment systems. Medicaid administrators in our study also described ways that their staff had benefited from information obtained from training sessions specifically related to the integration of managed care encounter data with their MMISs and other systems that support efforts to prevent and detect improper payments. For example, Texas Medicaid program integrity staff had attended training on Emerging Trends in Managed Care in February 2012 and a Program Integrity Partnership in Managed Care Symposium in March 2014. These sessions addressed topics related to encounter data such as timeliness, validity, and reliability; use of encounter data in data analytics; and collecting and editing encounter data using MMIS. Further, the selected states told us that the training sessions and collaborations facilitated by CMS and the Medicaid Integrity Institute had been valuable resources that supported their efforts to implement information systems for program integrity purposes. For example, Maryland's administrators stated that courses on data usage within analytical systems were helpful in their learning new strategies for developing algorithms that they used to identify potential improper claims. 
North Carolina and Tennessee administrators said that the institute had provided a venue for discussion between states regarding topics such as the implementation of data analytics and other advanced technologies, along with lessons learned related to the implementation of systems for program integrity purposes and algorithms that can be used to analyze data and help identify fraudulent provider billing patterns. They added that the Medicaid Integrity Institute has been the most helpful resource that CMS has provided in support of states' efforts to implement information systems for program integrity purposes. CMS is authorized by federal law to provide matching funds to assist states in their implementation and operation of systems to support the administration of their Medicaid programs, including program integrity efforts. Specifically, Title XIX of the Social Security Act provides for CMS to approve states' requests for federal matching funds to help finance the design, development, and installation of MMISs and other claims processing and information retrieval systems. States can request and receive funds to cover up to 90 percent of these costs, depending upon the extent to which their plans for implementing the systems meet certain technical specifications and requirements defined by CMS, including those defined for the implementation of system functionality to support efforts to prevent and detect improper payments. In addition, CMS is authorized to approve states' requests for federal matching funds to cover up to 75 percent of the costs associated with the operation and maintenance of the systems. In order to qualify for federal matching funds, Medicaid programs must first submit Advance Planning Documents that define, among other things, goals, objectives, and cost-benefit analyses of information technology projects relevant to specific business areas, such as provider and program integrity management. They also must certify to CMS that their MMIS and other system implementations meet a set of standards and conditions defined by the agency. For example, Medicaid programs must submit to CMS information specific to each business area, such as business objectives and system review criteria that address state-specific objectives and best practices defined by the agency. The documents and information are used by CMS for evaluation and certification of the states' MMISs and other information systems relevant to the administration of Medicaid. Each of the selected states had received federal matching funds for systems that support Medicaid administration. According to GAO's IT investment management framework, an organization's process for investing in information systems should include a structured and proven investment analysis, such as a cost reduction or avoidance, cost and benefit, or return on investment analysis. The results from such an analysis should reflect a consistent and repeatable approach for supporting IT investment decisions and ensuring that the organization is aware of the financial as well as other internal and external effects of operating and maintaining particular systems. All of the selected states asserted that the MMIS subsystems and data analytics, decision support, and other systems they implemented for program integrity purposes had helped improve the outcomes and efficiency of their efforts to prevent and detect improper Medicaid payments. However, most of the states could not provide any supporting evidence of their systems' effectiveness. 
Specifically, Medicaid administrators in seven states could not identify any steps that they had taken to quantify improvements in the outcomes, or otherwise assess the effectiveness, of program integrity efforts attributable to the use of their systems. For example, they did not measure financial benefits associated with increases in the amounts of money they saved or recovered as a result of improvements in their efforts to prevent and detect improper payments that were attributable to the implementation of information systems. The administrators of these seven state programs that had not taken steps to quantify financial benefits gave several reasons for not having done so. According to the administrators of one state, the amount of effort and time that would be required to calculate return on investment or cost avoidance, along with questionable accuracy of the results, outweighs the usefulness of the information. Another state administrator said that return on investment for a single system could not be calculated because a system is only part of the process for recovering funds lost to improper payments. Still, another told us that it is difficult to calculate a return on investment as a result of using the MMIS for program integrity purposes because the system performs other functions beyond those for program integrity; therefore, it is not possible to break out the costs and benefits of implementing a single function. However, among the 10 states in our study, 3 had identified ways to measure quantifiable benefits realized as a result of using systems designed to help prevent and detect improper payments. They did so by using information available from existing practices and reporting capabilities of systems that were implemented for program integrity purposes. Specifically, Medicaid administrators for the 3 states demonstrated practices for measuring financial benefits that could provide examples of ways to quantify improvements in outcomes resulting from the use of systems for program integrity purposes, along with lessons learned from the states’ experiences. These 3 states provided documentation discussing the results of efforts that had been taken to assess quantifiable benefits, achieved in the form of cost reduction or avoidance, from implementing their systems for program integrity purposes. For example: Medicaid administrators in California provided the results of a routine internal audit conducted in 2010 that documented cost reductions totaling about $2 million during a 6-month period in 2010, which they attributed to their ability to supply providers with system-generated reports of comparative billing information. According to the administrators, when these providers were made aware of other providers’ billing patterns and costs, they often modified behaviors to be consistent with others to whom they were compared. As a result, they subsequently billed the state’s Medicaid program for lower costs or for fewer services. Mississippi administrators provided a report generated by the state’s SURS that identified payments of $10 million for potential improper claims for a specific service in 2010. They stated that the information contained in the report had enabled the state to avoid additional costs in 2011. For example, the system identified payments to providers who had filed claims for mental health services for children when the actual services delivered were for day care, which was not billable to Medicaid. 
As a result, the Medicaid administrators notified providers that these claims were not acceptable, and then used SURS analytical and reporting capabilities to identify and document a subsequent decrease in the number of such claims filed by mental health providers. Ultimately, the administrators attributed cost avoidance totaling $7.5 million in 1 year to their use of SURS, based on a reduction in those types of improper payments from $10 million in 2010 to $2.5 million in 2011. Virginia administrators measured over $216 million in cost avoidance achieved during fiscal year 2013 as a result of prepayment claims review activities supported by their claims processing subsystem. For example, they provided calculations of cost avoidance based on the cost of service requests denied as a result of claims processing prepayment edits, multiplied by the number of denied requests. For its part, CMS has not required the states to identify and report on the outcomes and effectiveness of systems used for program integrity purposes. As mentioned previously, the agency requires states to document expected costs and benefits for systems when they submit requests for federal financial assistance with investments in new systems or functionality needed to support Medicaid program administration—i.e., the 90 percent matching funds. However, it does not require states to identify and report financial benefits or other quantifiable measures of effectiveness achieved as a result of using the systems in order to receive continued funding during the operations and maintenance phase. Therefore, CMS does not know whether the Medicaid systems implemented for program integrity purposes are effective in helping states avoid paying for providers' claims that may be improper or in recovering funds lost to payment of improper claims. As emphasized in our IT investment management framework, investments can outlive their usefulness and consume resources that begin to outweigh their benefits. Without identifying and measuring the financial benefits (i.e., money saved or recovered) that result from using their MMISs and other systems, CMS and state Medicaid administrators cannot be assured of the systems' effectiveness in helping to prevent and detect improper payments. Moreover, without having required states to institute consistent and repeatable approaches for measuring and reporting such outcomes, CMS Medicaid officials lack an essential mechanism for ensuring that the federal financial assistance that states receive to help fund the operations and maintenance of these systems is an effective use of resources to support Medicaid program integrity efforts. Even as the selected states rely on their systems to help prevent and detect improper Medicaid payments, five of the seven states in our study that administered Medicaid as both fee-for-service and managed care—North Carolina, Texas, Virginia, California, and Maryland—faced challenges that were specific to the use of their systems for ensuring the integrity of their managed care programs. These challenges introduced limitations in the states' ability to use their systems to analyze managed care encounter data because of the (1) content of the data reported, (2) quality of the data submitted, or (3) inconsistencies between the ways managed care and fee-for-service data values are defined. 
In particular, the encounter data reported by managed care organizations often lack content needed for the states' systems to conduct analyses that help prevent or detect improper payments. Specifically, while these states collect data from managed care organizations, their Medicaid administrators stated that the data do not include the details that the MMIS claims processing subsystem, SURS, or their additional data analytical systems need to prevent and detect improper payments; those systems were implemented to conduct pre- and post-payment reviews of fee-for-service claims data, which do include the needed detail. North Carolina and Texas pointed to challenges in their ability to use their systems to analyze managed care encounter data resulting from a lack of data content. For example, Texas administrators stated that encounter data submitted by their managed care organizations only indicate the reason for a patient's visit and whether the provider's claim was paid; they do not always include data such as diagnostic codes and the specific amounts paid for a visit—data that are needed for their systems to analyze paid claims to detect improper payments. Further, deficiencies in the quality of encounter data impede these states' ability to analyze the data to help prevent and detect improper payments for services delivered by managed care organizations. Medicaid administrators in California cited examples of data quality issues that presented challenges to their ability to use systems to support the integrity of Medicaid managed care. Specifically, California Medicaid administrators said that the encounter data being submitted by managed care organizations have historically been inaccurate, unreasonable, incomplete, and untimely. As a result, the data could not be effectively analyzed by the systems to identify patterns in claims and services that may help identify fraudulent or abusive provider behaviors and detect improper payments. Thus, any analyses of such erroneous data could not produce valid or reliable outcomes. Additionally, differences between the way some data values are defined for managed care encounter and fee-for-service claims data cause problems with using systems to prevent and detect improper payments for managed care services. For example, claims processing subsystems that were designed to process claims data for specific services covered by fee-for-service plans may not properly process encounter data for different services allowed under managed care (but not allowed by fee-for-service). Thus, some of the prepayment edits designed to analyze fee-for-service claims data can provide erroneous results when applied to managed care encounter data. Additionally, managed care encounter data analyzed by SURS during post-payment review could include estimated rather than actual costs associated with services delivered to patients, which would not reflect any providers' overcharges. Consequently, services for which providers overcharged would not be identifiable by SURS. Virginia administrators said they had experienced such challenges in using their MMIS claims processing subsystem and SURS for analyzing encounter data to support oversight of managed care plans because of the way encounter data are defined. 
They told us that differences between the ways fee-for-service and managed care data are defined introduce inconsistencies that may affect the outcomes of the systems’ analyses and, consequently, lead to challenges related not only to the state’s ability to conduct oversight of the managed care organizations’ activities, but also to the amount of work and effort required when updating the state’s MMIS. Additionally, Maryland administrators continued to experience challenges with using their SURS to conduct post-payment reviews of managed care data for this reason. For example, they said that encounter data do not typically include the same values or level of detail as claims data. Therefore, they cannot effectively analyze those data during post-payment review using their SURS, which was designed to process fee-for-service claims data. To address such challenges, one state—Tennessee—had taken actions that could offer lessons learned based on its having incorporated capabilities that enable the analysis of managed care encounter data using the state’s MMIS claims processing subsystem, SURS, and other systems. The Tennessee administrator described actions taken to address challenges with analyzing encounter data using Medicaid systems that had been designed to process fee-for-service claims data. Specifically, the state began to collect data from managed care organizations so that they could be analyzed by the MMIS claims processing subsystem, SURS, and data analytics systems to help prevent and detect improper payments. Tennessee’s Medicaid administrator told us that, to do so, the state defined the data required from the managed care organizations to include the content and level of detail that would be reported by fee-for-service claims, rather than the less detailed data the organizations had been reporting. Tennessee also required the organizations to report quality encounter data in a timely manner so that they could be analyzed by the MMIS and other systems. For example, when a managed care organization submits encounter data to the state, the MMIS is used to conduct both system and payment edits. If the data do not pass the edits, they are returned to the managed care organization for corrections to be made. If the data are not corrected and returned within 45 days, the organization is fined a certain amount for each day it is late. When the corrected data are returned, they are further reviewed by analysts who ensure the data needed to conduct analyses are present before the data are stored in the state’s Medicaid data warehouse. By requiring managed care organizations to report detailed data and taking steps to ensure that the data meet quality and timeliness standards, Tennessee’s Medicaid administrators are able to use their MMIS claims processing subsystem and SURS to process the encounter data to detect improper payments in the same way they would analyze claims data. As a result of its effort, the administrators said the state is able to identify potential improper payments made to providers. For its part, CMS had begun to take steps that could help states overcome challenges related to the collection of detailed, quality data needed to enable analyses of managed care encounter data using MMISs and other systems. For example, since 2010, CMS’s Center for Medicaid and Children’s Health Insurance Program Services, through the offerings of a contractor, has provided technical assistance to states. 
The contractor published documents and conducted webinars that addressed states’ need to collect the content and level of detail needed to conduct analyses of encounter data using their systems, along with steps that would need to be taken to ensure the quality and consistent definition of data reported by managed care organizations. In November 2013, the contractor published a toolkit on the Medicaid.gov website that identifies steps states should take to collect and validate data needed to conduct program integrity oversight of their Medicaid managed care organizations. Additionally, courses and symposia that the selected states reported attending included sessions on topics such as collecting and editing encounter data and applying fee-for-service methodologies to the automated analysis of managed care encounter data that are defined differently from claims data. As noted above, 5 of the 10 states reported that they had participated in one or more of the training and data sharing sessions conducted by CMS’s Medicaid Integrity Institute. Furthermore, the Medicaid Integrity Institute included in a March 2014 symposium a presentation by Tennessee’s Medicaid administrator, who described the state’s experiences and successes with defining, collecting, editing, integrating, and analyzing managed care encounter data using the functionality of the MMIS claims processing subsystem and SURS to conduct prepayment and post-payment reviews to prevent and detect improper payments for services delivered by Medicaid managed care organizations. By taking such actions, CMS has continued efforts to support information-sharing activities that could help states address challenges. States have implemented MMISs and other systems to support the administration of Medicaid, including efforts to ensure the integrity of the program. The 10 selected state Medicaid programs incorporated functionality required by CMS to help prevent and detect improper payments for Medicaid services, and had benefited from the support CMS provides in the form of data, technical guidance and training, or financial assistance. However, the effectiveness of the systems for program integrity purposes is unknown. Only 3 of the states had established methodologies for measuring financial benefits they had achieved based on the implementation of systems to help prevent and detect improper payments. While states are required by CMS to document expected benefits when they request financial support to implement new systems or functionality, they are not required to report actual benefits realized from using the systems when requesting additional funds to operate and maintain the systems. Therefore, the selected states and CMS do not have the information needed to determine whether the use of the systems is effective in helping Medicaid programs avoid paying or recover payments made for improperly filed claims. Until states are able and required to identify and measure quantifiable benefits achieved as a result of using systems to help ensure the integrity of both fee-for-service and managed care programs, CMS cannot determine whether the systems help states save money by improving the outcomes of efforts to prevent and detect improper payments in Medicaid. 
Consequently, the effectiveness of the systems will remain unknown as the federal government continues to provide potentially billions of dollars in financial assistance each year to support the implementation, operation, and maintenance of information systems intended to support Medicaid program integrity efforts. To ensure that the federal government's and states' investments in information systems result in outcomes that are effective in supporting efforts to save funds through the prevention and detection of improper payments in the Medicaid program, we recommend that the Secretary of HHS direct the Administrator of CMS to require states to measure quantifiable benefits, such as cost reductions or avoidance, achieved as a result of operating information systems to help prevent and detect improper payments. Such measurement of benefits should reflect a consistent and repeatable approach and should be reported when requesting approval for matching federal funds to support ongoing operation and maintenance of systems that were implemented to support Medicaid program integrity purposes. In written comments on a draft of this report (reprinted in appendix II), HHS stated that it concurred with our recommendation. Further, in its comments, HHS stated that it works with state Medicaid programs to determine the effectiveness of systems that support program integrity functions. The department added that it had taken recent steps to help ensure that states provide post-implementation data to document quantifiable benefits, and is taking additional steps to determine effective methods for continuing to evaluate outcomes of Medicaid program integrity information technology investments. While the actions that HHS described could be beneficial, our study found that the department and CMS had not defined a consistent and reliable approach for determining quantifiable benefits achieved by states before approving the use of federal funds to finance the ongoing operations of systems intended to support program integrity efforts. Thus, we believe the full implementation of our recommendation is important to ensure that federal and state investments in information systems result in outcomes that help save funds through the prevention and detection of improper payments in the Medicaid program. HHS also provided technical comments, which we incorporated into the report as appropriate. Additionally, we obtained and, as appropriate, incorporated technical comments from the state Medicaid administrators who participated in our study. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS and interested congressional committees. In addition, the report will be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact Valerie C. Melvin at (202) 512-6304 or melvinv@gao.gov, or Carolyn L. Yocom at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. 
The objectives of our review were to determine (1) the types and implementation status of the information systems used by states and territories to support Medicaid administrators' efforts to prevent and detect improper payments to providers; (2) the extent to which the Centers for Medicare & Medicaid Services (CMS) is making available funds, data sources, and other technical resources to support Medicaid programs' efforts to implement systems that help prevent and detect improper payments to providers, and the effectiveness of the states' systems; and (3) key challenges, if any, that Medicaid programs have faced in using IT to enhance program integrity initiatives, and CMS's actions to support efforts to overcome these challenges. To address the objectives, we selected a nonprobability, nonrandom sample of the 50 states, 5 U.S. territories, and the District of Columbia. To select the states for our sample, we obtained data on states' expenditures for systems implementation and program integrity activities for fiscal years 2004 through the first quarter of fiscal year 2014. We collected the data from a CMS database to which the states are required to report Medicaid program expenditures for which they request federal reimbursements. We assessed the reliability of the CMS data by reviewing prior GAO work that had accessed and used the data and prior determinations that the data provided reliable evidence to support findings, conclusions, and recommendations. We also held discussions with CMS officials knowledgeable of the specific types of data recorded in the database. Based on how we intended to use the information, we determined that the data were sufficiently reliable for the purpose of selecting states for our study. We sorted the data we obtained based on states' total expenditures for development and maintenance of their Medicaid Management Information Systems (MMIS) and the reported administrative costs for program integrity. We grouped the states, territories, and the District of Columbia according to low, medium, and high levels of spending based on their expenditures reported from fiscal year 2004 through the first quarter of fiscal year 2014. For example, those in the low spending group were the three states and two territories reporting the lowest expenditures, from $574,836 to $66,497,668; those in the medium spending group were the five states that reported expenditures, based on the median of all amounts reported, from $202,724,728 to $240,891,446; and those in the high spending group were the five states that reported the highest expenditures, from $825,026,677 to $2,578,096,036. We calculated the median expenditure for each group and identified the two states directly above and the two states directly below each median, which identified 12 states. Of the 12 states that we identified based on the expenditure data we collected and assessed, we selected 10 states—three low-expenditure states (U.S. Virgin Islands, Tennessee, and Vermont), four middle-expenditure states (Kentucky, Mississippi, Virginia, and Maryland), and three high-expenditure states (North Carolina, Texas, and California). Based on our assessment of the extent to which they met the selection criteria defined within our methodology, we determined that any information collected from these states would be sufficient for our use. 
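A minimal sketch of the grouping and median-based selection described above is shown below. The program names, expenditure totals, and grouping thresholds are hypothetical placeholders, not the CMS expenditure data or analysis used in this review.

```python
# Illustrative sketch of the median-based selection described above; all
# names, dollar figures, and thresholds are hypothetical placeholders.
import statistics

def around_median(group):
    """Return the two records directly below and the two directly above the group's median."""
    ordered = sorted(group, key=lambda rec: rec[1])
    median = statistics.median(rec[1] for rec in ordered)
    below = [rec for rec in ordered if rec[1] < median][-2:]
    above = [rec for rec in ordered if rec[1] > median][:2]
    return below + above

# Hypothetical totals: MMIS development/maintenance plus program integrity costs.
programs = [
    ("Territory A", 600_000), ("Territory B", 5_000_000), ("State C", 20_000_000),
    ("State D", 45_000_000), ("State E", 66_000_000),                      # low spending
    ("State F", 203_000_000), ("State G", 215_000_000), ("State H", 225_000_000),
    ("State I", 235_000_000), ("State J", 241_000_000),                    # medium spending
    ("State K", 825_000_000), ("State L", 1_100_000_000), ("State M", 1_500_000_000),
    ("State N", 2_000_000_000), ("State O", 2_578_000_000),                # high spending
]

low    = [p for p in programs if p[1] < 100_000_000]
medium = [p for p in programs if 100_000_000 <= p[1] < 500_000_000]
high   = [p for p in programs if p[1] >= 500_000_000]

candidates = around_median(low) + around_median(medium) + around_median(high)
print([name for name, _ in candidates])   # 12 candidate programs before final screening
```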
We then developed and administered a questionnaire to collect information regarding the selected states’ use of systems to support program integrity activities in their Medicaid programs, the technical support they received from CMS, and any challenges the states faced regarding their efforts to implement information systems for program integrity purposes. The results of our study are not generalizable to Medicaid programs administered by all states, territories, and the District of Columbia. To address the first objective, we analyzed information taken from the questionnaire responses about the selected states’ program integrity efforts and supporting systems. We also obtained and analyzed documentation describing the types of systems they used to analyze provider and claims data in support of program efforts to prevent and detect improper payments. To determine the status of the systems, we examined relevant project management documents, including project plans and status reports, that provided information about systems implementation dates and dates any significant enhancements and replacements of the states’ information systems were completed or planned. For the states that were planning significant enhancements, updates, or replacements of systems, we also reviewed requests for proposals issued to potential contractors, along with statements of work for ongoing initiatives, to identify the types of changes or enhancements that the programs had planned to implement. To address the second objective, we obtained and examined federal legislation, along with relevant agency plans, to identify legal and program requirements for CMS to provide financial support, data, and other technical resources to help states implement information systems for program integrity purposes. We included in our scope resources such as agency guidance and training provided to states and examined documentation that described the funding, data sources, systems, and other technical resources intended to help state Medicaid administrators implement the system functionality they need to prevent and detect improper payments. To determine how states use the resources, we examined information from the questionnaires and analyzed documentation the states provided describing their use of federal funds and the data sources, systems, and technical training and guidance from CMS. We identified how each of the states used federal financial support to develop and implement new systems, operate existing systems, and fund the staff who use the systems in support of program integrity efforts. We also identified various ways the states integrated the data provided by CMS within their IT environment and individual systems, along with the types of training opportunities and technical support the states used to improve their ability to develop and enhance information systems that effectively support program integrity analysts’ efforts. We examined states’ responses from the questionnaire to determine the extent to which the financial, data, and other technical resources provided by the agency were reported to be useful to states in their efforts to implement new and update existing information systems that support the prevention and detection of improper payments. 
To describe the extent to which the use of the systems was effective in improving outcomes of the states' program integrity initiatives, we reviewed best practices identified in our IT investment management framework for agencies' management of IT portfolios, including practices for conducting investment analyses and determining financial and other effects of maintaining systems. We obtained from Medicaid program administrators documentation such as performance plans and audit reports regarding practices for determining the effectiveness of the different types of systems. We identified state programs that had developed methodologies and practices for measuring any quantifiable benefits realized from the use of specific systems. From those states we collected additional documentation that identified ways the states had measured quantifiable benefits, such as return on investment and cost avoidance, attributable to the use of the systems, and compared them to practices identified by our IT investment framework. Specifically, we examined methods and calculations used to determine measures such as amounts of payments withheld because of errors detected by the systems during prepayment review and amounts of improper payments recovered as a result of post-payment review activities supported by the systems. For the states that did not measure quantifiable benefits, we discussed with the Medicaid administrators their reasons for not doing so or the factors that prevented them from doing so. We used the information collected from the questionnaires and document reviews to develop additional questions and conducted interviews with state Medicaid officials. Finally, to address the third objective, we analyzed information from our questionnaire and interviews about states' experiences with implementing information systems for program integrity purposes and any challenges they faced in doing so. We identified challenges that were relevant to the role that CMS plays in supporting states' efforts—i.e., those other than challenges related to state-based issues such as local funding levels, internal data sharing between state entities, and economic conditions. We obtained and reviewed CMS documentation, such as the Medicaid Integrity Program's descriptions and plans that discussed activities planned and initiated by the agency's Medicaid program integrity officials to support states' administration of Medicaid, and compared the intent of such activities to challenges the states identified. We also examined agency schedules and training curricula to determine whether recent and planned training and technical assistance sessions were relevant to challenges the states identified. To identify any other actions CMS had taken or planned to help states address any such challenges, we examined annual reports the agency had provided to Congress that described steps taken over the previous year to address goals and objectives of the Medicaid Integrity Program. In addition, we held discussions with CMS Medicaid officials regarding their efforts and intent to address any known challenges associated with states' efforts to implement information systems for program integrity purposes. For each of the objectives, we supplemented the information gained from our documentation reviews by holding discussions with CMS officials and state Medicaid program administrators, including those responsible for implementing information systems used to help program integrity analysts prevent and detect improper payments of Medicaid claims. 
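As an illustration only, the kinds of measures described above can be combined into a simple return-on-investment figure. The dollar amounts and formula below are assumptions for illustration and do not represent any state's actual methodology.

```python
# Illustrative return-on-investment calculation for a program integrity system,
# using the kinds of measures described above. All dollar amounts are
# hypothetical; actual state methodologies may differ.
prepayment_withheld   = 12_400_000   # payments denied by prepayment edits
postpayment_recovered =  3_100_000   # overpayments recovered after SURS review
system_cost           =  4_000_000   # annual cost to operate and maintain the system

quantifiable_benefit = prepayment_withheld + postpayment_recovered
roi = quantifiable_benefit / system_cost

print(f"Benefit: ${quantifiable_benefit:,}  ROI: {roi:.1f}x")
```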
We conducted this performance audit from November 2013 to January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, Catina R. Bradley, Assistant Director; Teresa F. Tucker, Assistant Director; Melina I. Asencio; Nicholas A. Bartine; Christopher G. Businski; Debra M. Conner; Rebecca E. Eyler; Stuart M. Kaufman; Thomas E. Murphy; and Daniel K. Wexler made key contributions to this report.
Medicaid is a joint federal-state program that provides health care coverage to certain low-income individuals. The program is overseen by CMS, while the states that administer Medicaid are tasked with taking actions to ensure its integrity. Such actions include implementing IT systems that provide program integrity analysts with capabilities to assess claims, provider, beneficiary, and other data relevant to Medicaid; and supporting efforts to prevent and detect improper payments to providers. GAO was asked to review states' implementation of IT systems that support Medicaid. GAO determined (1) the types and implementation status of the systems used by states to support program integrity initiatives; (2) the extent to which CMS is making available data, technical resources, and funds to support Medicaid programs' efforts to implement systems, and the effectiveness of the states' systems; and (3) key challenges that Medicaid programs have faced in using IT to enhance program integrity initiatives, and CMS's actions to support efforts to overcome them. To do this, GAO analyzed information from 10 selected states covering a range of expenditures on such systems, reviewed program management documentation, and interviewed CMS officials. In the 10 selected states reviewed, GAO found the use of varying types of information technology (IT) systems to support efforts to prevent and detect improper payments. All 10 states had implemented a Medicaid Management Information System (MMIS) to process claims and support their program integrity efforts, and 7 had implemented additional types of systems to meet specific needs. Three states were operating MMISs that were implemented more than 20 years ago, but 7 states had upgraded their MMISs, and 2 of those had done so in the past 2 years. In addition, 7 states had implemented other systems, such as data analytics and decision support systems that enabled complex reviews of multiple claims and identification of providers' billing patterns that could be fraudulent. While the MMISs and other systems implemented by the 10 states were designed primarily for administering Medicaid as a fee-for-service program, in which providers file claims for reimbursement for each service delivered to patients, 7 of the 10 states also administered managed care plans, under which provider organizations are reimbursed based on a fixed amount each month, and 1 state administered Medicaid exclusively as managed care. Officials with the 9 states that administered fee-for-service plans said they used their systems to help conduct pre- and post-payment reviews of claims. All 10 states received technical and financial support from the Centers for Medicare & Medicaid Services (CMS) for implementing the systems. For example, they accessed the agency's databases to collect information that helped determine providers' eligibility to enroll in Medicaid. In addition, all 10 states had participated in training, technical workgroups, and collaborative sessions facilitated by CMS. With the agency's approval, the 10 states received up to 90 percent in federal matching funds to help implement systems. All 10 states reported that agency support, particularly training, helped them to implement systems needed to prevent and detect improper payments. However, the effectiveness of the states' use of the systems for program integrity purposes is not known. 
CMS does not require states to measure or report quantifiable benefits achieved as a result of using the systems; accordingly, only 3 of the 10 selected states measured benefits. Without identifying and measuring such benefits (i.e., money saved or recovered) that result from using MMISs and other systems, CMS and the states cannot be assured of the systems' effectiveness in helping to prevent and detect improper payments. Moreover, without requiring states to institute approaches for measuring and reporting such outcomes, CMS officials lack an essential mechanism for ensuring that the federal financial assistance that states receive to help fund these systems effectively supports Medicaid program integrity efforts. Five of the 10 states faced challenges in using their systems for managed care program integrity, challenges introduced by the content, quality, and definitions of the data reported on services provided. However, 1 state had taken steps to overcome such challenges and had integrated data and implemented functionality needed to review managed care data both prior to and after payment. For its part, CMS had conducted training related specifically to collecting and analyzing these data to help prevent and detect improper payments in the Medicaid program. GAO recommends that CMS require states to measure and report, using a consistent and repeatable approach, the quantifiable benefits of program integrity systems when requesting federal funds to operate and maintain those systems. The agency agreed with the recommendation.
The federal government's vast real property inventory reflects the diversity of agencies' missions and includes office buildings, prisons, post offices, courthouses, laboratories, and border stations. The Federal Real Property Profile (FRPP) is a database of owned and leased space held by executive branch agencies. It is maintained by GSA on behalf of the FRPC, although FRPC controls access to the data. In 2010, FRPP data indicated that 24 executive branch agencies held about 3.35-billion square feet of building space. These agencies reported that 79 percent of the total reported building space was federally owned, 17 percent was leased, and 4 percent was otherwise managed. The eight agencies we reviewed—USDA, DOD, DOE, DHS, DOI, VA, GSA, and USPS—reported holding over 3.32-billion square feet of building space, or about 99 percent of reported square footage. GSA and USPS are the largest civilian holders of federally owned property, holding the largest amounts of space, by square footage, of the civilian agencies that we examined. As noted previously, we excluded much of DOD's property from the scope of our review because of the security requirements of traditional military bases, which would make colocation with other agencies unlikely. GSA and USPS together hold more square footage—almost 660-million square feet—than the other agencies we reviewed, excluding DOD, combined—over 454-million square feet. (See fig. 1.) Additionally, both agencies have a wide national presence—GSA-held properties exist in over 750 markets and USPS-held property is in almost 36,000 cities and towns. Federal agencies, particularly GSA in its role as broker and property manager to the civilian portion of the U.S. government, rely on costly leasing, and the number of federal government leases has increased in recent years. The civilian federal agencies we reviewed held leases in close to 41,000 assets covering nearly 324-million square feet of space, with GSA and USPS leasing the most space. Nearly all of GSA's leases are for other tenant agencies—for example, its four largest customers in the leased inventory are the Department of Justice, DHS, the Social Security Administration (SSA), and the Department of the Treasury (Treasury)—based upon those agencies' identified needs. According to GSA's annual portfolio report, since fiscal year 2008, its leased inventory has experienced faster growth than its owned inventory. We have reported that over time GSA has relied heavily on operating leases to meet new long-term needs because it lacks up-front funding needed to purchase buildings or space. In addition, GSA has reported operational losses related to leasing, once indirect overhead expenses have been allocated, in recent years. GSA is authorized by law to acquire, manage, utilize, and dispose of real property for most federal agencies. GSA is able to enter into lease agreements for up to 20 years that the Administrator of GSA considers to be in the interest of the federal government and necessary to accommodate a federal agency. GSA uses this authority to lease space on behalf of many federal government agencies. 
In 2004, the administration added managing federal real property to the President's Management Agenda, and the President issued an executive order, applicable to 24 executive departments and agencies, that (1) established FRPC and (2) required FRPC to work with GSA to establish and maintain a single, comprehensive database describing the nature, use, and extent of all federal real property held by executive branch agencies, except when otherwise required for reasons of national security. FRPC worked with GSA to create the FRPP to meet this requirement. FRPC is chaired by the Deputy Director for Management of OMB and is composed of Senior Real Property Officers from the 24 executive departments and agencies, the Controller of OMB, the Administrator of GSA, and any other full-time or permanent part-time federal officials or employees as deemed necessary by the Chairman of the Council. The order does not apply to USPS, and FRPC does not work directly with USPS on the management of its real property. These efforts notwithstanding, we have previously reported that the federal government continues to face a number of challenges to effectively managing its real property. In particular, we have reported on challenges to disposing of excess properties, making better use of properties that are underutilized, and reducing overreliance on leasing. USPS, which is an independent establishment of the executive branch, is authorized to sell, lease, or dispose of property and is exempt from most federal laws dealing with real property and contracting. Although declining mail volume and changes to its operations have resulted in excess capacity and facility space throughout the postal network, our recent work has shown that USPS faces challenges, such as legal restrictions and local stakeholder influences, that have limited its ability to close postal facilities in order to restructure its retail and processing network. For example, USPS has often faced resistance from affected employees, communities, and elected officials when it has attempted to consolidate its processing operations and networks or close mail-processing facilities because of concerns about possible effects on service, employees, and communities. USPS recently announced that it will maintain existing retail locations, with modified operating hours. As a result of these issues, USPS has more space than it needs. Our recent work has also shown that USPS faces a deteriorating financial condition. For example, at the end of fiscal year 2011, USPS had incurred a $5.1-billion loss for the year, had $2 billion remaining on its $15-billion borrowing limit, and projected that it would be unable to make its scheduled $5.5-billion retiree health benefits payment to the federal government. In addition, USPS was conceived as a financially self-sufficient entity, but its revenues do not cover costs at about 80 percent of its retail facilities. The federal government owns facilities that are underutilized in locations where it also leases space for different purposes. This is particularly true for USPS, as declining mail volume and changes in operations have freed space in many owned facilities. While there are problems with using governmentwide data to identify underutilized space, as will be discussed later in this report, we observed underutilized space held by multiple federal entities in the case study markets we visited for this report. 
For example, in each case study market, we observed one or more cases of vacant or underutilized space in post offices, including both offices and space on the processing floor, that officials said could be reconfigured and physically separated from USPS operations (see fig. 2). In some cases, spaces within these underutilized owned properties could be used by other government agencies. According to a recent report by the USPS Office of Inspector General (OIG) related to post office utilization, excess floor and retail window space exists nationwide that could be used by other government agencies or used to perform transactions on behalf of other government agencies. The USPS OIG also conducted several regional studies examining excess USPS space and noted a correlation between space leased by GSA and the ability of USPS to significantly accommodate federal space needs. For example, one of those studies estimated that of the USPS districts reviewed, USPS excess space may accommodate 147 of 175 (or 84 percent) of agencies' current federal leases, and noted that GSA paid considerably more per square foot than the value assigned to USPS space. However, the Inspector General (IG) did not determine whether the excess space identified was usable for sharing with other agencies, in part because USPS systems and policies do not identify usable areas, and noted that more information would be necessary to determine whether USPS's excess space would be suitable for another government tenant. We observed several attributes that could affect using underutilized space for colocation. These attributes included size, location, and condition, which would likely render some spaces more appropriate for sharing than others. Much of the underutilized space we observed was small—only several hundred to a few thousand square feet. We also observed underutilized space that was not contiguous. Both of these attributes could limit those spaces' suitability for effective colocation. Furthermore, underutilized space that we observed varied in terms of its location within facilities. For example, GSA and VA officials described having some space that is less desirable to potential tenants. Although we observed generally high occupancy in GSA's multi-agency federal buildings, GSA officials showed us some space they said is not easily leased because of its location, such as a first-floor interior office bordering the building's maintenance hallways or windowless basement spaces, and noted that these extra spaces can remain in GSA buildings when an agency does not require the entirety of a vacant space. VA officials noted similar issues, in that empty or available space at its campuses is often located in buildings surrounded by other VA buildings, which can make it harder for outside parties to access and use. Additionally, we observed underutilized space in a wide range of conditions, from rundown to newly renovated, which could also affect colocation options. GSA officials said that a variety of physical aspects of the space may factor into the desirability of the space for colocation, including ceiling height, support column size, lighting, and windows. For example, figure 3 shows interior office space in a GSA-held federal building in downtown Dallas that GSA officials told us has been vacant for years, a vacancy that they attributed to the lack of natural light and the large support columns that make it difficult to place workstations. 
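As a rough illustration of the kind of usability screening the IG noted would be needed, the sketch below filters hypothetical space records on the attributes described above (size, contiguity, condition, and whether the space can be physically separated). The record fields, thresholds, and facility names are assumptions for illustration, not actual USPS or FRPP data elements.

```python
# Illustrative screening filter over hypothetical space records; field names
# and thresholds are assumptions, not actual USPS or FRPP data elements.
from dataclasses import dataclass

@dataclass
class SpaceRecord:
    facility: str
    usable_sqft: int
    contiguous: bool        # is the vacant area one continuous block?
    condition: str          # e.g., "good", "fair", "poor"
    separable: bool         # can it be physically separated from postal operations?

def usable_for_colocation(space, needed_sqft):
    """Very rough screen: enough contiguous, decent-condition space that can be walled off."""
    return (space.usable_sqft >= needed_sqft
            and space.contiguous
            and space.separable
            and space.condition in ("good", "fair"))

spaces = [
    SpaceRecord("Post office A", 900,   True,  "good", True),
    SpaceRecord("Post office B", 6_500, True,  "fair", True),
    SpaceRecord("Post office C", 4_000, False, "poor", False),
]
print([s.facility for s in spaces if usable_for_colocation(s, needed_sqft=5_000)])
```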
Federal officials we spoke with indicated that colocation could result in improved government operations through increased efficiencies for service access or delivery to the public in some cases. For example, VA officials stated that their incentive for colocation is to expand veterans' medical care efficiently, and that sharing space with other agencies with similar missions, such as the U.S. Army, could help achieve that goal and avoid duplicating medical capacity. Moreover, according to a recent report by the USPS OIG, the Postal Service would benefit from sharing post office space with other government entities while generating revenue and increasing efficiency by expanding citizen access to government operations. For example, USPS currently has interagency agreements to provide non-postal government services, such as accepting passport applications and Selective Service registration forms. DHS officials discussed broadly how DHS is often colocated with USDA, the Drug Enforcement Administration (DEA), and the Federal Bureau of Investigation (FBI) because those agencies have complementary missions to certain DHS operations. These colocations take place in both GSA-held and DHS-held space. Although these efforts are not interagency, Interior officials described how the agency has tried to colocate its various bureaus for the sake of agency synergies, especially since the public often does not distinguish among the roles of the bureaus. They noted that integrating services in space or function is a good practice that could occur across agencies. USDA officials also said that the colocation of the Farm Service Agency (FSA) and the Natural Resources Conservation Service (NRCS) provided synergies because they are able to share databases and pass information more readily between the two entities. Federal officials also said that, under certain circumstances, colocation could result in cost savings or avoidance for the federal government. For example, DHS officials described the department's examination of colocation opportunities within the department, and cited one case it studied where cost savings could result from productivity gains, reduced redundancy, and cost avoidance. USPS officials in multiple locations noted USPS would benefit from revenue from a federal agency tenant. For example, USPS could share underutilized floor and retail window space with other government agencies, generating revenue to offset some building costs. Additionally, GSA officials described the motivation to accomplish savings from consolidation and colocation as responsible asset stewardship. While federal officials seemed to agree that colocation can produce efficiencies, data limitations, such as the lack of a national, multi-agency asset-management tool as discussed in the next section, make it difficult to estimate the financial and nonfinancial benefits from colocating federal agencies, because the quality of any estimate is a direct function of the input data. Moreover, colocation will not always be more cost-effective than leasing in the short run, particularly if the costs to reconfigure owned space are high. For example, DOD officials said that it cost $20 million to renovate a vacant 70,000-square-foot warehouse within the Naval Support Facility in suburban north Philadelphia and move the Navy Human Resources Service Center there from leased commercial space. They estimated that the payback period for the move would exceed 30 years. 
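A simple payback calculation of the kind implied by the Navy example above can be sketched as follows; the annual lease savings figure is an assumed placeholder, not a Navy or GAO estimate.

```python
# A rough payback-period check, assuming the renovation cost is recovered
# through avoided commercial lease payments. The annual lease savings figure
# is a hypothetical placeholder, not a Navy or GAO estimate.
renovation_cost      = 20_000_000   # cost to renovate the vacant warehouse
annual_lease_savings =    650_000   # assumed rent avoided by leaving leased space

payback_years = renovation_cost / annual_lease_savings
print(f"Simple payback: {payback_years:.0f} years")   # roughly 31 years at this assumption
```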
Information on cost and service delivery improvements from colocations can help agencies decide whether to proceed with colocations and aid agencies in evaluating completed colocations. Generally, however, agencies lack the tools—such as a standardized approach for quantifying costs and benefits—to determine whether, and to what extent, colocations will generate or are generating intended savings or financial benefits, metrics that are key to helping agencies manage their resources. Moreover, some federal officials indicated that quantitatively measuring the nonfinancial results of colocations, such as intergovernmental collaboration, was difficult because such results are hard to monetize and can be subjective. We found that agencies generally lacked the tools to measure the costs and benefits of colocation efforts. Our work on capital decision making has shown that establishing an analytical framework for review, approval, and selection of projects; evaluating a project's results; and incorporating lessons learned into the decision-making process are all key principles and practices of such an effort. Establishing a framework with a mixture of financial and nonfinancial benefits, such as service delivery improvements, allows entities to better evaluate performance. Agency officials said that greater collaboration—through strategic partnerships among federal agencies targeted to meet specific needs and a formal local coordination mechanism—could mitigate some administrative, financial, and data challenges to colocation. Agencies' varying real property-management authorities can create administrative challenges, which officials said could be addressed through a strategic partnership with GSA. Acquiring the needed up-front financing for repair or renovation remains challenging for agencies, although some agencies have secured up-front financing through partnerships with private entities. Agencies face challenges identifying colocation opportunities because of limitations with available data and the lack of a coordination mechanism. Officials from a few agencies suggested that structured local or regional coordination could best identify opportunities where the missions of various agencies could be “matched” to appropriate space because of local and regional federal officials' more detailed knowledge of local needs, conditions, and opportunities. Agencies have varying real property management authorities related to colocation, including the ability to share property and retain the proceeds, and this variation can create administrative challenges for agencies seeking to increase inter-agency colocation opportunities. For example, USPS can share its property with private or government entities and retain the proceeds, but other agencies may not be able to do so. Officials from one agency reported that it is allowed, under certain circumstances, to share government-owned real property, but it is not allowed to retain the proceeds unless provided for in its annual appropriation. In addition, even if an agency has the authority to share real property, it may not be well prepared to handle tasks such as setting lease rates and managing the financial arrangements for renovations. For example, GSA officials said some agencies do not know what rates to charge for the space they would share with other agencies. 
Moreover, Navy officials said agencies with the authority to share properties can face administrative challenges managing the various sources of funds potentially needed should extensive renovations be necessary to bring properties up to usable condition. Officials from six agencies as well as commercial real estate officials said that to overcome some of these administrative challenges and improve colocation efforts, agencies could address specific challenges through a strategic partnership with GSA. They said GSA has administrative structures and experiences that could benefit less-experienced agencies. For example, GSA, as the federal government's property manager, already possesses the capability to market and price properties and manage leases on a large scale. Our previous work on the Government Performance and Results Act (GPRA) also supports the idea that strategic partnerships could be beneficial to overcoming these challenges. We have reported that cross-government agency collaboration can produce more public value than can be produced when agencies act alone. Specifically, agencies can enhance and sustain their collaborative efforts by engaging in a variety of practices, including establishing policies and procedures to operate across agency boundaries by, for example, developing interagency handbooks that define common standards, policies, and procedures. During our review, officials from four agencies suggested that increased collaboration through some of these practices could help mitigate some of the administrative challenges of colocation. As a potential approach for these types of strategic partnerships, OMB officials described GSA's effort to work with selected agencies to develop strategic plans for future property needs and identify potential areas for consolidation. USPS has some experience collaborating with other agencies on real property issues, and as it explores further options to better utilize excess space, strategic partnerships with other agencies, particularly GSA, could help USPS overcome administrative challenges that may be impeding colocation. A February 2012 USPS OIG report said USPS has experience with intergovernmental collaboration because it already shares space in federal buildings and conducts transactions for other federal entities. For example, the Mansfield, Ohio, federal building hosts a post office as well as offices of SSA, the Internal Revenue Service (IRS), and the U.S. Department of Labor. The report noted that because many postal facilities are near many GSA-leased properties, sharing space could potentially lower overall federal lease costs. The report recognized USPS's need to optimize its network through internal consolidations and closures, but said USPS could use its underutilized resources better through external collaboration. USPS management agreed with the OIG recommendation to develop and implement a strategy to address these findings. (United States Postal Service, Office of Inspector General, 21st Century Post Office: Opportunities to Share Excess Resources – Management Advisory, DA-MA-12-003 (Arlington, VA: February 9, 2012).) During our site visits, we found federally owned properties that could be made available for leasing; however, many of the spaces would need substantial repair or renovation, and acquiring the needed up-front financing remains challenging for agencies. 
For example, we saw several USPS properties in which the available space required substantial renovation to replace old carpet, peeling paint, and outdated fixtures, and to repair water damage (see fig. 4). However, USPS's deteriorating financial condition may limit the costs it can incur to renovate its facilities prior to sharing them with other agencies. We also observed spaces that would need potentially costly specialized repairs or renovations. For example, some of the U.S. Navy properties we visited at the mixed-use Philadelphia Navy Yard (see fig. 5) would need asbestos abatement and water damage repair. Navy officials told us that the properties could be leased from the Navy by other government agencies, and that some agencies have made inquiries to do so. However, they said that the agencies were alarmed by the complexity and costs of repairs, which effectively ended any further consideration of the properties for colocation. Had any agencies pursued leasing the properties, Navy officials said they likely would lack sufficient up-front financing. In addition to general and specialized renovation costs, Navy officials said DOD Unified Facilities Criteria (UFC) requirements prescribe certain antiterrorism measures, such as blast-proof windows and security gates, which can further elevate the costs of renovations to DOD-owned buildings, both on and off-base. The up-front costs of renovations present a challenge to GSA that hinders its colocation efforts. GSA regional officials said that financing renovations is the most serious challenge they face in improving the utilization of their assets. Regional officials said they have considered acquiring vacant USPS facilities that could support colocation, but have been reluctant to do so in part because of the up-front cost of the extensive renovations needed to make the properties usable. As we have previously reported, in recent years budgeting and appropriations decisions, made by the executive branch and Congress, have limited the amount of resources made available from the Federal Buildings Fund to GSA to fund real property operations, acquisition, and maintenance. GSA headquarters officials told us that these limitations make it challenging for the agency to effectively manage its portfolio and result in delayed or cancelled projects. In downtown Philadelphia, GSA had considered purchasing USPS's large, underutilized 30th St. Mail Processing and Distribution Center Station, but was dissuaded by the substantial cost of the renovation that would be needed for the building, which was constructed in 1935. Instead, in April 2007, USPS signed a deal with a private developer to renovate nearby facilities and, in August 2007, signed a memorandum of understanding with GSA for private redevelopment of the building into IRS offices. In September 2008, USPS moved its retail operations and distribution unit out of 30th St. into its new facilities. When the 30th St. renovation was complete, IRS moved into the property (see fig. 6). In exchange for financing the $184-million renovation, the developer received all interest and rights from USPS for the 30th St. land, building, and a nearby 1,661-space parking garage, and until August 25, 2030, will receive lease payments from GSA, which in turn will receive rent from IRS. As we have previously reported, renovations financed by the private sector will generally cost more than those financed by Treasury borrowing. 
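To illustrate why privately financed renovations generally cost more than those financed through Treasury borrowing, the sketch below amortizes the same renovation cost at two assumed interest rates. Both rates and the repayment window are illustrative assumptions, not terms of the 30th St. transaction.

```python
# A minimal sketch: the same renovation amortized at two different rates.
# Both interest rates and the repayment window are assumptions for
# illustration, not actual terms of the 30th St. transaction.
principal = 184_000_000   # renovation cost
years     = 22            # roughly the 2008-2030 repayment window

def annual_payment(principal, rate, years):
    """Level annual payment that amortizes the principal at the given rate."""
    return principal * rate / (1 - (1 + rate) ** -years)

treasury = annual_payment(principal, 0.04, years)   # assumed Treasury borrowing rate
private  = annual_payment(principal, 0.07, years)   # assumed private financing rate
print(f"Extra cost per year: ${private - treasury:,.0f}")
```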
The only national-level, multi-agency real property database—the FRPP—was not designed to be an active asset management system. As such, it does not possess the level of detail necessary to support the identification of colocation opportunities. The FRPP can provide basic descriptive information about the government's federal property holdings, such as address, square footage, and facility type; however, colocation decisions would require more data elements than would be practical to add to the FRPP. For example, the FRPP provides square footage information, but it does not provide information on orientation or use of space. We visited a DHS-held site where most of the facility was underground and much of the unoccupied space was used by environmental systems such as air filtration units and pumps that could not be removed. (See fig. 7.) The FRPP does not reveal that the facility is underground, nor does it convey the substantial challenges to reconfiguring the space. Similarly, we found that one building under renovation was characterized as “underutilized” in the FRPP. While not technically incorrect, characterizing this space as underutilized can be misleading because the simple utilization designation does not necessarily indicate whether the space can be immediately occupied or used for colocation. As a result, local and regional federal officials are generally better positioned than headquarters officials to manage the colocation process because of their more detailed knowledge of local needs, conditions, and opportunities. We found that detailed property knowledge necessary to facilitate colocations was concentrated at the regional and local levels, rather than at headquarters. When we asked for detailed information about specific properties, we were referred to local and regional federal officials, who were knowledgeable about specific sites and facilities. Some headquarters officials were familiar with attempts at colocation and could describe overall situations, but they were not the primary contacts for these efforts, nor could they readily describe the properties' attributes or local office needs. In general, local and regional federal officials said that they knew property details—such as space configuration, access routes, and parking availability—that would be important for facilitating colocations. In addition, FRPC, which created the FRPP database, is a national, policy-oriented body. As such, the scope of FRPC's mission does not include managing the local-level negotiations that colocation would require. The detailed property knowledge held by local federal officials is important for ensuring an appropriate match between the agency that owns the property and the agency that would lease space. Officials from many agencies reported that matching the location of available property to the mission and security needs of the agency searching for space is an important consideration; for example, DOE's need for isolated, remote sites as compared to VA's interest in sites readily accessible to veterans. However, officials noted that there are no universal requirements regarding their respective agencies' property needs—rather, the property needs vary across the country in response to mission needs. In some cases, agencies have operational requirements that would make colocation inappropriate if the potential tenant and potential lessor did not share the same mission needs. 
For example, USPS officials noted the Postal Service’s need to keep mail secure and separate from potential tenant agencies or members of the public who may need to access the facility. Additionally, officials from DOD told us that in some circumstances their security requirements would make them ill-suited to share space with other agencies, such as when public access would be required. However, in instances where mission needs were similar, potential tenants might see enhanced security as desirable. In other cases, an agency’s mission may dictate the need for a specialized facility that could make colocation inappropriate. For example, USDA officials in a few regions told us that farmers often drove farm vehicles, including tractors, to Service Center locations and that in these cases, underground parking in a federal building would be problematic. In addition, we visited a leased Interior site that required a blacksmith and carpentry shop, cold storage for artifacts, and parking for large maintenance vehicles, such as wood-chippers and industrial mowers (see photos in fig. 8 below). An Interior official said that these needs would have to be taken into account to share space. None of these details are included in the FRPP, but local and regional officials from several agencies noted that they can speak readily on how mission needs and facility details may impact colocation. Officials from several agencies acknowledged that property knowledge is sometimes communicated informally. However, various officials noted that the lack of a systematic mechanism to share information hinders any efforts to colocate. Officials from a few agencies suggested that structured local or regional coordination could best identify opportunities where the missions of various agencies could be “matched” to appropriate space. Several local officials who showed us vacant federal spaces said there is currently no online or formal mechanism they can use to share vacancy details with officials from other agencies who might need space. A previous effort at local coordination—the Governmentwide Real Property Information Sharing program (GRPIS)—experienced some success, according to GSA officials, which they attributed to connections made at the local level. The program was tasked with encouraging and facilitating the sharing of real property information among federal agencies, and it revolved around the formation of real property councils within major federal communities nationwide. GSA officials said that local councils were an effective method for sharing information. However, officials said the program became essentially inactive after responsibility for the program was transferred within GSA and local connections were lost. Colocating federal agencies into government-owned space represents an opportunity to improve government operations while simultaneously addressing two of the federal government’s long-standing real-property management challenges: reducing over-reliance on costly leasing and the presence of underutilized owned property. Our analysis of eight markets shows that there are underutilized owned properties near areas where the government also leases space for other purposes. However, colocations are far more complicated than just matching the square feet needed with the square feet available. Agencies’ mission needs and building-specific issues that include security, condition, configuration, and use must align for the colocation to fully succeed. 
FRPC has coordinated federal real property actions for almost a decade at the national level, but detailed local knowledge of agency missions and facility needs, combined with systematic communication channels, is needed to match owners with compatible tenants. Once matched, numerous capacity and administrative hurdles remain as challenges to successful colocation. GSA is the only agency that has a core mission of managing real property. Several landholding agencies lack the experience and administrative tools necessary to effectively market and manage their property as a landlord. Creating cross-agency relationships with GSA to assist in tasks such as setting rental rates, crafting lease documents, renovating space, and otherwise managing the property would improve consistency of approach and allow each agency to remain focused on its core mission. Colocation is not always the right answer. We found that agencies can force relocations into ill-suited locations, pushing the financial breakeven point out decades into the future. Without the tools to measure the benefits and costs of colocation efforts or proposals, policy makers are unable to effectively weigh colocation as an option. Understanding the financial costs and savings associated with colocation efforts, as well as the nature and extent of synergies and improved services, will allow agencies to better demonstrate that the benefits can be worth the costs of renovating and moving an agency out of privately leased space. To promote colocation across agencies, the Director of the Office of Management and Budget (OMB) should work with the Federal Real Property Council (FRPC) and the U.S. Postal Service (USPS) to implement the following three recommendations:
• Establish a mechanism, which includes USPS, for local coordination in markets with large concentrations of federal agencies to identify, on a case-by-case basis, specific opportunities to share space and improve coordination of real property use across agencies.
• Develop strategic partnerships and a coordinated strategy with assigned roles and tasks between the General Services Administration (GSA) and other federal landholding agencies (USPS specifically) with less experience sharing real property.
• Develop and implement tools, along with supporting guidance, to measure, evaluate, and disseminate information on financial and nonfinancial benefits, such as service delivery improvements, from colocating federal agencies.
We provided a draft of this report to OMB, GSA, USPS, VA, USDA, DOE, Interior, DHS, DOD, and IRS for review and comment. In commenting on a draft of this report, officials from OMB said that they agreed with the report's findings, conclusions, and recommendations and offered technical comments that we incorporated as appropriate. They said that OMB has little power over how USPS manages its real property assets. The officials also said that GSA has already started looking at consolidating tenant field operations within its portfolio, and suggested that the report clarify the role that we recommend GSA take in facilitating consolidations. USPS agreed with the facts and findings in the report and provided comments regarding our recommendations. USPS's comments are contained in appendix III. GSA agreed with our recommendations and provided technical comments that we incorporated as appropriate. DHS and VA provided clarifying technical comments, which we incorporated where appropriate. VA's comments are contained in appendix IV. 
USDA, DOE, Interior, DHS, DOD, and IRS did not provide comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Agriculture, the Secretary of Defense, the Secretary of Energy, the Administrator of General Services, the Secretary of Homeland Security, the Secretary of the Interior, the Commissioner of Internal Revenue, the Director of the Office of Management and Budget, the Postmaster General, and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Our objective was to review the issues surrounding colocation—that is, moving federal operations from one stand-alone location to a federal location occupied by another entity. To accomplish this, we addressed (1) if the potential for cross-agency colocation exists, what factors can affect that potential; (2) the potential benefits of colocation; and (3) the challenges associated with colocation, and what solutions, if any, can mitigate these challenges. During the course of our work, we used the Federal Real Property Profile (FRPP), a government-wide database of owned and leased space, maintained by GSA on behalf of the Federal Real Property Council (FRPC). We recently reported that, when collecting FRPP data, the FRPC has not followed sound data collection practices—related to data consistency, performance measures, collaboration, and data reporting—that would help it collect data that are sufficiently consistent and accurate to be useful for making property management decisions. We recommended that GSA develop a plan to improve the FRPP consistent with sound data collection practices. Nonetheless, we also reported that the FRPP can be used in a general sense to track assets. As such, for this report, we used FRPP data for the limited purposes of identifying agencies within our scope, selecting case study markets, and summarizing agency-level statistics on owned and leased property. We used the 2010 FRPP summary report and U.S. Postal Service property data to identify the agencies that hold the largest amounts of property. We then limited our scope to 8 of the top 10 agencies, which include the Departments of Agriculture (USDA), Defense (DOD), Energy (DOE), Homeland Security (DHS), the Interior (Interior), Veterans Affairs (VA), the General Services Administration (GSA), and the U.S. Postal Service (USPS). To determine the factors that can affect cross-agency consolidation, we analyzed detailed data and interviewed agency officials about the property holdings in 8 specific U.S. markets: Allentown, PA; Cleveland, OH; Dallas, TX; Kansas City, KS; Kerrville, TX; Philadelphia, PA; San Antonio, TX; and Waco, TX. To select these areas and provide nationwide statistics on owned and leased facilities, we analyzed basic inventory data, including location, occupant, size, and owned/leased status, from the FRPP for the 7 agencies in our scope that are represented in the FRPP. USPS, which is not represented in the FRPP, provided data from its internal systems. 
While case studies are not generalizable, we selected diverse markets in terms of market size, geographic region, owned and leased federal properties, and agencies present. Although we used GSA-defined markets as a guideline, to better reflect the interests of this review we delineated markets by using an estimated 60-minute commute radius, and we selected the borders based on professional judgment (for example, in more rural areas, following the direction of development). We identified the primary cities of large and medium markets using GSA data, and then selected small markets within driving distance of a large or medium-sized market in order to facilitate travel. Because there are no reliable real property cost and benefit data, we primarily relied on interviews with federal agency officials at the national, regional, and local levels to determine the potential benefits of colocation. We focused on benefits that were mentioned by officials from more than one agency and more than one market. We also reviewed relevant GAO and other reports and documents, including USPS Office of Inspector General reports, and laws, regulations, and guidance. To determine the challenges associated with colocation and what solutions, if any, could mitigate these challenges, we visited both owned and leased facilities, with a particular emphasis on owned offices and warehouses that were categorized as underutilized. We did not include properties categorized as inactive, excess, or disposed in our scope, and we did not include land. Using this information, we conducted an analysis to identify key challenges that agencies face when making property decisions and the options, if any, for mitigating those challenges. We also interviewed agency officials at the national, regional, and local levels, and we reviewed documentation provided to us regarding specific properties. We did not examine any screenings for other potential uses of real property, such as use for the homeless or public benefit. To determine which challenges were the most pressing, we included only challenges that were raised in more than one market and by more than one agency. We conducted this performance audit from July 2011 through July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This list is not intended to be inclusive of all of an agency’s real property authorities; there may be other authorities not included below that may authorize colocation. Relevant statute and description of authority Enhanced Use Lease Authority Pilot Program 7 U.S.C. 
§ 3125a note The Secretary of Agriculture is authorized to establish a pilot program and lease nonexcess real property at the Beltsville Agricultural Research Center and the National Agricultural Library to any individual or entity, including agencies or instrumentalities of State or local governments, if the Secretary determines that the lease is consistent with, and will not adversely affect, the mission of the agency administering the property; will enhance the use of the property; will not permit any portion of the property or facility to be used for the public retail or wholesale sale of merchandise or residential development; will not permit the construction or modification of facilities financed by nonfederal sources to be used by an agency, except for incidental use; and will not include any property or facility required for any agency purpose without prior consideration of the needs of the agency. Consideration for any lease shall be for fair market value and for cash. The Secretary is authorized to enter into leases until June 18, 2013, and the term of the lease shall not exceed 30 years. Retention of Proceeds/Enhanced Use Lease Authority Pilot Program 7 U.S.C. § 3125a note Consideration for leases shall be deposited in a capital asset account, which is available until expended, without further appropriation, for maintenance, capital revitalization, and improvements to the department’s properties and facilities at the Beltsville Agricultural Research Center and the National Agricultural Library. Leases of Non-Excess Property of Military Departments 10 U.S.C. § 2667 The Secretary of a military department is authorized to lease nonexcess real property under the control of the department that is not needed for public use if the Secretary considers the lease to be advantageous to the United States and upon such terms that will promote the national defense or be in the public interest. The term of the lease may not be more than 5 years, unless the Secretary determines the term should be longer to promote the national defense or to be in the public interest. Lease payments shall be in cash or in-kind consideration for an amount not less than fair market value. In-kind consideration includes maintenance, protection, alteration, repair, or environmental restoration of property or facilities; construction of new facilities; providing facilities; or providing or paying for utility services. Beginning in fiscal year 2005, any amounts deposited into a special account from the disposition of property are appropriated and available for obligation or available to the Secretary without additional congressional action. Conveyance or Lease of Existing Property and Facilities 10 U.S.C. § 2878 The Secretary concerned is authorized to convey or lease property or facilities, including ancillary supporting facilities to eligible entities at such consideration the Secretary concerned considers appropriate for the purposes of the alternative authority for acquisition and improvement of military housing and to protect the interests of the United States. Retention of Proceeds/Conveyance or Lease of Existing Property and Facilities 10 U.S.C. § 2883 Proceeds from the conveyance or lease of property or facilities under 10 U.S.C. § 2878 shall be credited to the Department of Defense Housing Improvement Funds. 
Proceeds may be used to carry out activities with respect to the alternative authority for the acquisition and improvement of military housing, including activities required in connection with the planning, execution, and administration of contracts, subject to such amounts as provided in appropriation acts. Leasing of Property 42 U.S.C. § 7256 The Secretary of Energy is authorized to lease acquired real property located at a DOE facility that is to be closed or reconfigured and is not needed by DOE at the time the lease is entered into if the Secretary considers the lease to be appropriate to promote national security or to be in the public interest. The term of the lease may be up to 10 years, with an option to renew the lease for another 10 years, if the Secretary determines that a renewal of the lease will promote national security or be in the public interest. Lease payments may be in cash or in-kind consideration and may be for an amount less than fair market value. In-kind consideration may include services relating to the protection and maintenance of the leased property. Retention of Proceeds/Leasing of Property 42 U.S.C. § 7256 To the extent provided in advance in appropriations acts, the Secretary is authorized to use the funds received as rents to cover administrative expenses of the lease, maintenance and repair of the leased property, or environmental restoration activities at the facility where the leased property is located. General Services Administration (GSA) Disposition of Real Property 40 U.S.C. § 543 The Administrator of GSA is authorized to dispose of surplus real property by sale, exchange, lease, permit, or transfer for cash, credit, or other property. Conveyance of Property Consolidated Appropriations Act of 2005, Pub. L. No. 108-447, § 412, 118 Stat. 2809, 3259 (2004) The Administrator of GSA, notwithstanding any other provision of law, is authorized to convey by sale, lease, exchange, or otherwise, including through leaseback arrangements, real and related personal property, or interests therein. Retention of Proceeds/Conveyance of Property Consolidated Appropriations Act of 2005, Pub. L. No. 108-447, § 412, 118 Stat. 2809, 3259 (2004) Net proceeds from the disposition of real property are deposited in GSA’s Federal Buildings Fund (FBF) and are used for GSA real property capital needs to the extent provided in appropriations acts. General Powers of the Commandant, U.S. Coast Guard 14 U.S.C. § 93(a)(13) The U.S. Coast Guard may rent or lease real property, not required for immediate use, for a period not exceeding 5 years. Payments received from the rental or lease, less the amount of expenses incurred (exclusive of governmental personal services), are to be deposited in the Treasury. Leases for National Park System (NPS) 16 U.S.C. § 1a-2(k)(1)-(4) Interior is authorized to enter into a lease with any person or governmental entity for the use of buildings and associated property administered by the Secretary as part of the National Park System. Leases shall be for fair market value rental. Buildings and associated property leased shall be used for an activity that is consistent with the purposes established by law for the unit in which the building is located; shall not result in degradation of the purposes and values of the unit; and shall be compatible with National Park Service programs. Retention of Proceeds/Leases for NPS 16 U.S.C. 
§ 1a-2(k)(5) Rental payments must be deposited into a special Treasury account where the availability of funds is not subject to an appropriation act. Funds are available for infrastructure needs such as facility refurbishment, repair and replacement, infrastructure projects associated with park resource protection, and direct maintenance of the leased buildings and associated properties. Leases for Housing NPS employees 16 U.S.C. § 17o Interior is authorized where necessary and justified to make available employee housing, on or off the lands under the administrative jurisdiction of the National Park Service, and to rent or lease such housing to field employees of the National Park Service at rates based on the reasonable value of the housing. Housing for NPS employees 16 U.S.C. § 17o Subject to the appropriation of necessary funds in advance, Interior is authorized to lease federal lands and interests in land to qualified persons for up to 50 years for the construction of field employee quarters. The agency is authorized to enter into leases with the Presidio Trust which are necessary and appropriate. USPS Real Property Authorities 39 U.S.C. § 401(5) The Postal Service is authorized to acquire, in any legal manner, real property or any interest therein, as it deems necessary or convenient in the transaction of its business and to hold, maintain, sell, lease, or otherwise dispose of such property or any interest therein. USPS Real Property Authorities 39 U.S.C. § 401(6) The Postal Service is authorized to construct, operate, lease, and maintain buildings, facilities, or equipment, and to make other improvements on any property owned or controlled by it. USPS Retention of Proceeds/Real Property Authorities 39 U.S.C. §§ 2003, 2401 Proceeds are deposited into the Postal Service Fund and remain available to the Postal Service without fiscal year limitation to carry out the purposes, functions, and powers of the Postal Service. All revenues received by the Postal Service are appropriated to the Postal Service and are available without additional congressional action. VA Transfer Authority – Capital Asset Fund 38 U.S.C. § 8118 The Secretary of VA is authorized to transfer real property under VA’s control or custody to another department or agency of the United States, to a state or political subdivision of a state, or to any public or private entity, including an Indian tribe, until December 31, 2018. The property must be transferred for fair market value, unless it is transferred to a homeless provider. Property under this authority cannot be disposed of until the Secretary determines that the property is no longer needed by the department in carrying out its functions and is not suitable for use for the provision of services to homeless veterans by the department under the McKinney-Vento Act. Authority to Outlease 38 U.S.C. § 8122 The Secretary may lease for a term not exceeding 3 years lands or buildings, or parts or parcels thereof, belonging to the United States and under the Secretary’s control. A lease made to any public or nonprofit organization may provide for the maintenance, protection, or restoration, by the lessee, of the property leased, as a part or all of the consideration for the lease. Prior to the execution of any such lease, the Secretary shall give appropriate public notice of the Secretary’s intention to do so in the newspaper of the community in which the lands or buildings to be leased are located. 
The proceeds from such leases (less expenses for maintenance, operation, and repair of buildings leased for living quarters) shall be turned over to the Treasury of the United States as miscellaneous receipts. Transfer costs such as demolition, environmental remediation, and maintenance and repair; costs associated with future transfers of property under this authority; costs associated with enhancing medical care services to veterans by improving, renovating, replacing, updating, or establishing patient care facilities through minor construction projects; and costs associated with the transfer or adaptive use of property that is under the Secretary’s jurisdiction and listed on the National Register of Historic Places. This pilot program was enacted in the Food, Conservation, and Energy Act of 2008, Pub. L. No. 110-246, § 7409, 122 Stat. 1651, 2014-2016 (2008). Our review of DOD did not include real property at a military installation designated for closure or realignment under a base closure law. Therefore, for purposes of this appendix we have excluded DOD authorities relating to base closure or realignment. Additionally, while some authorities in this appendix, such as 10 U.S.C. § 2667, contain subsections relating to base closure and realignment, for purposes of this appendix we are referring to the other subsections of the statute. Department of Defense Appropriations Act for Fiscal Year 2005, Pub. L. No. 108-287, § 8034, 118 Stat. 951, 978 (2004). This authority does not apply to property or facilities located on or near a military installation approved for closure under a base closure law. See 10 U.S.C. § 2878(b). David J. Wise, (202) 512-2834 or wised@gao.gov. In addition to the contact named above, Keith Cunningham (Assistant Director); Jessica A. Evans; Colin Fallon; Gary Guggolz; Alison Hoenk; Hannah Laufe; SaraAnn Moessbauer; Joshua Ormond; Susan Sachs; and Crystal Wesco made key contributions to this report. Federal Real Property: National Strategy and Better Data Needed to Improve Management of Excess and Underutilized Property. GAO-12-645. Washington, D.C.: June 20, 2012. Federal Buildings Fund: Improved Transparency and Long-term Plan Needed to Clarify Capital Funding Priorities. GAO-12-646. Washington, D.C.: July 12, 2012. U.S. Postal Service: Challenges Related to Restructuring the Postal Service’s Retail Network. GAO-12-433. Washington, D.C.: April 17, 2012. Decennial Census: Census Bureau and Postal Service Should Pursue Opportunities to Further Enhance Collaboration. GAO-11-874. Washington, D.C.: September 30, 2011. Federal Real Property: Overreliance on Leasing Contributed to High-Risk Designation. GAO-11-879T. Washington, D.C.: August 4, 2011. Federal Real Property: Proposed Civilian Board Could Address Disposal of Unneeded Facilities. GAO-11-704T. Washington, D.C.: June 9, 2011. Federal Real Property: Progress Made on Planning and Data, but Unneeded Owned and Leased Facilities Remain. GAO-11-520T. Washington, D.C.: April 6, 2011. High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011. VA Real Property: VA Emphasizes Enhanced-Use Leases to Manage Its Real Property Portfolio. GAO-09-776T. Washington, D.C.: June 10, 2009. Federal Real Property: Authorities and Actions Regarding Enhanced Use Leases and Sale of Unneeded Real Property. GAO-09-283R. Washington, D.C.: February 17, 2009. Federal Real Property: Strategy Needed to Address Agencies’ Long-standing Reliance on Costly Leasing. 
GAO-08-197. Washington, D.C.: January 24, 2008. Federal Real Property: Progress Made Toward Addressing Problems, but Underlying Obstacles Continue to Hamper Reform. GAO-07-349. Washington, D.C.: April 13, 2007.
GAO designated the federal government’s management of its nearly 400,000 real property assets as high-risk in part because of overreliance on leasing and the retention of excess facilities. Real property management is coordinated nationally by the Federal Real Property Council (FRPC)—an association of landholding agencies chaired by the Deputy Director for Management of the Office of Management and Budget (OMB). To explore the potential to reduce leasing by better utilizing owned properties, GAO was asked to examine (1) the potential for colocation and the factors that can affect that potential, (2) the possible benefits of colocation, and (3) the challenges associated with colocation, and what solutions, if any, can mitigate these challenges. GAO reviewed property data and documents from eight of the largest property-holding agencies; laws, regulations, and guidance; and prior GAO reports. GAO also analyzed eight case study markets of varying size and federal agency presence, and interviewed agency officials. The federal government owns facilities that are underutilized in locations where it also leases space. In some cases, space within these government-owned properties could be occupied by other government agencies. This is particularly true for the U.S. Postal Service (USPS), for which declining mail volume and operational changes have freed space in many facilities. However, this potential for colocation of federal agencies is affected by such factors as the size, location, and condition of the available space. Officials from various agencies said that, in some cases, colocation could result in more efficient service delivery and cost savings or avoidance. For example, underutilized USPS floor and retail window space could be used by other federal agencies, generating space-use efficiencies for USPS and expanding citizen access to government services. Colocation could also help achieve agency synergies, such as shared technology infrastructure. Agency officials said that strategic partnerships among federal agencies targeted to meet specific needs and a formal local coordination mechanism could mitigate certain challenges to colocation, including administrative and data challenges. Agencies have varying authorities to share available space in their properties and differing capabilities to handle the administrative tasks associated with sharing space. The General Services Administration (GSA), as the federal government’s property manager, possesses the capability and experience to market properties and manage leases on a large scale. Officials from other agencies suggested that partnerships with GSA or a private entity could address some administrative challenges and improve colocation efforts. However, the ability to identify colocation opportunities is hindered by the lack of a formal information-sharing mechanism. The FRPC is a national, policy-oriented body and, as such, does not manage the local-level negotiations that colocation would require. The FRPC established a database describing all executive branch properties, but it was not designed to identify and manage colocation opportunities, nor does it include USPS data. In contrast, local federal officials indicated that they possess detailed knowledge of specific properties owned by their respective agencies and, with more structured local coordination, could share that knowledge to support colocation efforts. 
GSA officials said that local councils were an effective method for sharing information. OMB should work with FRPC and USPS to, among other things, (1) lead the creation of strategic partnerships between GSA and other property-owning federal agencies with less experience sharing real property, and (2) establish a mechanism (including USPS) for local coordination to improve coordination and identify specific opportunities to share space. OMB, GSA, and USPS generally agreed with the recommendations. The details of agencies’ comments and GAO’s response are addressed more fully within the report.
DOD’s readiness assessment and reporting system was designed to assess and report on military readiness at three levels—(1) the unit level; (2) the joint force level; and (3) the aggregate, or strategic, level. Unit-level readiness is assessed with the Global Status of Resources and Training System (GSORTS), which is an automated system that assesses the extent to which military units possess the required resources and training to undertake their wartime missions. To address joint readiness, the Chairman of the Joint Chiefs of Staff established the Joint Monthly Readiness Review (now called the Joint Quarterly Readiness Review, or JQRR), which compiles readiness assessments from the combatant commands, the combat support agencies, and the military services. The Joint Staff and the services use these assessments to brief DOD’s leadership through the Senior Readiness Oversight Council—an executive-level forum for monitoring emerging readiness issues at the strategic level. The briefings to the council are intended to present a view of readiness at the aggregate force level. From these briefings to the council, DOD prepares a legislatively mandated quarterly readiness report to Congress. Figure 1 provides an overview of DOD’s readiness assessment process. We have issued several reports containing recommendations for improving readiness reporting. In 1994, we recommended that DOD develop a more comprehensive readiness system to include 26 specific readiness indicators. In 1998, we reported on shortcomings in DOD’s readiness assessment system. At that time, we stated that GSORTS’ limitations included lack of precision in measurements, late reporting, subjective input, and lack of standardization. We also reported that while the Quarterly Readiness Reports to the Congress accurately reflected briefs to the Senior Readiness Oversight Council, they lacked specific details on deficiencies and remedial actions and thus did not meet the requirements of 10 U.S.C. 482(b). DOD concurred with our recommendation that the Secretary of Defense take steps to better fulfill the legislative reporting requirements under 10 U.S.C. 482 by providing (1) supporting data on key readiness deficiencies and (2) specific information on planned remedial actions. Finally, we reported that deficiencies identified as a result of the Joint Monthly Readiness Reviews remained open because the solutions require funding over the long term. In 2002, we issued a classified report on DOD’s process for tracking the status of deficiencies identified in the Joint Monthly Readiness Reviews. We made recommendations to improve DOD’s deficiency status reporting system and for DOD to develop funding estimates for correcting critical readiness deficiencies. In its comments, DOD generally agreed with the report’s findings and recommendations. Although DOD has made progress in resolving readiness reporting issues raised in our 1998 report, we found that some of the same issues still exist today. For example, DOD has added information to its Quarterly Readiness Reports to the Congress (hereafter referred to as the quarterly reports). However, we found that the reports still contain vague descriptions of readiness problems and remedial actions. Even though some report annexes contain detailed data, the data as presented are not “user friendly”—they are largely unevaluated, are not linked to readiness issues mentioned in the report, and are not accompanied by report text explaining how the data relate to units’ readiness. 
Thus, as we reported in 1998, these reports do not specifically describe readiness problems or remedial actions as required under 10 U.S.C. 482(b). We believe that this kind of information would help Congress understand the significance of the information in these reports and would support its oversight role. DOD has improved some aspects of its unit-level reporting system, the Global Status of Resources and Training System (GSORTS). For example, in 1998 GSORTS’ data were maintained in multiple databases and data were not synchronized. As of September 2002, the data are reported to a central site, and there is one database of record. Also in 1998, U.S. Army GSORTS review procedures delayed submission of Army data, and all the services’ data entry was manual. As of September 2002, Army reporting procedures require reporting consistent with GSORTS’ requirements, and all the services have automated data entry, which reduces errors. In 1998, combat units reported only on readiness for wartime missions. As of September 2002, combat units report on assigned mission readiness in addition to wartime mission readiness. However, DOD has not resolved some issues we raised in 1998. For example, readiness ratings are still reported in broad bands, and actual percentages of required resources are not externally reported. These issues remain because the manual specifying readiness reporting rules has not changed in these areas. The manual’s definition of readiness levels for personnel has not changed since our 1998 report—it still defines readiness levels in bands of 10 percentage points or more and does not require external reporting of actual percentages. For example, the highest personnel rating can range from 90 percent to 100 percent, and there is no requirement to report the actual percentage outside of DOD. We have also reported that GSORTS does not always reflect training and equipment deficiencies. For example, we reported in April and June 2002 that readiness data do not reflect the effect of training range restrictions on unit readiness. We have also reported that GSORTS does not include whether a unit’s chemical/biological equipment is usable. In commenting on our analysis, the OUSD P&R office responsible for readiness reporting stated that it recognized the imprecision of the current measurements. According to that office, an effort to develop the planned new readiness reporting system, which is discussed later in this report, includes working with the DOD components to enhance and expand readiness reporting. 
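The effect of reporting in broad bands can be illustrated with a short sketch. The example below maps an exact personnel fill rate to a reporting band; only the 90 to 100 percent top band reflects the manual's definition described above, and the lower thresholds and band labels are assumptions used solely for illustration.

# Illustrative sketch of how a precise personnel fill rate collapses into a broad
# reporting band. Only the 90-100 percent top band is taken from the report text;
# the lower thresholds and labels are assumed for illustration.
def personnel_band(percent_filled: float) -> str:
    """Map an exact personnel fill percentage to a broad rating band."""
    if percent_filled >= 90:
        return "highest band (90-100%)"   # band width described in the report
    if percent_filled >= 80:              # assumed threshold
        return "second band (80-89%)"
    if percent_filled >= 70:              # assumed threshold
        return "third band (70-79%)"
    return "lowest band (below 70%)"      # assumed threshold

# Units at 91 percent and at 100 percent report the same band, and the exact
# percentages are not reported outside of DOD.
for fill in (91.0, 100.0):
    print(fill, "->", personnel_band(fill))

Because only the band is reported externally, readers outside DOD cannot distinguish units whose actual fill rates differ by as much as 10 percentage points, which is the imprecision that OUSD P&R acknowledged.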
Defense officials responsible for readiness reporting said that the joint readiness reviews were not included because the scenarios were based on the former national security strategy of two major wars. The officials stated that they plan to include results from the joint readiness reviews in future reports. In commenting on our analysis, the OUSD P&R office responsible for readiness reporting stated that it continues to seek better ways to provide concise, quality information. As in 1998, we found that the quarterly reports still contain broad statements of readiness issues and remedial actions, which are not supported by detailed examples and are not related to data in the reports’ annexes. Among other things, the law requires the quarterly reports to specifically describe each readiness problem and deficiency as well as planned remedial actions. The reports did not specifically describe the nature of each readiness problem or discuss the effects of each on unit readiness. Also, the reports included only broad statements of remedial actions that lacked details on timelines, objectives, or funding requirements. For example, one report said that the Air Force continued to experience shortages in critical job skills that affected the service’s ability to train. The report did not refer the reader to data in annexes showing training readiness ratings; it did not state which skills were short, which training was not accomplished, or whether this shortage had affected or was expected to affect units’ readiness ratings. Further, the report did not explain the remedial actions taken or planned to reverse the skill shortage, how long it would take to solve this problem, or what funding was programmed to implement remedial actions. Defense readiness officials agreed, stating that information in the quarterly reports is summarized to the point that there are no details on readiness deficiencies, remedial actions, or funding programmed to implement remedial actions. We believe the Congress needs this type of information to understand the significance of the information reported. Although some of the quarterly report annexes contain voluminous data, the data are not adequately explained or related to units’ readiness. The law does not mandate specific explanations of these “readiness indicators,” but we believe it is essential for Congress to understand the significance of the information in these reports for use in its oversight role. For example, DOD is required to report on the maintenance backlog. Although the report provides the quantity of the backlog, it does not explain the effect the backlog had on readiness. Specifically, the report did not explain whether units’ readiness was affected because maintenance was not accomplished when needed. In addition, DOD is required to report on training commitments and deployments. The Expanded Quarterly Readiness Report to Congress Implementation Plan dated February 1998 stated that “either an excessive or a reduced level of commitment could be an indicator of potential readiness problems.” However, OUSD P&R did not define what kinds of “readiness problems,” such as degraded equipment or training, these data may indicate would occur as a result of “excessive or reduced” levels of training and deployments. The data reported are the amount of training away from home station and the amount of deployments. However, these data are not explained or related to a unit’s equipment or training ratings. 
Further, criteria have not been established to distinguish between acceptable and unacceptable levels of the training and deployment data reported. As a result, the reader does not know whether the data reported indicate a problem or the extent of the problem. In commenting on our analyses, OUSD P&R acknowledged “the Department would be better served by providing more information as to how various data relates to readiness.” Generally, the quarterly reports also do not contain information on funding programmed to implement specific remedial actions. For example, one quarterly report included the statement that budgets were revised “to address readiness and capabilities issues,” but no examples were provided. Also, the report lacked statements explaining how this “budget revision” would improve readiness. Although not required by law, we believe it would prove useful for Congress to understand how DOD addresses specific readiness problems. In commenting on our analysis, OUSD P&R officials stated that they would continue to work with the services to provide more fidelity with the information presented in the quarterly report annexes. However, they also said that detailed examples require significant staff effort throughout DOD and that the added time for more detailed analysis could render the report a historical document. They further said that complete information would certainly be desired and agreed it is important for the Congress to understand the significance of the information in the quarterly reports for use in its oversight role. DOD has complied with most, but not all, of the readiness reporting requirements added by Congress in the National Defense Authorization Acts for Fiscal Years 1998 through 2002. Congress added readiness reporting requirements out of concern over contradictions between assessment of military unit readiness in official readiness reports and observations made by military personnel in the field. In a review of these acts, we identified both recurring readiness reporting requirements that were added to existing law and one-time reporting requirements related to military readiness. We compared current readiness reporting to the requirements in these acts to make an overall judgment on the extent of compliance. We did not develop a total count of the number of reporting requirements because the acts included a series of sections and subsections that could be totaled in various ways. Because DOD is not reporting on all the requirements added over the past several years, the Congress is not receiving all the information mandated by law. Our analysis showed that DOD has complied with most of the requirements added in the National Defense Authorization Acts for Fiscal Years 1998-2002. For example, DOD took the following actions in response to legislative requirements: DOD is now reporting on the readiness of prepositioned equipment and is listing individual units that have reported low readiness as required by the National Defense Authorization Act for Fiscal Year 1998. DOD is reporting on infrastructure and institutional training readiness as required by the National Defense Authorization Act for Fiscal Year 1999. DOD contracted for an independent study of requirements for a comprehensive readiness reporting system and submitted the study report to the Congress as required by the National Defense Authorization Act for Fiscal Year 2000. 
DOD has added quarterly information on the military services’ cannibalization rates as required by the National Defense Authorization Act for Fiscal Year 2001. DOD is reporting on some, though not all, of the items Congress required be added to the quarterly readiness reports. For example, the National Defense Authorization Act for Fiscal Year 1998 required 19 specific items be reported that are consistent with our previously cited 1994 report on readiness reporting. The 1994 report included a list of 26 readiness indicators that DOD commanders said were important for a more comprehensive assessment of readiness. A 1994 DOD-funded study by the Logistics Management Institute found that 19 of the 26 indicators could help DOD monitor critical aspects of readiness. The 19 items listed in the National Defense Authorization Act for Fiscal Year 1998 are very similar to those identified in the 1994 Logistics Management Institute study. DOD is reporting on 11 of the 19 items and is not reporting on the other 8. The eight items are (1) historical personnel strength data and trends, (2) personnel status, (3) borrowed manpower, (4) personnel morale, (5) operations tempo, (6) training funding, (7) deployed equipment, and (8) condition of nonpacing equipment as required in the Act. In an implementation plan setting forth how it planned to comply with reporting on the 19 items, which was also required by the National Defense Authorization Act for Fiscal Year 1998, DOD stated that it would not report on these eight indicators for the following reasons: Deployed equipment was considered part of the status of prepositioned equipment indicator. Historical personnel strength data and trends were available from the annual Defense Manpower Requirements Report. Training funding and operations tempo were believed to be represented adequately in the budget requests as flying hours, steaming days, or vehicle miles and were not considered good measures of readiness output. Personnel strength status was considered to be part of the personnel rating, but DOD agreed to investigate other ways to evaluate the effect of service personnel working outside the specialty and grade for which they were qualified. Borrowed manpower data was only captured in a limited sector of the deployable force and may not be meaningful until a better method is developed to capture the data. Personnel morale had no existing data sources. The condition of nonpacing equipment had no reasonable measurement to use as an indicator. Notwithstanding the reasoning that DOD stated, these eight indicators continue to be required by law, and we saw no indication in our work that DOD is working to develop data for them. Also, DOD is not complying with some of the requirements in the National Defense Authorization Act for Fiscal Year 1999. Examples are as follows: The act required DOD to establish and implement a comprehensive readiness reporting system by April 2000. As of January 2003, DOD had not implemented a new system, and officials said it is not expected to be fully capable until 2007 or 7 years later than required. The act also required DOD to develop implementing regulations for the new readiness reporting system. DOD had not developed implementing regulations as of January 2003. The act required DOD to issue regulations for reporting changes in the readiness of training or defense infrastructure establishments within 72 hours. 
Although DOD has provided some guidance, officials stated they have not issued regulations because no mechanism exists for institutional training or defense infrastructure establishments to report changes and because these entities are not part of an established readiness reporting system. In commenting on our analyses, DOD officials acknowledged “the Department is not in full compliance” and stated that they plan to achieve compliance with the new readiness reporting system under development. OUSD P&R officials said that the shortfalls in reporting are unwieldy under the current system; OUSD P&R intends to correct these shortfalls when the new system is functional. However, as noted above, DOD does not plan to implement its new system until 2007. As of January 2003, DOD also had not targeted incremental improvements in readiness reporting during the period in which the new system is being developed. Until then, Congress will receive less readiness information than it mandated by law. DOD issued a directive in June 2002 to establish a new readiness reporting system. The Undersecretary of Defense for Personnel and Readiness is to oversee the system to ensure the accuracy, completeness, and timeliness of its information and data, its responsiveness, and its effective and efficient use of modern practices and technologies. Officials in the OUSD P&R readiness office responsible for developing the new system said that they plan to use the new system to comply with the requirements in the National Defense Authorization Acts and to address many of the recommendations contained in a congressionally directed independent study. However, as of January 2003, there are few details of what the new system would include. Although the new system may have the potential to improve readiness reporting, as of January 2003, it is only a concept without detailed plans to guide development and monitor implementation. As a result, the extent to which the new system will address existing shortcomings is unknown. The National Defense Authorization Act for Fiscal Year 1999 required DOD to establish a comprehensive readiness reporting system. In doing so, the Congress expressed concern about DOD’s lack of progress in developing a more comprehensive readiness measurement system reflective of operational realities. The Congress also noted that past assessments have suffered from DOD’s inability to create and implement objective and consistent readiness reporting criteria capable of providing a clear picture to senior officials and the Congress. Subsequently, the August 2001 Defense Planning Guidance for Fiscal Years 2003-2007 called for the development of a strategic plan for transforming DOD readiness reporting. In June 2002, DOD issued a directive establishing the Department of Defense Readiness Reporting System. The system will measure and report on the readiness of military forces and the supporting infrastructure to meet missions and goals assigned by the Secretary of Defense. All DOD components will align their readiness reporting processes in accordance with the directive. The directive assigns oversight and implementing responsibility to the Undersecretary of Defense for Personnel and Readiness. The Undersecretary is responsible for developing, fielding, maintaining, and funding the new system and scenario assessment tools. The Undersecretary—in collaboration with the Joint Chiefs of Staff, Services, Defense Agencies, and Combatant Commanders—is to issue implementing instructions. 
The Chairman of the Joint Chiefs of Staff, the Service Secretaries, the commanders of the combatant commands, and the heads of other DOD components are each assigned responsibilities related to readiness reporting. OUSD P&R established a timetable to implement the new readiness reporting system. OUSD P&R plans to achieve initial capability in 2004 and reach full capability in 2007. OUSD P&R officials involved in developing the system said that they have been briefing the concept for the new reporting system since October 2002. As of January 2003, these officials stated that they are continuing what they have termed the “concept demonstration” phase, which began in October 2002. This phase consists of briefing various offices within DOD, the Joint Staff, and the services to build consensus and refine the new system’s concept. These officials also said that the new system will incorporate many, but not all, of the recommendations contained in a legislatively mandated independent study of readiness reporting, which concluded that improvements were needed to meet legislatively mandated readiness reporting requirements and made numerous recommendations for what a new system should include. For example, the study recommended that (1) DOD report on all elements essential to readiness, such as depots, combat support agencies, and Defense agencies; (2) reporting be in terms of mission essential tasks; and (3) DOD measure the capability to carry out the full range of National Security Strategy requirements—not just a unit’s wartime mission. We believe that successfully developing and implementing a large-scale effort, such as DOD’s new readiness reporting system, requires an implementation plan that includes measurable performance goals, identification of resources, performance indicators, and an evaluation plan. As discussed earlier, full implementation of DOD’s new readiness reporting system is several years away, and much remains to be done. In January 2003, the OUSD P&R office responsible for developing the new system said that the new readiness reporting system is a large endeavor that requires buy-in from many users and that the development of the system will be challenging. This office also wrote that it had just been given approval to develop the new readiness reporting system, was targeting development of an implementing instruction in the March 2003 time frame, and had not developed an implementation plan to assess progress in developing and implementing the new reporting system. The directive establishing the new reporting system requires the Undersecretary of Defense for Personnel and Readiness, in collaboration with others, to issue implementing instructions for the new system. DOD has experienced delays in implementing readiness improvements smaller than those envisioned in the new readiness reporting system. One such effort involved development of an interface to query the existing readiness database (GSORTS). In a July 2002 report, the DOD Inspector General reported that the planned implementation of this interface slipped 44 months, or just over 3.5 years. Also, history has shown it takes DOD time to make changes in the readiness reporting system. As illustrated in figure 2, DOD began reporting on specific readiness indicators 4 years after it agreed with GAO recommendations to include them in readiness reporting. Other DOD development efforts recognize the need for effective planning to guide development. 
For example, DOD is working to transform military training as directed by the Defense Planning Guidance for Fiscal Years 2003-07. A March 2002 Strategic Plan for Transforming DOD Training developed by a different office within OUSD P&R discusses a training transformation road map with major tasks subdivided into near-, mid-, and long-term actions. The plan includes a list of near-term actions to be completed by October 2003 and definition of mid- and long-term actions in a comprehensive implementation plan that will identify specific tasks, responsibilities, timelines, resources, and methods to assess completion and measure success. The May 2002 Defense Planning Guidance update for fiscal years 2004-2009 directs OUSD P&R, working with other DOD components, to develop a comprehensive program to implement the strategic training transformation plan and provide it to the Deputy Secretary of Defense by April 1, 2003. Since the directive for creating a new readiness reporting system established broad policy with no specifics and since DOD has not developed an implementation plan, the extent to which the new system will address the current system’s shortcomings will remain unknown until the new system is fully capable in 2007. Until then, readiness reporting will continue to be based on the existing system. Commenting on its plans for the new system, OUSD P&R said that it is in the process of creating an Advanced Concept Technology Demonstration (ACTD) structure for the new system and will produce all necessary planning documents required within the established ACTD process. However, this process is intended to provide decision makers an opportunity to understand the potential of a new concept before an acquisition decision. We do not believe the ACTD process will necessarily result in an implementation plan to effectively monitor development and assess whether the new system is being implemented on schedule and achieving desired results. DOD’s ACTD guidelines state the principal management tool for ACTDs is a management plan, which provides a top-level description of the objectives, critical events, schedule, funding, and measures of evaluation for the project. We reported in December 2002 that these guidelines contain advice and suggestions as opposed to formal directives and regulations. DOD’s guidelines state that the ACTD should plan exercises or demonstrations to provide an adequate basis for utility assessment. We also reported in December 2002 that DOD lacks specific criteria to evaluate demonstration results, which may cause acquisition decisions to be based on too little knowledge. Therefore, we still believe an implementation plan is necessary since the ACTD process does not require a detailed implementation plan and does not always include specific criteria to evaluate effectiveness. While DOD has made some improvements in readiness reporting since 1998, some of the same issues remain unresolved today. Although DOD is providing Congress more data than in 1998, the voluminous data are neither evaluated nor explained. The quarterly reports do not link the effects of “readiness issues” or deficiencies to changes in readiness at the unit level. Also, as in 1998, the reports contain vague descriptions of remedial actions not linked to specific deficiencies. Finally, the quarterly reports do not discuss funding that is programmed to implement specific remedial actions. As a result, the information available to Congress is not as effective as it could be as an oversight tool. 
Even though DOD directed development of a new readiness reporting system, it has not yet developed an implementation plan identifying objective and measurable performance goals, the resources and personnel needed to achieve the goals, performance indicators, and an evaluation plan to compare program results with goals, and milestones to guide overall development of the new readiness system. Even though the new system may have the potential to improve readiness reporting, without an implementation plan little assurance exists that the new system will actually improve readiness assessments by the time full capability is planned in 2007. Without such a plan, it will also remain difficult to gauge progress toward meeting the 2007 target date. This concern is reinforced in light of the (1) years-long delays in implementing other readiness reporting improvements and (2) the deficiencies in existing reporting that OUSD P&R plans to rectify with the new system. Furthermore, without an implementation plan neither senior DOD leadership nor the Congress will be able to determine if the resources spent on this system are achieving their desired results. To improve the information available to Congress for its use in its oversight role, we recommend that the Secretary of Defense direct the OUSD P&R to improve the quality of information contained in the quarterly reports. Specifically, we recommend that DOD’s reports explain (in the unclassified section) the most critical readiness issues that are of greatest concern to the department and the services. For each issue, we recommend that DOD’s reports include an analysis of the readiness deficiencies, including a clear explanation of how the issue affects units’ readiness; a statement of the specific remedial actions planned or implemented; and clear statements of the funding programmed to implement each remedial action. To be able to assess progress in developing the new readiness system, we recommend that the Secretary of Defense direct the OUSD P&R to develop an implementation plan that identifies performance goals that are objective, quantifiable, and measurable; the cost and personnel resources needed to achieve the goals, including an identification of the new system’s development and implementation costs in the President’s Budget beginning in fiscal year 2005 and Future Years Defense Plan; performance indicators to measure outcomes; an evaluation plan to compare program results with established goals; and milestones to guide development to the planned 2007 full capability date. To assist Congress in its oversight role, we recommend that the Secretary of Defense give annual updates to the Congress on the new readiness reporting system’s development to include performance measures, progress toward milestones, comparison of progress with established goals, and remedial actions, if needed, to maintain the implementation schedule. In written comments on a draft of this report, which are reprinted in appendix II, the Department of Defense did not agree with our recommendations. In response to our recommendation that DOD improve the quality of information contained in its quarterly readiness reports, DOD said that the Quarterly Readiness Report to the Congress is one of the most comprehensive and detailed reports submitted to the Congress that discusses serious readiness issues and ways in which these issues are being addressed. 
DOD further stated that the department presents briefings on specific readiness issues to the Congress and that spending more time and resources expanding the existing written report would be counterproductive. We recognize that the Quarterly Readiness Reports to the Congress contain voluminous data. However, as discussed in this report, we found that the quarterly reports’ annexes are large and mostly consist of charts or other data that are not adequately explained and are not related to units’ readiness. In some cases, criteria have not been established to enable the reader to distinguish between acceptable and unacceptable levels of the data reported. As a result, the reader cannot assess the significance of the data because it is not at all clear whether the data reported indicate a problem or the extent of the problem. Considering that the quarterly reports contain inadequately explained data and that much of the information is not “user friendly,” we continue to believe the quality of information in the quarterly reports can be improved. In fact, we reviewed all the quarterly reports provided to Congress since 1998 and found that, through the January-June 2001 report, the reports did include an unclassified summary of readiness issues for each service addressing four topics—personnel, equipment, training, and enablers (critical units or capabilities, such as specialized aircraft, essential to support operations). However, the reports did not include supporting data or a discussion of remedial actions. Since that time, these summaries have been eliminated from the quarterly reports. For example, the unclassified narratives of the last two reports available at the time we performed our work—January-March 2002 and April-June 2002—were less than two pages long, and neither discussed readiness issues nor ways in which these issues are being addressed. One report discussed the new readiness reporting system, and the other discussed a review of seven environmental laws. Given that DOD has highlighted key issues in the past, we believe that improving the quarterly reports would be beneficial if DOD were to focus on the most critical readiness issues that are of greatest concern to the services and include supporting data and a discussion of remedial actions. Therefore, we have modified our recommendation that DOD improve the quality of readiness reporting to focus on readiness issues deemed to be critical by the Secretary and the military services and to provide more detailed data and analyses of those issues and the remedial actions planned for each one. DOD did not agree with our recommendations that it (1) develop an implementation plan with, among other things, performance goals that are objective, quantifiable, and measurable and (2) provide annual updates to the Congress on the new readiness reporting system’s development. DOD said that it had undertaken an initiative to develop better tools for assessing readiness and that it intended to apprise Congress of its efforts to develop tools for readiness assessment. DOD further stated that the effort to improve readiness reporting is in its infancy, but that it has established milestones, cost estimates, functional responsibilities, and expected outcomes. DOD believes that further planning and a prescriptive annual update to the Congress are unnecessary. We agree that the new readiness reporting system may have the potential to improve readiness reporting. 
However, as discussed in this report, the directive establishing the new system contains very broad, high-level statements of overall functional responsibilities and outcomes, but no details on how these will be accomplished. Further, DOD has established two milestones—initial capability in 2004 and full capability in 2007. DOD does not have a road map explaining the steps needed to achieve full capability by 2007, which is seven years after Congress mandated a new system be in place. In addition, as discussed earlier in this report, DOD has experienced delays in implementing much smaller readiness improvements. While DOD has undertaken an initiative to develop better tools for assessing readiness and intends to routinely and fully apprise the Congress on its development efforts, tools are the mechanics for evaluating readiness data. As such, tools are not the same thing as the comprehensive readiness reporting system mandated by Congress that DOD has said will include new metrics and will evaluate entities within DOD that currently do not report readiness. Considering that Congress expressed concern about DOD’s lack of progress in developing a comprehensive system, that developing and implementing DOD’s planned new system is scheduled to take 4 more years, and that delays have been experienced in earlier efforts to make small improvements in readiness reporting, we continue to believe that it is important for DOD to develop an implementation plan to gauge progress in developing and implementing the new readiness reporting system and to provide annual updates to the Congress. Such a plan would be consistent with DOD’s approach to other major initiatives such as transforming training. We have therefore retained these two recommendations. DOD also provided technical corrections and we have modified the report where appropriate. We are sending copies of this report to the Ranking Minority Member, Subcommittee on Readiness, House Committee on Armed Services; the Chairman and Ranking Minority Member, Subcommittee on Readiness and Management Support, Senate Committee on Armed Services; other interested congressional committees; Secretary of Defense; and the Director, Office of Management and Budget. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me on (757) 552-8111 or by E-mail at curtinn@gao.gov. Major contributors to this report were Steven Sternlieb, Brenda Waterfield, James Lewis, Dawn Godfrey, and Herbert Dunn. To assess the progress the Department of Defense (DOD) has made in resolving issues raised in our prior report concerning both the unit level readiness reporting system and the lack of specificity in DOD’s Quarterly Readiness Reports to the Congress, we met with DOD officials and reviewed regulations and quarterly reports. Specifically, we met with officials of the Office of the Undersecretary of Defense for Personnel and Readiness (OUSD P&R) responsible for readiness reporting, the Joint Staff, and the military services to discuss their individual progress in each of these areas. 
To assess progress regarding unit level readiness reporting, we reviewed the Chairman of the Joint Chiefs of Staff manual governing this system and the related service implementing instructions to determine if these documents had changed since our 1998 report or if the manual and service instructions continued to allow reporting in the same manner as reflected in our earlier report. Through a comparison of the current and prior documents, discussions with pertinent officials, and our analysis, we determined whether the readiness reporting issues we raised in 1998 had been resolved. We also reviewed the content of quarterly reports to assess their quality and usefulness and to assess whether the problems we reported in 1998 had been rectified. We discussed our analysis with OUSD P&R officials and provided them with our analyses so that they could fully consider and comment on our methodology and conclusions. We did not assess the accuracy of reported readiness data. To determine the extent to which DOD has complied with legislative reporting requirements enacted since our prior report, we compared a complete listing of these requirements to DOD’s readiness reporting. First, we identified the legislatively mandated readiness reporting requirements enacted since our 1998 report. To accomplish this, we reviewed the National Defense Authorization Acts for Fiscal Years 1998-2002 to list the one-time and recurring reporting requirements related to military readiness. We also asked congressional staff and OUSD P&R to review the list, and officials from both offices agreed it was accurate. We did not develop a total count of the number of reporting requirements because the acts included a series of sections and subsections that could be totaled in various ways. Once we obtained concurrence that this listing was complete and accurate, we compared this list to current readiness reporting to make an overall judgment on the extent of compliance. To assess how DOD plans to improve readiness reporting, we reviewed the June 2002 DOD directive establishing a new readiness reporting system and a progress update briefing on the new system. We also obtained readiness briefings from each of the services, OUSD P&R, and Joint Staff officials. We performed several electronic searches of the Deputy Under Secretary of Defense (Readiness) Web site to determine the status of readiness reporting. To assess how smoothly other readiness improvements progressed, we reviewed DOD audit reports. We discussed our findings with OUSD P&R officials and worked proactively with them in conducting our analyses. Specifically, we provided them with drafts of our analyses for their comments and corrections. We conducted our review from June 2002 through January 2003 in accordance with generally accepted government auditing standards.
The Department of Defense's (DOD) readiness assessment system was designed to assess the ability of units and joint forces to fight and meet the demands of the national security strategy. In 1998, GAO concluded that the readiness reports provided to Congress were vague and ineffective as oversight tools. Since that time, Congress added reporting requirements to enhance its oversight of military readiness. Therefore, the Chairman asked GAO to examine (1) the progress DOD made in resolving issues raised in the 1998 GAO report on both the unit-level readiness reporting system and the lack of specificity in DOD's Quarterly Readiness Reports to the Congress, (2) the extent to which DOD has complied with legislative reporting requirements enacted since 1997, and (3) DOD's plans to improve readiness reporting. Since 1998, DOD has made some progress in improving readiness reporting--particularly at the unit level--but some issues remain. For example, DOD uses readiness measures that vary 10 percentage points or more to determine readiness ratings and often does not report the precise measurements outside DOD. DOD included more information in its Quarterly Readiness Reports to the Congress. But quality issues remain--in that the reports do not specifically describe readiness problems, their effects on readiness, or remedial actions to correct problems. Nor do the reports contain information about funding programmed to address specific remedial actions. Although current law does not specifically require this information, Congress could use it for its oversight role. DOD complied with most, though not all, of the legislative readiness reporting requirements enacted by Congress in the National Defense Authorization Acts for Fiscal Years 1998-2002. For example, DOD (1) is now listing the individual units that have reported low readiness and reporting on the readiness of prepositioned equipment, as required by the fiscal year 1998 Act; (2) is reporting on 11 of 19 readiness indicators that commanders identified as important and that Congress required to be added to the quarterly reports in the fiscal year 1998 Act, but is not reporting on the other 8 readiness indicators; and (3) has not yet implemented a new comprehensive readiness reporting system as required in the fiscal year 1999 Act. As a result, Congress is not receiving all the information mandated by law. DOD issued a directive in June 2002 to establish a new comprehensive readiness reporting system that DOD officials said they plan to use to comply with the reporting requirements specified by Congress. The new system is intended to implement many of the recommendations included in a congressionally directed independent study for establishing such a system. However, the extent to which the new system will actually address the current system's shortcomings is unknown, because the new system is currently only a concept, and full capability is not scheduled until 2007. As of January 2003, DOD had not developed an implementation plan containing measurable performance goals, identification of resources, performance indicators, and an evaluation plan to assess progress in developing the new reporting system. Without such a plan, neither DOD nor the Congress will be able to fully assess whether the new system's development is on schedule and achieving desired results.
The use of information technology has created many benefits for agencies such as IRS in achieving their missions and providing information and services to the public, but extensive reliance on computerized information also creates challenges in securing that information from various threats. Information security is especially important for government agencies, where maintaining the public’s trust is essential. Without proper safeguards, computer systems are vulnerable to individuals and groups with malicious intentions who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. The risks to these systems are well founded for a number of reasons, including the increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. The Federal Bureau of Investigation has identified multiple sources of threats, including foreign entities engaged in intelligence gathering and information warfare, domestic criminals, hackers, virus writers, and disgruntled employees or contractors working within an organization. In addition, the U.S. Secret Service and the CERT® Coordination Center studied insider threats in the government sector and stated in a January 2008 report that “government sector insiders have the potential to pose a substantial threat by virtue of their knowledge of, and access to, employer systems and/or databases.” Insider threats include errors or mistakes and fraudulent or malevolent acts by insiders. Our previous reports, and those by federal inspectors general, describe persistent information security weaknesses that place federal agencies, including IRS, at risk of disruption, fraud, or inappropriate disclosure of sensitive information. Accordingly, we have designated information security as a governmentwide high-risk area since 1997, most recently in 2011. Information security is essential to creating and maintaining effective internal controls. The Federal Managers’ Financial Integrity Act of 1982 requires us to issue standards for internal control in federal agencies. The standards provide the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. The term internal control is synonymous with the term management control, which covers all aspects of an agency’s operations (programmatic, financial, and compliance). The attitude and philosophy of management toward information systems can have a profound effect on internal control. Information system controls consist of those internal controls that are dependent on information systems processing and include general controls at the entitywide, system, and business process application levels (security management, access controls, configuration management, segregation of duties, and contingency planning); business process application controls (input, processing, output, master file, interface, and data management system controls); and user controls (controls performed by people interacting with information systems). Recognizing the importance of securing federal agencies’ information systems, Congress enacted the Federal Information Security Management Act (FISMA) in December 2002 to strengthen the security of information and systems within federal agencies. 
FISMA requires each agency to develop, document, and implement an agencywide information security program for the information and information systems that support the operations and assets of the agency, using a risk-based approach to information security management. Such a program includes assessing risk; developing and implementing cost-effective security plans, policies, and procedures; providing specialized training; testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; and ensuring continuity of operations. IRS has demanding responsibilities in collecting taxes, processing tax returns, and enforcing federal tax laws, and relies extensively on computerized systems to support its financial and mission-related operations. In fiscal year 2010, IRS processed hundreds of millions of tax returns, collected about $2.3 trillion in federal tax payments, and paid about $467 billion in refunds to taxpayers. Further, the size and complexity of IRS add unique operational challenges. IRS employs over 100,000 people in its Washington, D.C., headquarters and over 700 offices in all 50 states and U.S. territories and in some U.S. embassies and consulates. To manage its data and information, the agency operates three enterprise computing centers located in Detroit, Michigan; Martinsburg, West Virginia; and Memphis, Tennessee. IRS also collects and maintains a significant amount of personal and financial information on each American taxpayer. Protecting the confidentiality of this sensitive information is paramount; otherwise, taxpayers could be exposed to loss of privacy and to financial loss and damages resulting from identity theft or other financial crimes. The Commissioner of Internal Revenue has overall responsibility for ensuring the confidentiality, integrity, and availability of the information and information systems that support the agency and its operations. FISMA requires the Chief Information Officer (CIO) or comparable official at federal agencies to be responsible for developing and maintaining an information security program. IRS has delegated this responsibility to the Associate Chief Information Officer for Cybersecurity, who heads the Office of Cybersecurity. The Office of Cybersecurity’s mission is to protect taxpayer information and the IRS’s electronic system, services, and data from internal and external cyber security-related threats by implementing security practices in planning, implementation, risk management, and operations. IRS develops and publishes its information security policies, guidelines, standards, and procedures in the Internal Revenue Manual and other documents in order for IRS divisions and offices to carry out their respective responsibilities in information security. In October 2010, the Treasury Inspector General for Tax Administration (TIGTA) stated that security, including computer security, was the top priority in its list of top 10 management challenges for IRS in fiscal year 2011. Although IRS has made progress in correcting information security weaknesses that we have reported previously, many weaknesses have not been corrected and we identified many new weaknesses during fiscal year 2010. Specifically, 65 out of 88 previously reported weaknesses—about 74 percent—have not yet been corrected. In addition, we identified 37 new weaknesses. These weaknesses relate to access controls, configuration management, and segregation of duties. 
Weaknesses in these areas increase the likelihood of errors in financial data that result in misstatement and expose sensitive information and systems to unauthorized use, disclosure, modification, and loss. An underlying reason for these weaknesses—both old and new—is that IRS has not yet fully implemented key components of a comprehensive information security program. These weaknesses continue to jeopardize the confidentiality, integrity, and availability of the financial and sensitive taxpayer information processed by IRS’s systems and, considered collectively, are the basis of our determination that IRS had a material weakness in internal control over its financial reporting related to information security in fiscal year 2010. A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access. Organizations accomplish this objective by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Access controls include those related to user identification and authentication, authorization, cryptography, audit and monitoring, and physical security. However, IRS did not fully implement effective controls in these areas. Without adequate access controls, unauthorized individuals may be able to log in, access sensitive information, and make undetected changes or deletions for malicious purposes or personal gain. In addition, authorized individuals may be able to intentionally or unintentionally add, modify, or delete data to which they should not have been given access. A computer system needs to be able to identify and authenticate each user so that activities on the system can be linked and traced to a specific individual. An organization does this by assigning a unique user account to each user; in so doing, the system is able to distinguish one user from another—a process called identification. The system also needs to establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. The combination of identification and authentication—such as user account/password combinations—provides the basis for establishing individual accountability and for controlling access to the system. The Internal Revenue Manual requires the use of a strong password for authentication (defined as a minimum of eight characters, containing at least one numeric or special character, and a mixture of at least one uppercase and one lowercase letter). Furthermore, the Internal Revenue Manual states that database account passwords are not to be reused within 10 password changes and that the password grace time for a database—the number of days an individual has to change his or her password after it expires—should be set to 10. IRS properly configured password complexity on its servers used to manage access to network resources. In addition, IRS made progress in correcting a previously identified weakness by restricting remote login access. However, IRS did not consistently implement strong authentication controls on certain systems, as required by the Internal Revenue Manual. 
For example: Databases that support IRS administrative accounting and procurement systems had a certain password control set to “null.” This password control verifies certain password settings, such as password complexity and minimum password length, to ensure the user’s password complies with IRS policy. When this control is configured to “null,” no password verifications are performed. Seventeen of the 90 network devices we reviewed were configured with a password length of 6 characters. Databases that support IRS’s administrative accounting and procurement systems contained several password resource values that were not set to the settings required by IRS policy. For example, the password reuse and password grace time values were set to “unlimited.” As a result of these weaknesses, increased risk exists that an individual with malicious intentions could gain inappropriate access to these sensitive IRS applications and data, and potentially use that access to attempt to compromise other IRS systems. Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. A key component of granting or denying an individual access rights is the concept of “least privilege.” Least privilege, which is a basic principle for securing computer resources and information, means that a user is granted only those access rights and permissions needed to perform official duties. To restrict legitimate users’ access to only those programs and files needed to do their work, organizations assign access rights and permissions to users. These “user rights” are allowable actions that can be assigned to one user or to a group of users. File and directory permissions are rules that regulate which users can access a particular file or directory and the extent of that access. To avoid unintentionally authorizing a user access to sensitive files and directories, an organization should give careful consideration to its assignment of rights and permissions. IRS policy states that access control measures should be implemented that are based on least privilege and that protect information from unauthorized alteration, loss, unavailability, or disclosure. Additionally, the Internal Revenue Manual requires that the guest account be disabled to prevent any user from being authenticated as a guest. Although IRS had taken steps to control access to systems, it continued to permit excessive access. For example, IRS had corrected a previously identified weakness by limiting access to certain key financial documents used for input into the administrative accounting system. However, it continued to permit excessive access to several systems by granting rights and permissions that gave users more access than they needed to perform their assigned functions. For example, IRS granted excessive privileges to a database account on the online system used to support and manage its computer access request, approval, and review process. In addition, the agency allowed some individuals to manually enter commands that would permit them to bypass the application programs intended to be used to access the data. Also, all database users had unnecessary execute permissions on several sensitive database packages that allowed them to manipulate data and gain access to sensitive files and directories on IRS’s access authorization, administrative accounting, electronic tax payment, and procurement systems. 
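The password thresholds cited earlier in this section lend themselves to simple automated checks. The following sketch, written in Python as our illustration rather than any tool IRS uses, validates a candidate password against the Internal Revenue Manual rules described above and flags database profile settings, such as a "null" verification function or "unlimited" reuse and grace-time values, that fall short of policy; all profile names and values are hypothetical.

```python
import re

# Internal Revenue Manual thresholds cited in this report (assumed fixed here for illustration).
MIN_LENGTH = 8
MIN_REUSE_CHANGES = 10    # passwords may not be reused within 10 changes
MAX_GRACE_TIME_DAYS = 10  # days allowed to change an expired password

def password_meets_policy(password: str) -> bool:
    """Return True if a password satisfies the strong-password rules described above."""
    return (
        len(password) >= MIN_LENGTH
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[\d\W]", password) is not None  # at least one numeric or special character
    )

def audit_profile(name: str, settings: dict) -> list:
    """Flag profile settings that fall short of policy; 'null'/'unlimited' values never pass."""
    findings = []
    if settings.get("password_verify_function") in (None, "null"):
        findings.append(f"{name}: no password verification is performed")
    if settings.get("min_length", 0) < MIN_LENGTH:
        findings.append(f"{name}: minimum length {settings.get('min_length')} is below {MIN_LENGTH}")
    if settings.get("reuse_max") in ("unlimited", None) or settings.get("reuse_max", 0) < MIN_REUSE_CHANGES:
        findings.append(f"{name}: password reuse setting does not meet the {MIN_REUSE_CHANGES}-change rule")
    if settings.get("grace_time_days") in ("unlimited", None) or settings.get("grace_time_days", 0) > MAX_GRACE_TIME_DAYS:
        findings.append(f"{name}: grace time exceeds {MAX_GRACE_TIME_DAYS} days")
    return findings

# Hypothetical profiles resembling the conditions described in this report.
profiles = {
    "accounting_db": {"password_verify_function": "null", "min_length": 6,
                      "reuse_max": "unlimited", "grace_time_days": "unlimited"},
    "compliant_db": {"password_verify_function": "verify_func", "min_length": 8,
                     "reuse_max": 10, "grace_time_days": 10},
}

for profile_name, profile in profiles.items():
    for finding in audit_profile(profile_name, profile):
        print(finding)
print("Pa$$word1 meets policy:", password_meets_policy("Pa$$word1"))
```

Run against the hypothetical "accounting_db" profile, the check reports each of the shortfalls noted in the findings above; the point is simply that such settings can be verified mechanically against stated policy.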
Furthermore, while IRS made progress in correcting a previously identified weakness by disabling the guest account on some SQL servers, IRS had not disabled the SQL server guest account on its real property management system, increasing the risk that unauthorized users could use this account to gain system access. These excessive access privileges can provide opportunities for individuals to circumvent security controls. Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption, which is used to transform plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. According to IRS policy, the use of insecure protocols should be restricted because their widespread use can allow passwords and other sensitive data to be transmitted across the agency’s internal network unencrypted. Although IRS discontinued the use of unencrypted protocols on the servers supporting the procurement system, its network devices were configured to use protocols that allowed unencrypted transmission of sensitive data. For example, 37 of the 90 network devices we reviewed and a server supporting IRS’s tax payment system used unencrypted protocols to transmit sensitive information. In addition, IRS had not corrected previously identified weaknesses, such as weak encryption controls over user login to its administrative accounting system and transmission of unencrypted mainframe administrator login information across its network. By not encrypting sensitive data, IRS is at increased risk that an unauthorized individual could view and then use the data to gain unwarranted access to its systems or sensitive information. To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine what, when, and by whom specific actions have been taken on a system. Organizations accomplish this by implementing system or security software that provides an audit trail—a log of system activity—that they can use to determine the source of a transaction or attempted transaction and to monitor users’ activities. The way in which organizations configure system or security software determines the nature and extent of information that can be provided by the audit trail. To be effective, organizations should configure their software to collect and maintain audit trails that are sufficient to track security-relevant events. The Internal Revenue Manual states that IRS should enable and configure audit logging on all systems to aid in the detection of security violations, performance problems, and flaws in applications. Additionally, IRS policy states that security controls in information systems shall be monitored on an ongoing basis. IRS is currently using a commercial off-the-shelf audit trail solution that allows the agency to review audit log reports and analyze audit data. In addition, IRS has established the Enterprise Security Audit Trails Project Management Office, which is responsible for managing all enterprise audit initiatives and identifying and overseeing deployment and transition of various audit trail solutions. Despite these steps forward, IRS did not enable certain auditing features on three systems we reviewed. 
For example, IRS did not enable security event auditing or system privilege auditing features on databases that support its access authorization, administrative accounting, and procurement systems. In addition, IRS had not corrected a previously identified weakness in which certain servers were not configured to ensure sufficient audit trails. As a result, IRS’s ability to establish individual accountability, monitor compliance with security policies, and investigate security violations was limited. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms in which they are housed and periodically reviewing the access granted, in order to ensure that access continues to be appropriate. At IRS, physical access control measures, such as physical access cards that are used to permit or deny access to certain areas of a facility, are vital to safeguarding its facilities, computing resources, and information from internal and external threats. The Internal Revenue Manual requires access controls to protect employees and contractors, information systems, and the facilities in which they are located. The policy also requires that entry to restricted areas be limited to only those who need it to perform their job duties. It also requires department managers of restricted areas to review, validate, sign, and date the authorized access list for restricted areas on a monthly basis and then forward the list to the physical security office for review of employee access. Although IRS had implemented numerous physical security controls, certain controls were not working as intended, and in other cases the agency had not consistently applied its policy. IRS has a dedicated guard force at each computing center visitor entrance. These guards screen every visitor who enters these facilities. The agency had also corrected a previously identified weakness by consistently reviewing the images displayed on x-ray machines while screening employees, visitors, and contractors entering restricted areas. However, visitor physical access cards to restricted areas at one computing center provided unauthorized access to other restricted areas within the center—a weakness previously reported in 2010. In addition, IRS had not consistently applied its processes for reviewing access to restricted areas within its computing centers. For example, effective procedures were not in place at two of the three computing centers to ensure that individuals with an ongoing need to access restricted areas within the center were reviewed regularly in order to assess whether the access was warranted for them to perform their jobs. Although one computing center regularly reviewed the visitor access list, it did not review the list of individuals who had ongoing access. The other center reviewed access based only on the number of times in a given week an individual entered certain areas within the center, rather than on the individual’s need for access to perform job duties. Further, at the third computing center, IRS was unable to provide evidence that the physical security office had addressed a prior recommendation to remove employee access to restricted areas when a manager indicated access was no longer needed. 
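The monthly validation requirement described above lends itself to a simple cross-check. The sketch below is our construction for illustration only, not an IRS procedure: it compares hypothetical badge holders with ongoing access to restricted areas against a manager-validated authorized list and flags both unvalidated access and overdue reviews; every badge number, area name, and date is invented for the example.

```python
from datetime import date, timedelta

# Hypothetical data for illustration only.
badge_holders = {  # badge ID -> restricted areas the card currently opens
    "B1001": {"computer room", "tape library"},
    "B1002": {"computer room"},
    "B1003": {"computer room"},
}
validated_access = {  # manager-reviewed authorized list for each restricted area
    "computer room": {"B1001", "B1002"},
    "tape library": {"B1001"},
}
last_review = {"computer room": date.today() - timedelta(days=20),
               "tape library": date.today() - timedelta(days=75)}

REVIEW_INTERVAL_DAYS = 31  # monthly review requirement described in the report

def unvalidated_access():
    """Yield (badge, area) pairs where a card opens a restricted area without being on the validated list."""
    for badge, areas in badge_holders.items():
        for area in areas:
            if badge not in validated_access.get(area, set()):
                yield badge, area

def overdue_reviews(today=None):
    """Yield restricted areas whose authorized-access list has not been reviewed within the interval."""
    today = today or date.today()
    for area, reviewed_on in last_review.items():
        if (today - reviewed_on).days > REVIEW_INTERVAL_DAYS:
            yield area, reviewed_on

for badge, area in unvalidated_access():
    print(f"{badge} has access to '{area}' but is not on the validated list")
for area, reviewed_on in overdue_reviews():
    print(f"review of '{area}' access list is overdue (last reviewed {reviewed_on})")
```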
Because employees and visitors may have unnecessary access to restricted areas, IRS has reduced assurance that its computing resources and sensitive information are adequately protected from unauthorized access. In addition to access controls, other important controls should be in place to ensure the confidentiality, integrity, and availability of an organization’s information. These controls include policies, procedures, and techniques for securely configuring information systems, and segregating incompatible duties. However, IRS has weaknesses in these areas, thus increasing its risk of unauthorized use, disclosure, modification, or loss of information and information systems. Configuration management involves, among other things, (1) verifying the correctness of the security settings in the operating systems, applications, or computing and network devices and (2) obtaining reasonable assurance that systems are configured and operating securely and as intended. Patch management, a component of configuration management, is an important element in mitigating the risks associated with software vulnerabilities. When a software vulnerability is discovered, the software vendor may develop and distribute a patch or work-around to mitigate the vulnerability. Without the patch, an attacker can exploit a software vulnerability to read, modify, or delete sensitive information; disrupt operations; or launch attacks against systems at another organization. Outdated and unsupported software is more vulnerable to an attack and exploitation because vendors no longer provide updates, including security updates. Accordingly, the Internal Revenue Manual states that IRS will manage systems to reduce vulnerabilities by promptly installing patches. In addition, the manual states that system administrators will ensure the operating system version is a version for which the vendor still offers standardized technical support. Although IRS made progress in updating certain systems, it did not always apply critical patches to its databases that support two financial applications. For example, the agency made major upgrades to key servers supporting the administrative accounting system; however, databases supporting this and another administrative accounting application had not been updated with the latest critical patches. In addition, patches had not been applied since 2006 for at least four other database installations on servers supporting the agency’s general ledger system for tax-related activities. IRS had also not corrected previously identified weaknesses related to outdated and unsupported software on domain name servers. As a result, the agency has limited assurance that its systems are protected from known vulnerabilities. Segregation of duties refers to the policies, procedures, and organizational structures that help ensure that no single individual can independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records. Often, organizations achieve segregation of duties by dividing responsibilities among two or more individuals or organizational groups. This diminishes the likelihood that errors and wrongful acts will go undetected, because the activities of one individual or group will serve as a check on the activities of the other. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. 
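Because the underlying check is simple, a short sketch can show the kind of cross-check a reviewer might automate when testing for incompatible duties. The account names and role labels below are assumptions made for illustration, and the incompatible pairing mirrors the database administrator and system administrator rule discussed immediately below; this is not an IRS tool.

```python
from collections import defaultdict

# Hypothetical account-to-role assignments for illustration only.
role_grants = [
    ("acct_smith", "database_administrator"),
    ("acct_smith", "system_administrator"),   # incompatible combination
    ("acct_jones", "database_administrator"),
    ("acct_patel", "application_user"),
]

# Pairs of roles that the same person should not hold at once.
INCOMPATIBLE_PAIRS = [
    {"database_administrator", "system_administrator"},
]

def segregation_violations(grants):
    """Return accounts whose combined roles include an incompatible pair."""
    roles_by_account = defaultdict(set)
    for account, role in grants:
        roles_by_account[account].add(role)
    violations = []
    for account, roles in roles_by_account.items():
        for pair in INCOMPATIBLE_PAIRS:
            if pair <= roles:  # both roles of the pair are present
                violations.append((account, sorted(pair)))
    return violations

for account, pair in segregation_violations(role_grants):
    print(f"{account} holds incompatible roles: {', '.join(pair)}")
```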
The Internal Revenue Manual requires that IRS divide and separate duties and responsibilities of incompatible functions among different individuals so that no individual shall have all of the necessary authority and system access to disrupt or corrupt a critical security process. Furthermore, the manual specifies that the primary security role of any database administrator is to administer and maintain database repositories for proper use by authorized individuals and that database administrators shall not have system administrator access rights. IRS did not always appropriately segregate certain duties. Specifically, on its general ledger system for tax-related activities, IRS granted certain database administration privileges to at least 25 database users with no database administration duties. These privileges allowed them to grant other users access to tables within the database, including the ability to add, change, or delete important accounting data. In addition, IRS had not corrected a previously identified weakness related to permitting an individual to perform the roles and responsibilities of both a database administrator and a system administrator for the procurement system. By not properly segregating incompatible duties in these financial management systems, IRS reduces the effectiveness of its internal controls over financial management and increases the likelihood of errors and misstatements. Additionally, these weaknesses increase the potential for unauthorized use or disclosure of sensitive information or disruption of systems. An underlying reason for the information security weaknesses in IRS’s financial and tax processing systems is that it has not yet fully implemented key components of its comprehensive information security program. FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes: periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; plans for providing adequate information security for networks, facilities, and systems; security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in its information security policies, procedures, or practices; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. IRS has made progress in developing and documenting elements of its information security program. 
To bolster security over its networks and systems and to address its information security weaknesses, IRS has developed various initiatives. For example, IRS has created a detailed roadmap to guide its efforts in targeting critical weaknesses. The agency is in the process of implementing this comprehensive plan to mitigate numerous information security weaknesses, such as those associated with access controls, audit trails, contingency planning, and training. According to the plan, the last of these weaknesses is scheduled to be resolved in the first quarter of fiscal year 2014. In addition, IRS has developed metrics to measure success in complying with guides, policies, and standards in the areas of inventory management, configuration management, access authorizations, auditing, and change management. As long as these efforts remain flexible to address changing technology and evolving threats, include our findings and those of TIGTA in measuring success, and are fully and effectively implemented, they should improve the agency’s overall information security posture. Although the agency has a framework in place for its comprehensive information security program, as demonstrated below, key components of IRS’s program have not yet been fully implemented. According to the National Institute of Standards and Technology (NIST), risk is determined by identifying potential threats to the organization and vulnerabilities in its systems, determining the likelihood that a particular threat may exploit vulnerabilities, and assessing the resulting impact on the organization’s mission, including the effect on sensitive and critical systems and data. Identifying and assessing information security risks are essential to determining what controls are required. Moreover, by increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted in order to help ensure that the policies and controls operate as intended. Consistent with NIST guidance, IRS requires its risk assessment process to detail the residual risk assessed, as well as potential threats, and to recommend corrective actions for reducing or eliminating the vulnerabilities identified. IRS policy also requires system risk assessments to be updated a minimum of every 3 years or whenever there is a significant change to the system, the facilities where the system resides, or other conditions that may affect the security or status of system accreditation. Although IRS had implemented a risk assessment process, which includes, among other things, threat and vulnerability identification, impact analysis, risk determination, and recommended corrective actions, certain risks may not have been identified. For the six systems that we reviewed, five of the risk assessments were up-to-date, documented, and formally approved by IRS management. However, IRS’s general ledger system for tax-related activities was moved from one mainframe environment to another at a different facility; yet, the risk assessment had not been updated. Further, IRS’s risk assessment of the mainframe environment supporting its general ledger for tax-related activities and tax processing applications was not comprehensive. Specifically, the assessment did not consider all potential threats and vulnerabilities for portions of the system; IRS considered the test and development environment of the system as out of scope although these portions could affect the system’s security. 
As a result, potential risks to this system may not be fully known and associated controls may not be in place. Another key element of an effective information security program is to develop, document, and implement risk-based policies, procedures, and technical standards that govern security over an agency’s computing environment. If properly developed and implemented, policies and procedures should help reduce the risk associated with unauthorized access or disruption of services. In addition, technical security standards can provide consistent implementation guidance for each computing environment. Developing, documenting, and implementing security policies and standards are the primary mechanisms by which management communicates its views and requirements; these policies also serve as the basis for adopting specific procedures and technical controls. In addition, agencies need to take the actions necessary to effectively implement or execute these procedures and controls. Otherwise, agency systems and information will not receive the protection that the security policies and controls should provide. IRS had generally developed, documented, and approved information security policies and procedures, and had corrected a previously identified weakness by enhancing its policies and procedures related to password age and configuration settings to comply with federal guidance. However, some policies were inconsistent and some lacked specifics about administering, managing, and monitoring certain controls. For example, the agency’s overall policy on password management requires that systems be configured such that passwords cannot be reused within 24 password changes; another policy specifies 3 password changes in one section and 10 in another. Inconsistent policies can lead to less stringent implementation of controls, such as those for password management. In addition, specific policy and procedures for a key access control were lacking. Although IRS relies on system-managed storage as a key access control to prevent unauthorized access between logical partitions that have different mission support functions and different security requirements, the agency did not document in its policy or related procedures how this control environment should be administered, managed, and monitored. As a result, IRS does not have processes in place to verify that system-managed storage controls are implemented, administered, and monitored in a manner that provides necessary access controls. Further, in an August 2010 report, TIGTA reported that IRS had not documented all IT security roles and responsibilities in the Internal Revenue Manual and had not developed day-to-day IT security procedures and guidelines. Without having fully documented, approved, and implemented policies and procedures, IRS cannot ensure that its information security requirements are applied consistently across the agency. An objective of system security planning is to improve the protection of information technology resources. A system security plan provides an overview of the system’s security requirements and describes the controls that are in place or planned to meet those requirements. OMB Circular A-130 requires that agencies develop system security plans for major applications and general support systems, and that these plans address policies and procedures for providing management, operational, and technical controls. 
Furthermore, IRS policy requires that security plans describing the security controls in place or planned for its information systems be developed, documented, implemented, reviewed annually, and updated a minimum of every 3 years or whenever there is a significant change to the system. Although IRS documented its management, operational, and technical controls in system security plans for the six systems we reviewed, one plan did not reflect the current operating environment. IRS used OMB Circular A-130 as guidance to develop system security plans for the respective systems. In addition, IRS documented the review of its system security plans through certification and accreditation memos, which provide IRS with the authorization to operate systems. These memos were formally approved by key officials. Further, all the plans reviewed were within the 3-year time frame. However, one application’s system security plan did not describe controls in place in the current environment. IRS had moved this application from one mainframe to another, but the plan still reflected controls from the previous environment. Without a specific and accurate security plan for this key financial system, IRS cannot ensure that appropriate controls are in place to protect the critical information this system stores. Individuals can be one of the weakest links in securing systems and networks. Therefore, a very important component of an information security program is providing sufficient training so that users understand system security risks and their own role in implementing related policies and controls to mitigate those risks. IRS policy requires that personnel performing information technology security duties meet minimum continuing professional education hours in accordance with their roles. Individuals performing security roles are required by IRS to have 12, 8, or 4 hours of specialized training per year, depending on their specific role. IRS had processes in place for providing employees with security awareness and specialized training. All of the employees with specific security-related roles and the newly hired employees whom we reviewed met the required minimum security awareness and specialized training hours. Another key element of an information security program is to test and evaluate policies, procedures, and controls to determine whether they are effective and operating as intended. This type of oversight is a fundamental element because it demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results improve the security program. FISMA requires that the frequency of tests and evaluations be based on risks and occur no less than annually. The Internal Revenue Manual also requires periodic testing and evaluation of the effectiveness of information security policies and procedures. Although IRS has processes in place intended to monitor, test, and evaluate its security policies and procedures, these processes were not always effective. For example, IRS did not: Detect many of the readily identifiable vulnerabilities we are reporting. We previously recommended that IRS expand the scope for testing and evaluating controls to ensure more comprehensive testing. 
Perform comprehensive testing within the past year for one of its key network components that it considered to be a high-risk system. Test application security over its general ledger system for tax-related activities in its current production environment. This general ledger system was moved from one mainframe environment to another at a different facility; yet, the test and evaluation had not been updated to reflect the current operating environment. We tested access controls in the current environment and identified weaknesses in the general ledger system’s controls that compromised segregation of duties and jeopardized the integrity of the application’s data. Comprehensively test security controls over the mainframe environment supporting its general ledger for tax-related activities and tax processing applications. For example, the test was limited to a portion of the operating environment and, therefore, did not test all of the relevant controls. In addition, in an August 2010 report, TIGTA reported that IRS did not properly conduct compliance assessments to test the implementation of day-to-day IT procedures. Because of the lack of comprehensive testing, IRS may not be fully aware of vulnerabilities that could adversely affect critical applications and data. A remedial action plan is a key component of an agency’s information security program as described in FISMA. Such a plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. In its annual FISMA guidance to agencies, OMB requires agency remedial action plans, also known as plans of action and milestones, to include the resources necessary to correct identified weaknesses. According to the Internal Revenue Manual, the agency should document weaknesses found during security assessments, as well as planned, implemented, and evaluated remedial actions to correct any deficiencies. IRS policy further requires that IRS track the resolution status of all weaknesses and verify that each weakness is corrected. IRS had a process in place for evaluating and tracking remedial actions. The agency developed remedial action plans for the systems that we reviewed and implemented a remedial action process to address deficiencies in its information security policies, procedures, and practices. These plans documented weaknesses and included planned actions that were tracked by IRS. In addition, during fiscal year 2010, IRS made progress toward correcting previously reported information security weaknesses, correcting or mitigating 23 of the 88 previously identified weaknesses that were unresolved at the end of our prior audit. However, at the time of our review, 65 of 88—about 74 percent—of the previously reported weaknesses remained unresolved or unmitigated. According to IRS officials, the agency is continuing actions toward correcting or mitigating previously reported weaknesses. However, the agency’s process for verifying whether an action had corrected or mitigated the weakness was not working as intended. The agency informed us that it had corrected 39 of the 88 previously reported weaknesses, but we determined that IRS had not fully implemented the remedial actions for 16 of the 39 weaknesses that it considered corrected. We previously recommended that IRS implement a revised verification process that ensures remedial actions are fully implemented. 
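To illustrate the verification gap described above, the following sketch, again our construction rather than an IRS process, walks a hypothetical plan of action and milestones and separates weaknesses reported as corrected but lacking verification evidence from those that remain open; the weakness identifiers and evidence references are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RemedialAction:
    weakness_id: str
    reported_corrected: bool
    verification_evidence: Optional[str]  # e.g., a retest reference; None if never verified

# Hypothetical plan-of-action-and-milestones entries for illustration only.
poam = [
    RemedialAction("W-2009-014", reported_corrected=True, verification_evidence="retest 2010-06"),
    RemedialAction("W-2009-031", reported_corrected=True, verification_evidence=None),
    RemedialAction("W-2010-007", reported_corrected=False, verification_evidence=None),
]

def unverified_closures(items):
    """Return weaknesses reported as corrected without evidence that the fix was retested."""
    return [i.weakness_id for i in items if i.reported_corrected and not i.verification_evidence]

def open_items(items):
    """Return weaknesses that remain open."""
    return [i.weakness_id for i in items if not i.reported_corrected]

print("Closed without verification:", unverified_closures(poam))
print("Still open:", open_items(poam))
```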
Until the agency takes additional steps to implement a more effective verification process, it will have limited assurance that weaknesses are being properly mitigated or corrected and that controls are operating effectively. Continuity of operations planning, which includes contingency planning, is critical to protecting sensitive information. To ensure that mission-critical operations continue, organizations should be able to detect, mitigate, and recover from service disruptions while preserving access to vital information. Organizations should prepare plans that are clearly documented, communicated to staff who could be affected, and updated to reflect current operations. In addition, testing contingency plans is essential in determining whether the plans will function as intended in an emergency situation. FISMA requires that plans and procedures be in place to ensure continuity of operations for agency information systems. IRS policy states that individuals with responsibility for disaster recovery should be provided with copies of or access to agency disaster recovery plans. IRS had appropriately documented and communicated the four contingency plans we reviewed. In addition, IRS had resolved prior weaknesses by updating disaster recovery and business resumption plans to include UNIX and Windows mission-critical systems and ensuring the availability of a disaster recovery keystroke manual for its administrative accounting system. Although IRS continues to make progress in correcting or mitigating previously reported weaknesses, implementing controls over key financial systems, and developing and documenting a framework for its comprehensive information security program, information security weaknesses—both old and new—continue to jeopardize the confidentiality, integrity, and availability of IRS’s systems. An underlying reason for the information security weaknesses in IRS’s financial and tax processing systems is that it has not yet fully implemented key components of its comprehensive information security program. The financial and taxpayer information on IRS systems will remain particularly vulnerable to insider threats until the agency (1) addresses newly identified and previously reported weaknesses pertaining to identification and authentication, authorization, cryptography, audit and monitoring, physical security, configuration management, and segregation of duties; and (2) fully implements key components of a comprehensive information security program that ensures risk assessments are conducted in the current operating environment; policies and procedures are appropriately specific and effectively implemented; security plans are written to reflect the current operating environment; processes intended to test, monitor, and evaluate internal controls are appropriately detecting vulnerabilities; comprehensive testing is conducted on key networks on an at least annual basis; and tests and evaluations are conducted in the current operating environment. Until IRS takes these further steps, financial and taxpayer information are at increased risk of unauthorized disclosure, modification, or destruction; financial data is at increased risk of errors that result in misstatement; and the agency’s management decisions may be based on unreliable or inaccurate financial information. 
These weaknesses, considered collectively, were the basis of our determination that IRS had a material weakness in internal control over financial reporting related to information security in fiscal year 2010. In addition to implementing our previous recommendations, we are recommending that the Commissioner of Internal Revenue take the following eight actions to fully implement key components of the IRS comprehensive information security program: Update risk assessments whenever there is a significant change to the system, the facilities where the system resides, or other conditions that may affect the security or status of system accreditation. Update the risk assessment for the mainframe environment supporting the general ledger for tax-related activities and tax processing applications to include all portions of the environment that could affect security. Update policies and procedures pertaining to password controls to ensure they are consistent. Document and implement policy and procedures for how system-managed storage as an access control mechanism should be administered, managed, and monitored. Update the application security plan to describe controls in place in its current mainframe operating environment. Perform comprehensive testing of the key network component considered to be a high-risk system, at least annually. Test the application security for the general ledger system for tax-related activities in its current operating environment. Perform comprehensive testing of security controls over the mainframe environment to include all portions of the operating environment. We are also making 32 detailed recommendations in a separate report with limited distribution. These recommendations consist of actions to be taken to correct specific information security weaknesses related to identification and authentication, authorization, cryptography, audit and monitoring, physical security, configuration management, and segregation of duties identified during this audit. In providing written comments (reprinted in app. II) on a draft of this report, the Commissioner of Internal Revenue stated that the security and privacy of taxpayer and financial information is of the utmost importance to the agency and that he appreciated that the draft report recognized the progress IRS has made in improving its information security program and that numerous initiatives are underway. He also noted that IRS is committed to securing its computer environment and will continually evaluate processes, promote user awareness, and apply innovative ideas to increase compliance. The Commissioner stated that IRS is steadily progressing toward eliminating the material weakness in information security by establishing enterprise repeatable processes, which are overseen by an internal team that performs self-inspections, identifies and mitigates risk, and provides executive governance over corrective actions. Further, he stated that IRS will provide a detailed corrective action plan addressing each of our recommendations. This report contains recommendations to you. As you know, 31 U.S.C. 
§ 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days from the date of the report and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. Because agency personnel serve as the primary source of information on the status of recommendations, we request that the agency also provide us with a copy of the agency’s statement of action to serve as preliminary information on the status of open recommendations. We are sending copies of this report to interested congressional committees, the Secretary of the Treasury, and the Treasury Inspector General for Tax Administration. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact Nancy R. Kingsbury at (202) 512-2700 or Gregory C. Wilshusen at (202) 512-6244. We can also be reached by e-mail at kingsburyn@gao.gov and wilshuseng@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objective of our review was to determine whether controls over key financial and tax processing systems were effective in protecting the confidentiality, integrity, and availability of financial and sensitive taxpayer information at the Internal Revenue Service (IRS). To do this, we examined IRS information security policies, plans, and procedures; tested controls over key financial applications; and interviewed key agency officials in order to (1) assess the effectiveness of corrective actions taken by IRS to address weaknesses we previously reported and (2) determine whether any additional weaknesses existed. This work was performed in connection with our audit of IRS’s fiscal year 2010 and 2009 financial statements for the purpose of supporting our opinion on internal control over the preparation of those statements. To determine whether controls over key financial and tax processing systems were effective, we considered the results of our evaluation of IRS’s actions to mitigate previously reported weaknesses, and performed new audit work at the three enterprise computing centers located in Detroit, Michigan; Martinsburg, West Virginia; and Memphis, Tennessee, as well as an IRS facility in New Carrollton, Maryland. We concentrated our evaluation on threats emanating from sources internal to IRS’s computer networks. Considering systems that directly or indirectly support the processing of material transactions that are reflected in the agency’s financial statements, we focused on eight critical applications/systems as well as the general support systems. Our evaluation was based on our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information; National Institute of Standards and Technology guidance; and IRS policies and procedures. 
We evaluated controls by reviewing the complexity and expiration of password settings to determine if password management had been enforced; analyzing users' system access to determine whether they had been granted more permissions than necessary to perform their assigned functions; reviewing configuration files for servers and network devices to determine if encryption was being used for transmitting data; assessing configuration settings used to audit security-relevant events, and discussing and observing monitoring efforts with IRS officials; observing and analyzing physical access controls to determine if computer facilities and resources had been protected; inspecting key servers to determine whether critical patches had been installed or software was up-to-date; and examining user access and responsibilities to determine whether incompatible functions had been segregated among different individuals. Using the requirements in the Federal Information Security Management Act that establish elements for an effective agencywide information security program, we reviewed and evaluated IRS's implementation of its security program by analyzing IRS's risk assessments for six IRS financial and tax processing systems that are key to supporting the agency's financial statements, to determine whether risks and threats had been documented; comparing IRS's policies, procedures, practices, and standards to actions taken by IRS personnel to determine whether sufficient guidance had been provided to personnel responsible for securing information and information systems; analyzing security plans for six systems to determine if management, operational, and technical controls had been documented and if security plans had been updated; verifying whether new employees had received system security orientation within the first 10 working days; verifying whether employees with security-related responsibilities had received specialized training within the year; analyzing test plans and test results for six IRS systems to determine whether management, operational, and technical controls had been tested at least annually; reviewing IRS's system remedial action plans to determine if they were complete; reviewing IRS's actions to correct weaknesses to determine if they had effectively mitigated or resolved the vulnerability or control deficiency; reviewing system backup and recovery procedures to determine if they had adequately provided for recovery and reconstitution to the system's original state after a disruption or failure; and examining contingency plans for six IRS systems to determine whether those plans had been tested or updated. In addition, we discussed with management officials and key security representatives, such as those from IRS's Computer Security Incident Response Center and Office of Cybersecurity, as well as from the three computing centers, whether information security controls were in place, adequately designed, and operating effectively. In addition to the individuals named above, David Hayes (assistant director), Jeffrey Knott (assistant director), Angela Bell, Mark Canter, Sharhonda Deloach, Nancy Glover, Nicole Jarvis, George Kovachick, Sylvia Shanks, Eugene Stevens, Michael Stevens, and Daniel Swartz made key contributions to this report.
The Internal Revenue Service (IRS) has a demanding responsibility in collecting taxes, processing tax returns, and enforcing the nation's tax laws. It relies extensively on computerized systems to support its financial and mission-related operations and on information security controls to protect financial and sensitive taxpayer information that resides on those systems. As part of its audit of IRS's fiscal years 2010 and 2009 financial statements, GAO assessed whether controls over key financial and tax processing systems are effective in ensuring the confidentiality, integrity, and availability of financial and sensitive taxpayer information. To do this, GAO examined IRS information security policies, plans, and procedures; tested controls over key financial applications; and interviewed key agency officials at four sites. Although IRS made progress in correcting previously reported information security weaknesses, control weaknesses over key financial and tax processing systems continue to jeopardize the confidentiality, integrity, and availability of financial and sensitive taxpayer information. Specifically, IRS did not consistently implement controls that were intended to prevent, limit, and detect unauthorized access to its financial systems and information. For example, the agency did not sufficiently (1) restrict users' access to databases to only the access needed to perform their jobs; (2) secure the system it uses to support and manage its computer access request, approval, and review processes; (3) update database software residing on servers that support its general ledger system; and (4) enable certain auditing features on databases supporting several key systems. In addition, 65 of the 88 previously reported weaknesses (about 74 percent) remain unresolved or unmitigated. An underlying reason for these weaknesses is that IRS has not yet fully implemented key components of its comprehensive information security program. Although IRS has processes in place intended to monitor and assess its internal controls, these processes were not always effective. For example, IRS's testing did not detect many of the vulnerabilities GAO identified during this audit and did not assess a key application in its current environment. Further, the agency had not effectively validated corrective actions reported to resolve previously identified weaknesses. Although IRS had a process in place for verifying whether each weakness had been corrected, this process was not always working as intended. For example, the agency reported that it had resolved 39 of the 88 previously identified weaknesses; however, 16 of the 39 weaknesses had not been mitigated. IRS has various initiatives underway to bolster security over its networks and systems; however, until the agency corrects the identified weaknesses, its financial systems and information remain unnecessarily vulnerable to insider threats, including errors or mistakes and fraudulent or malevolent acts by insiders. As a result, financial and taxpayer information are at increased risk of unauthorized disclosure, modification, or destruction; financial data are at increased risk of errors that result in misstatement; and the agency's management decisions may be based on unreliable or inaccurate financial information. These weaknesses, considered collectively, are the basis for GAO's determination that IRS had a material weakness in internal control over financial reporting related to information security in fiscal year 2010.
GAO recommends that IRS take eight actions to fully implement key components of its comprehensive information security program. In a separate report with limited distribution, GAO is recommending 32 specific actions for correcting newly identified control weaknesses. In commenting on a draft of this report, IRS agreed to develop a detailed corrective action plan to address each recommendation.
Exchanging data electronically is a common method of transferring information among federal, state, and local governments; private sector organizations; and nations around the world. As computers play an ever-increasing role in our society, more information is being exchanged regularly. Federal agencies now depend on electronic data exchanges to execute programs and facilitate commerce. For example, federal agencies routinely use data exchanges to transfer funds to contractors and grantees; collect data necessary to make eligibility determinations for veterans, Social Security, and Medicare benefits; gather data on program activities to determine if funds are being expended as intended and the expected outcomes are being achieved; and share weather information that is essential for air flight safety. To facilitate commerce, federal agencies regulate or provide oversight to organizations that use data exchanges extensively to process payments through the banking system; purchase or sell securities through stock exchanges and futures markets; and facilitate import and export shipments through ports of entry. We have reported on potential data exchange issues that could affect many of these activities (see the list of related products at the end of this report). An electronic data exchange is the transfer (sending or receiving) of a data set using electronic media. Electronic data exchanges can be made using various methods, including direct computer-to-computer exchanges over a dedicated network; direct exchanges over commercially available networks or the Internet; or exchanges of magnetic media such as computer tapes or disks. The information transferred in a data set often includes at least one date. Because many computer systems have been using a 2-digit year in the date format, the data exchanges have also used 2-digit years. Now that many formats are being changed to use 4 digits to correctly process dates beyond 1999, data exchanges using 2-digit year formats must also be changed to 4 digits, or bridges must be used to convert incoming 2-digit years to 4-digit years or outgoing 4-digit years to 2-digit years. These conversions generally involve the use of algorithms to distinguish the century (for example, 2-digit years less than 50 may be considered 2000 dates and 2-digit years of 50 or more may be considered 1900 dates). In addition to using bridges, filters may be needed to screen and identify incoming noncompliant data to prevent it from corrupting data in the receiving system. These conversions are not necessary if the data exchanges are designed to employ certain electronic data interchange standards (see appendix II for a glossary of data exchange standards used by some federal agencies). A data exchange standard defines the format of a specific data set for transmission. Some of these standards specify a 4-digit year format. Federal agencies often use exchanges that do not involve a standard format. Instead, the data exchanges consist of individual text files with a structure that is established by agreement between the exchange partners. Files using these formats are generally referred to as flat files. As part of their Year 2000 correction efforts, organizations must identify the date formats used in their data exchanges, develop a strategy for dealing with exchanges that do not use 4-digit year formats, and implement the strategy. These efforts generally involve the following steps. Assess information systems to identify data exchanges that are not Year 2000 compliant.
Contact the exchange partner and reach agreement on the date format to be used in the exchange. Determine if data bridges and filters are needed. Determine if validation processes are needed for incoming data. Set dates for testing and implementing new exchange formats. Develop and test bridges and filters to handle nonconforming data. Develop contingency plans and procedures for data exchanges and incorporate into overall agency contingency plans. Implement the validation process for incoming data. Test and implement new exchange formats. The testing and implementation of new data exchanges must be closely coordinated with exchange partners to be completed effectively. In addition to an agency testing its data exchange software, effective testing involves end-to-end testing—initiation of the exchange by the sending computer, transmission through intermediate communications software and hardware, and receipt and acceptance by receiving computer(s), thus completing the exchange process. Resolving data exchange issues will require significant efforts and costs according to federal and state officials. At an October 1997 summit, federal and state information technology officials estimated that about 20 percent of Year 2000 efforts will be directed toward correcting data exchange problems. This could be significant considering the magnitude of expected Year 2000 costs. According to OMB’s February 15, 1998, Year 2000 status reports of 24 federal agencies, the federal government’s Year 2000 costs are estimated to be about $4.7 billion. Based on estimates provided by states to NASIRE, the states’ Year 2000 costs are estimated to be about $5.0 billion. If Year 2000 data exchange problems are not corrected, the adverse impact could be severe. Federal agencies exchange data with thousands of external entities, including other federal agencies, state agencies, private organizations, and foreign governments and private organizations. If data exchanges do not function properly, data will not be exchanged between systems or invalid data could cause receiving computer systems to malfunction or produce inaccurate computations. For example, such failures could result in the Social Security Administration not being able to determine the eligibility of applicants or compute and pay benefits because it relies on data exchanges for eligibility information and payment processing. This could have a widespread impact on the public since the agency processes payments to more than 50 million beneficiaries each month, which in fiscal year 1997 totaled about $400 billion; National Highway Traffic Safety Administration not being able to provide states with information needed for driver registrations, which could result in licenses being issued to drivers with revoked or suspended licenses in other states; Department of Veterans Affairs not being able to determine correct benefits and make payments to eligible veterans; U.S. Coast Guard not receiving weather information necessary to plan search and rescue operations; and Nuclear Regulatory Commission not receiving information from nuclear reactors that is needed to trigger emergency response actions. The overall responsibility for tracking and overseeing actions by federal agencies to address Year 2000 issues rests with OMB and the President’s Council on Year 2000 Conversion that was established in February 1998. OMB has been tracking major federal agencies’ Year 2000 activities by requiring them to submit quarterly status reports. 
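Before turning to the status of these efforts, it may help to make the windowing and bridging approach described earlier concrete. The following is a minimal sketch, written in Python purely for illustration; the report does not prescribe any particular language, pivot value, or record layout, and the 6-digit YYMMDD field shown here is a hypothetical example of a flat-file date.

    # Illustrative sketch of a year-windowing "bridge," using the example
    # pivot of 50: 2-digit years below 50 are read as 2000s, 50 and above as 1900s.
    PIVOT = 50

    def widen_year(yy):
        # Interpret a 2-digit year as a 4-digit year using the pivot.
        if not 0 <= yy <= 99:
            raise ValueError("expected a 2-digit year in the range 00-99")
        return (2000 if yy < PIVOT else 1900) + yy

    def bridge_incoming_date(yymmdd):
        # Convert an incoming 6-digit YYMMDD value to an 8-digit CCYYMMDD value.
        return "%04d%s" % (widen_year(int(yymmdd[:2])), yymmdd[2:])

    def narrow_outgoing_date(ccyymmdd):
        # Convert an outgoing 8-digit CCYYMMDD value to YYMMDD for a partner
        # that still expects 2-digit years (an interim measure only).
        return ccyymmdd[2:]

    print(bridge_incoming_date("991231"))  # prints 19991231
    print(bridge_incoming_date("000301"))  # prints 20000301

As the sketch suggests, a bridge of this kind works only while every 2-digit year in an exchange falls inside the window implied by the chosen pivot, which is one reason the report describes bridges as an interim measure rather than a permanent correction.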
Efforts to address data exchange issues are in early stages. Federal and state coordinating organizations reached initial agreements in 1997 on the steps to address data exchange issues; however, many federal agencies and states have not yet finished assessing their data exchanges to determine if they are Year 2000 compliant. Further, little progress has been made in completing key steps such as reaching agreements with partners on exchange formats, developing and testing bridges and filters, and developing contingency plans. Federal and state coordinating organizations began to address Year 2000 data exchange problems in 1997. Initial agreements on steps to address data exchange issues were reached at a state/federal summit in October 1997 that was hosted by the State of Pennsylvania and sponsored by the federal Chief Information Officer Council (CIO Council) and NASIRE. At the summit, federal agency and state representatives agreed to establish a contiguous 4-digit year date as a default standard for exchanges. They also agreed that federal agencies will take the lead in providing information on exchanges with states, any planned date format changes, and timeframes for any changes. In addition, joint federal and state policy and working groups were established to continue the dialogue on exchange issues. To implement these agreements, OMB issued instructions in January 1998 for federal agencies to inventory all data exchanges with outside parties by February 1, 1998, and coordinate plans for transitioning to Year 2000 compliant data exchanges with exchange partners by March 1, 1998. OMB also set March 1999 as the target date to complete the data exchange corrections. In addition, for the February 15, 1998, quarterly reports, OMB required the federal agencies to describe the status of their efforts to inventory all data exchanges with outside entities and the method for assuring that those organizations will be or have been contacted, particularly state governments. However, OMB did not require the agencies to report their status in completing key steps for data exchanges, such as those listed earlier in this report. According to its Year 2000 Coordinator, NASIRE plans to continue implementing the agreements reached at the October 1997 summit through active participation in joint policy and working groups and by holding additional state/federal meetings on data exchange issues. These activities will supplement NASIRE's continuing efforts to provide states with access to information on vendors, software, and methodologies for resolving Year 2000 problems. The federal CIO Council's State Interagency Subgroup also plans to continue pursuing the agreements reached at the October 1997 summit through joint state and federal meetings on data exchange issues and by hosting a state/federal meeting in April 1998. The federal CIO Council also designated an official in the State Department to act as the focal point for international exchange issues. The designee plans to work through federal agencies that have international operations to increase our foreign data exchange partners' awareness of Year 2000 issues. For example, we were told that the State Department will add Year 2000 issues to bilateral and multilateral discussion agendas, such as the Summit of the Americas and the Asia-Pacific Economic Cooperation meetings. Twenty of the 42 federal agencies we surveyed reported having finished inventorying and assessing data exchanges for mission-critical systems as of the first quarter of 1998.
Eighteen agencies have not completed their assessments, and the status of one federal agency is not discernable because it was not able to provide information on its total number of exchanges and the number assessed. The remaining three federal agencies said they do not have external data exchanges. Federal agencies reported that they have a total of almost 500,000 data exchanges with other federal agencies, states, local governments, and the private sector for their mission-critical systems. Almost 90 percent of the exchanges were reported by the Federal Reserve and the Department of Housing and Urban Development (HUD), which reported having 316,862 and 133,567 exchanges, respectively. The Federal Reserve exchanges data with federal agencies and the private sector using software it provides to these entities. The Federal Reserve reported that it has assessed all of these exchanges. Similarly, HUD has exchanges with housing authorities, state agencies, and private sector organizations. HUD has determined that 92 percent of these exchanges are not Year 2000 compliant. The other agencies reported that their mission-critical systems have about 49,000 data exchanges with other federal agencies, states, local governments, and the private sector, as shown in figure 1. These agencies reported that they have assessed about 39,000, or about 80 percent, of the exchanges. (See appendix III for the status of assessments and other actions for each of the federal agencies.) Significant federal actions will be needed to address Year 2000 problems with data exchanges. Of the 39,000 exchanges that federal agencies said they assessed, they reported about 27 percent as not being Year 2000 compliant. Only six federal agencies told us that all their data exchanges are Year 2000 compliant, and these represent only 123 of the approximately 39,000 data exchanges that have been assessed. As discussed previously, dealing with data exchanges involves a number of steps. For each noncompliant exchange, the agency must reach agreement with the exchange partners on whether they will (1) change the date format to make it compliant or (2) agree to retain the existing 2-digit format and use bridges as an interim measure. To resolve Year 2000 data exchange problems, all federal agencies have chosen to adopt a contiguous 4-digit year format; however, some agencies plan to continue using a 2-digit year format for some of their exchanges in the near term. If a 2-digit exchange format is retained but the agency's system will be using 4-digit years, the agency must develop, test, and implement (1) bridges to convert dates to a usable form and (2) filters to recognize 2-digit years and prevent them from entering agency systems. In addition, the agencies should identify the exchanges where there is a probability that, even though agreements have been reached to exchange 4-digit years, one partner may not be compliant. In these cases, agencies must develop contingency plans to ensure that mission-critical operations continue. The status of activities to contact and reach agreement on Year 2000 readiness with exchange partners varies significantly among federal agencies. Only one federal agency reported having reached agreements with all its exchange partners. While on average the other federal agencies reported having reached agreements on about 24 percent of their exchanges, almost half of the federal agencies reported that they have reached agreements on 10 percent or less of their exchanges, as shown in figure 2 below.
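The filter role can be pictured just as simply. The sketch below, again in Python and again purely illustrative (the acceptance rules and field layout are hypothetical and not drawn from any agency system), screens an incoming date field: values that already carry a 4-digit year pass through, values with a 2-digit year are routed through a windowing bridge as an interim measure, and anything else is rejected so that it cannot enter, and potentially corrupt, the receiving system.

    # Illustrative sketch of a filter that screens incoming date fields.
    PIVOT = 50  # same example windowing rule as the earlier bridge sketch

    def widen(yymmdd):
        yy = int(yymmdd[:2])
        return "%04d%s" % ((2000 if yy < PIVOT else 1900) + yy, yymmdd[2:])

    def screen_incoming_date(value):
        # Return a CCYYMMDD string, or None if the value must be kept out.
        if value.isdigit() and len(value) == 8:
            return value         # already carries a 4-digit year
        if value.isdigit() and len(value) == 6:
            return widen(value)  # interim bridge for a 2-digit partner
        return None              # noncompliant; reject and log for follow-up

    accepted, rejected = [], []
    for field in ("19991231", "000115", "12/31/99"):
        result = screen_incoming_date(field)
        (accepted if result is not None else rejected).append(field)
    print("accepted:", accepted)  # ['19991231', '000115']
    print("rejected:", rejected)  # ['12/31/99']

In practice, the rejected records are often as valuable as the accepted ones, since they point to the specific exchanges and partners that still need format agreements, bridges, or contingency provisions.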
Few federal agencies reported having taken actions to install bridges or filters. Seventeen federal agencies responding to our survey have identified the need to install 988 bridges or filters. In total, the agencies reported having developed and tested 203, or 21 percent, of the needed bridges or filters. In addition, only 38 percent of the federal agencies reported having developed contingency plans for data exchanges. The need for bridges, filters, and contingency plans may increase as agencies continue assessing data exchanges and contacting and reaching agreements with exchange partners. Only two states reported to us that they have finished inventorying and assessing data exchanges for mission-critical systems. The status of 15 of the 39 states that responded to our survey is not discernable because they were not able to provide us with information on their total number of exchanges and the number assessed. In addition, all but two states were able to provide only partial responses or estimates on the status of exchanges. For the 24 states that provided actual or estimated data on the status of their exchanges, an average of 47 percent of the exchanges had not been assessed. Similar to the federal agencies, states reported that the largest number of exchanges were with the private sector, as shown in figure 3 below. (See appendix IV for the status of assessments and other actions for each state.) Significant state actions will be needed to address Year 2000 data exchange issues. Of the 12,262 total exchanges that states reported as having assessed, 5,066 exchanges (41 percent) are reported as not being Year 2000 compliant. None of the states reported that all their data exchanges are Year 2000 compliant. For each of the noncompliant exchanges, the states must take the same types of actions, as described earlier for federal agencies, to reach agreements with the exchange partners; develop, test, and implement bridges and filters; and develop data exchange contingency plans. Similar to federal agencies, states reported having made limited progress in reaching agreement with exchange partners on addressing changes needed for Year 2000 readiness, installing bridges and filters, and developing contingency plans. However, we can draw only limited conclusions on the status of the states' actions because data were provided on only a small portion of states' data exchanges. Officials from several states told us that they were unable to provide actual, statewide data on their exchanges because the states do not collect and maintain such information centrally and the state agencies did not provide the data requested in our survey. According to NASIRE's Year 2000 committee chairman, individual state agencies are aware of data exchange issues and have started taking action to address them, but few state chief information officers have begun monitoring these actions on a statewide basis. In addition to working with their exchange partners to resolve Year 2000 issues, some federal agencies are providing Year 2000 guidance to the organizations that they regulate or oversee and monitoring their Year 2000 activities. Sixteen federal agencies reported that they have regulatory or oversight responsibilities. Seven of the agencies focus on the financial services area, including banks, thrifts, and securities exchanges.
The others regulate or provide oversight to organizations performing government services, such as housing authorities and grantees, and private organizations in a variety of industry sectors such as the import and export industry, the maritime industry, manufacturers of medical devices and pharmaceuticals, and the oil, gas, and mineral industries. All but 3 of the 16 agencies reported providing guidance or establishing working groups addressing Year 2000 issues for the organizations for which they have regulatory or oversight responsibility. In total, 11 of the 16 federal agencies provided guidance on Year 2000 issues and the guidance from all but two addressed data exchange issues, 10 agencies have sponsored Year 2000 working groups, 12 agencies have monitored progress in resolving Year 2000 problems, and 5 have established inspection or validation programs. Of the 12 agencies that have been monitoring progress on the resolution of Year 2000 problems, 10 reported that they have data on the corrective action status of the organization they regulate or oversee. See appendix V for Year 2000 activities undertaken by each federal regulatory or oversight agency. Federal agencies in the financial services area reported having initiated efforts domestically and internationally to address Year 2000 problems with international data exchanges, but other federal agencies reported that they are still in the initial stages of addressing these issues. Ten federal agencies reported having 702 data exchanges with foreign governments or the foreign private sector. These 702 foreign data exchanges reported by federal agencies represent less than 1 percent of all federal data exchanges. The federal agencies reported reaching agreement on formats for 98, or 14 percent, of the foreign exchanges. Three federal agencies—the Departments of the Interior, Treasury, and Defense—have the bulk of the reported foreign data exchanges. For its 416 reported foreign exchanges, Interior plans to notify its foreign data exchange partners that it will continue to use a 2-digit year in data exchanges and use bridges with algorithms to compute the century. Treasury has reached agreement on year formats for 71 of its 107 reported foreign exchanges and advised us that it is using bank examiners to monitor the activities to make all the exchanges Year 2000 compliant. The Department of Defense reported reaching agreement on 18 of its 103 data exchanges with foreign entities. The remaining seven federal agencies reported having reached agreement on 9 of their 76 foreign data exchanges. Interior was the only agency that reported having developed and tested bridges and filters to convert dates and prevent the corruption of its systems. None of the agencies reported having developed contingency plans to process transactions if the exchange partners’ systems were not Year 2000 compliant. Nine federal agencies—six in the financial services area—said they have regulatory or oversight responsibility for organizations with international data exchanges. Three agencies in the financial services area said they are relying on bank examiners to monitor progress and one is providing guidance to exchange partners for addressing Year 2000 problems. Four of the nine agencies stated that they are also addressing Year 2000 problems by working with international organizations, such as the Bank for International Settlements, the International Organization of Securities Commissions, and the Securities Industry Association. 
Two of the nine agencies reported having no ongoing international Year 2000 activities. International organizations identified by federal agencies as forums for Year 2000 activities were primarily in the financial services area including the Bank for International Settlements, International Organization of Securities Commissions, Securities Industry Association, and Futures Industry Association. The Department of Transportation also identified the International Civil Aviation Organization as a potential international forum for the resolution of Year 2000 problems. In addition, from our search of the Internet for Year 2000 activities by international organizations, we identified eight other potential international forums. The activities of these organizations are highlighted in table 1 and the reported current and planned activities of each organization are summarized in appendix VI. The primary efforts cited by the international organizations are increasing awareness and providing information and guidance on resolving Year 2000 problems, including posting the information on their Internet web sites. Six organizations also reported that they are sponsoring conferences or workshops to discuss Year 2000 issues and six reported that they are monitoring or surveying the status of their members’ Year 2000 activities. Organizations in the financial services area are the most active in Year 2000 efforts. According to the Bank for International Settlements, payment and settlement systems are essential elements of financial market infrastructures through which clearing organizations, settlement agents, securities depositories, and the various direct and indirect participants in these systems are intricately connected. It is therefore imperative that the systems be adapted and certified early enough to ensure that they are Year 2000 compliant and to allow for testing among institutions. To address these issues, officials at the Bank for International Settlements told us that it is coordinating with the International Organization of Securities Commissions and the International Association of Insurance Supervisors to draw attention to Year 2000 issues. In September 1997, the Bank for International Settlements issued a technical paper for banks which sets out a strategic approach for the development, testing, and implementation of system solutions as well as defining the role that central banks and bank supervisors need to play in promoting awareness of the issue and enforcing action. Other organizations have also used the Bank for International Settlements’ technical framework to stimulate activities of their members. For example, the Securities Industry Association used the framework to develop a project plan with target dates for completing various tasks and posted the plan on its Internet web site for members to use in planning their Year 2000 activities. The Securities Industry Association also used the framework as the basis for a survey instrument for assessing the status of its members’ Year 2000 activities. The European Commission has been publishing issue papers and conducting workshops to increase awareness of Year 2000 computer problems among its member countries. These issue papers and workshops also addressed the implication of European countries’ efforts to convert to the new Euro currency. Because this conversion is taking place at about the same time as the Year 2000 date conversion activities, the two are in competition for financial, technical, and management resources. 
To identify how businesses are approaching the Euro conversion and the inter-relationship with activities to resolve Year 2000 problems, the European Commission sponsored a survey of more than 1,000 senior information technology managers in 10 countries. The result of this survey, as well as the issue papers and workshop results, are posted on the European Commission’s web site (www.ispo.cec.be/y2keuro). In addition to assisting their members, several of the international organizations reported having programs to ensure that their own systems will be able to process international data exchanges for their members in the Year 2000. For example, the Bank for International Settlements, the International Air Transport Association, and Interpol told us that they have information systems that process transactions and information exchanges for their member organizations. Each of these organizations said that their Year 2000 programs are on schedule and that they will be able to support international data exchanges with Year 2000 dates. Unless federal agencies take action to reach date format agreements with their data exchange partners and deal with data exchanges that will not be Year 2000 compliant, some of the agencies’ mission-critical systems may not be able to function properly. The data reported to us by federal agencies and state governments suggest that the full extent of the managerial and operational challenges posed by the heavy reliance on others for data needed to sustain government activity is not yet known. For the vast majority of data exchanges, including those with international entities, federal agencies have not reached agreement with their exchange partners and, therefore, do not know if the partners will be able to effectively exchange data in the Year 2000. Without knowing the status of activities or reaching agreements with exchange partners, federal agencies can not identify all the exchanges requiring (1) filters to prevent incoming invalid data from corrupting mission-critical systems or (2) provisions in the agencies’ business continuity and contingency plans to ensure the continuation of mission-critical operations. In addition, without extensive coordination with exchange partners, federal agencies will not be able to develop and test new data exchange formats, bridges, and filters to ensure that they will function properly. Because federal agencies and states are still in the early stages of resolving Year 2000 problems for data exchanges and the status of exchange partner activities is generally unknown, federal agencies need to take the lead in setting target dates for critical activities to prevent disruptions to their operations. These include setting target dates for testing and implementing new exchange formats and decision points for initiating the development and implementation of contingency plans. International forums for Year 2000 issues are available for a few economic sectors and primarily in North America and Western Europe. Only recently have any federal activities been directed at international issues and these have been limited to increasing awareness. We recommend that the Director, OMB, in consultation with the Chair of the President’s Council on Year 2000 Conversion, issue the necessary guidance to require federal agencies to take the following actions. 
Establish schedules for testing and implementing new exchange formats prior to the March 1999 deadline for completing all data exchange corrections; such schedules may include national test days that could be used for end-to-end testing of critical business processes and associated data exchanges affecting federal, state, and/or local governments. Notify exchange partners of the implications to the agency and the exchange partners if they do not make date conversion corrections in time to meet the federal schedule for implementing and testing Year 2000 compliant data exchange processes. Give priority to installing the filters necessary to prevent the corruption of mission-critical systems from data exchanges with noncompliant systems. Develop and implement, as part of their overall business continuity and contingency planning efforts, specific provisions for the data exchanges that may fail, including the approaches to be used to mitigate operational problems if their partners do not make date conversion corrections when needed. Report, as part of their regular Year 2000 status reports, their status in completing key steps for data exchanges, such as the percent of exchanges that have been inventoried, the percent of exchanges that have been assessed, the percent of exchanges that have agreements with exchange partners, the percent of exchanges that have been scheduled for testing and implementation, and the percent of exchanges that have completed testing and implementation. We also recommend that the Director, OMB, ensure that the federal CIO Council (1) identify the areas in which adequate forums on Year 2000 issues are not available for our international trade partners and (2) develop an approach to promote Year 2000 compliance activities by these trading partners. We provided a draft of this report to NASIRE, the President's Council on Year 2000 Conversion, and OMB for comment. NASIRE stated that its Year 2000 Committee had reviewed the draft and had no suggested changes. The NASIRE President also commented that the information and recommendations seemed reasonable and should assist federal agencies and states in their Year 2000 efforts. The President's Council on Year 2000 Conversion did not provide comments on the report. OMB provided comments that are reproduced in appendix VIII and summarized and evaluated below. OMB provided updated information on the initial steps taken by federal agencies to address data exchange issues, described actions taken to partially implement three of our recommendations, cited plans to implement one recommendation, and gave reasons for disagreeing with the remaining two recommendations. OMB commented that our survey results would have been markedly different if the data had been collected 1 month later. OMB stated that, after our survey, 24 of the largest federal agencies reported that they had completed their assessments of data exchanges, and that virtually all of these agencies had now reached agreements with their exchange partners on exchange formats. We agree with OMB that these steps would represent a good start; however, many essential actions are yet to be completed. Our recommendations focus on the actions needed to ensure that federal agencies appropriately build on these fundamental steps to comprehensively address data exchange issues.
In commenting on our recommendation concerning the establishment of schedules for testing and implementation of new exchange formats, OMB listed the actions that the CIO Council had taken in cooperation with NASIRE to (1) establish lists of exchanges and a contact point for each exchange and (2) develop a reporting format for federal agencies to report monthly on the status of each data exchange with states starting in July 1998. OMB stated that this information will be posted on an Internet web site and be available for federal and state officials to review and determine whether testing is being conducted successfully. While these are positive steps toward implementation of our recommendation, they do not address the need to establish schedules for testing and implementing new exchange formats. Schedules with target dates for testing and implementation of new exchanges are needed for coordinating efforts and measuring progress toward specific milestones. In addition, the actions described by OMB apply only to states and thus do not address exchanges with other federal agencies, local governments, and the private sector that constitute over 80 percent of the total reported exchanges. As to our recommendation concerning the development and implementation of contingency plans for data exchanges that may fail, OMB stated that on April 28, 1998, it directed federal agencies to ensure that their continuity of business plans address all risks to information flows, including those with external organizations. OMB plans to evaluate this guidance and amplify it as necessary based on its review of agencies’ May 15, 1998, Year 2000 status reports. OMB has taken an important step by issuing this directive. However, the May progress reports showed that federal agencies are making slow progress in their Year 2000 activities and this reinforces the need for OMB to provide clear directions on this critical issue. Because of the risk that exchange partners may not be able to make their systems and exchanges Year 2000 compliant and the importance of developing effective contingency plans, OMB should provide explicit directions to ensure that agencies devote sufficient management attention and resources to this critical activity. Such directions should clearly require agencies to perform the key tasks associated with initiating the project, preparing business impact analysis, developing contingency plans, and testing the plans. Regarding our recommendation that OMB require agencies to report their status in completing key steps for data exchanges as part of the regular Year 2000 status reports, OMB stated that the posting of data exchange status information on a web site, as discussed above, will be used rather than imposing an additional reporting requirement on agencies. OMB explained that it and NASIRE have agreed to this approach because it (1) provides sufficient information at a policy level to ensure that the work is getting done, (2) promotes the greatest exchange of information at the working level, and (3) minimizes duplication of reporting. As we previously stated, establishing this status reporting process is a positive step; however, the website will contain information on thousands of data exchanges with states and must be summarized and analyzed for it to be useful in managing and monitoring the time-critical activities to resolve data exchange issues. 
Also, as previously noted, this reporting requirement only covers the status of exchanges with states and thus excludes the other data exchanges that constitute over 80 percent of the total exchanges. OMB agreed with our recommendation that agencies should give priority to installing the filters necessary to prevent the corruption of mission-critical systems and said that it plans to update its guidance to agencies to make sure they recognize this priority as well. OMB did not agree that agencies need to notify their exchange partners of the implications to the agency and the exchange partners if they do not make date conversions in time to meet the schedule for testing and implementing Year 2000 compliant data exchange processes. OMB stated that exchange partners are well aware of the implications of failing to make date conversions. Although exchange partners are aware of the general implications of data exchange failures, the partners will not know the implications if they do not meet testing and implementation schedules for specific exchanges, unless the agencies notify their exchange partners. Knowledge of these implications is important because the exchange partners have many competing demands for Year 2000 resources and may have to decide which activities will be completed on time and which will be deferred. Therefore, exchange partners need to know the implications of data exchange failures, including the actions that will be needed under contingency plans if the partners do not meet key milestones for testing and implementing data exchanges. OMB also disagreed with our recommendation that the federal CIO Council (1) identify the areas in which adequate forums on Year 2000 issues are not available for our international trade partners and (2) develop an approach to promote Year 2000 compliance activities by these trading partners. OMB said that the Chair of the President's Council on Year 2000 Conversion agreed that international implications of the Year 2000 problems are of the gravest concern, but disagreed that the CIO Council would be the right place to begin addressing these problems. According to OMB, the Chair has met with representatives from two international organizations to encourage them to be more involved in Year 2000 activities and with the Secretary of State, who agreed to have ambassadors conduct outreach efforts in each country. OMB also said that the Chair has asked agency heads to encourage international organizations to cooperate in addressing Year 2000 problems. The steps taken by the Chair to promote international actions on Year 2000 problems represent progress, but a much more organized, concerted, and continuous effort is needed to adequately address this far-reaching and complex issue, one that the Chair has acknowledged as being of gravest concern. Because the CIO Council includes representatives of agencies that regulate or influence private sector organizations that operate internationally in every economic sector, it could, and should, play an important role in providing the President's Council with the support needed to deal effectively with Year 2000 issues worldwide. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter.
At that time, we will send copies to the Chairman of the Committee on Science; the Ranking Minority Member of the Committee on Science; the Chairman of the Subcommittee on Technology; other interested congressional committees; the Director, Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. I can be reached at (202) 512-6408 or by e-mail at willemssenj.aimd@gao.gov, if you or your staff have any questions. Major contributors to this report are listed in appendix IX. As requested by the Ranking Minority Member of the Subcommittee on Technology, House Committee on Science, our overall objectives for the review were to identify (1) the key actions taken to date to address electronic data exchanges among federal, state, and local governments, (2) actions the federal government has taken to minimize the adverse economic impact of noncompliant Year 2000 data from other countries’ information systems corrupting critical functions of our nation, and (3) international forums where the worldwide economic implications of this issue have been or could be addressed. To identify the key actions taken to date to address electronic data exchanges among federal, state, and local governments, we contacted federal and state organizations responsible for coordinating Year 2000 activities to identify their approaches for addressing data exchange issues. We obtained information on the status of actions of federal agencies and states using a data collection instrument (DCI). The DCI contains questions based on our Year 2000 Computing Crisis: An Assessment Guide (a copy of the DCI is reproduced in appendix VII). The DCI was pretested by having it reviewed for clarity and reasonableness by three agencies’ representatives who are knowledgeable about data exchanges. We revised the DCI based on their comments and further tested it by sending it to six federal agencies and three states. Five of the six federal agencies responded with a completed DCI in November and December 1997 and the other agency did not respond until February 1998. The three states provided oral comments, but did not respond with a completed DCI. Based on the five agencies’ responses and our subsequent follow-up questions concerning inconsistent or incomplete data, we revised the DCI by adding additional definitions and cross references. The DCI was sent to an additional 36 federal departments and major agencies (referred to collectively as federal agencies) and the remaining 47 states, the District of Columbia, and Puerto Rico. All 36 federal agencies and 39 of the 52 state-level organizations responded to our survey between January and March 1998. Three of the federal agencies reported that they did not have external data exchanges. In cases involving incomplete responses or inconsistent data on responses, we contacted the respondents to request additional data or clarification, as appropriate. Responses to follow-up questions were received in February, March, and April 1998. The DCI was also used to identify the federal government’s actions taken to minimize the adverse economic impact of noncompliant Year 2000 data from other countries’ information systems corrupting critical functions of our nation. In this regard, we collected information from federal and state organizations that have, or oversee entities that have, international data exchanges using the DCI. 
To identify international forums where the worldwide economic implications of this issue have been or could be addressed, we collected information from federal agencies using the DCI and researched international organization and Year 2000 Internet sites. We contacted the organizations identified as potential forums for international Year 2000 data exchange issues from October 1997 through March 1998 and ascertained their current and planned Year 2000 activities. Five of the international organizations that we contacted did not have Year 2000 activities or did not respond to our request for information. These organizations were the International Monetary Fund, Organization for Economic Cooperation and Development, European Monetary Institute, Asia-Pacific Economic Cooperation, and Association of Southeast Asian Nations. We did not independently verify the data provided in the DCI. We performed our work between September 1997 and April 1998 in accordance with generally accepted government auditing standards. American National Standards Institute Accredited Standards Committee X12: An ANSI committee that formulates electronic data interchange standards governing transaction sets, segments, data elements, code sets, and interchange control structure. Standards define the format for specific electronic data interchange messages. In June 1997, the committee approved the use of an 8-digit date in X12 that includes the first 2 digits of the year. The Clearing House Interbank Payments System: A computerized network for the transfer of international dollar payments. CHIPS links 115 depository institutions that have offices in New York City. ANSI ASC X12 standards for the formatting and transmission of Medicare electronic transmissions involving enrollments, claims, reimbursements, and other payments. Federal Reserve's electronic funds and securities transfer service. Fedwire is used by Federal Reserve Banks and branches, the Department of the Treasury, other government agencies, and depository institutions. Federal Information Processing Standards Publication 4-1, Representation for Calendar Date and Ordinal Date for Information Interchange. FIPS 4-1 strongly encourages agencies to use a 4-digit year format for data exchanges. A standard for electronic data exchange in certain health care applications involving patient, clinical, epidemiological, and regulatory data. HL7 standards are not used in healthcare insurance administration applications. A United Nations-supported international electronic data exchange standard for administration, commerce, and transport.
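The practical difference between the 2-digit and 4-digit year formats that these standards address can be shown in a few lines. The sketch below, in Python and purely for illustration, shows why date values that omit the century digits compare and sort incorrectly across the 1999-2000 boundary, while values that carry the century (as FIPS 4-1 encourages and the revised X12 date supports) do not.

    # Why the century digits matter: 2-digit years compare incorrectly
    # across the century boundary, while 4-digit years compare correctly.
    old_style = ["991231", "000101"]      # YYMMDD: 31 Dec 1999, then 1 Jan 2000
    new_style = ["19991231", "20000101"]  # the same dates as CCYYMMDD

    print(sorted(old_style))  # ['000101', '991231']: the 2000 date sorts first
    print(sorted(new_style))  # ['19991231', '20000101']: correct order

    # The same problem appears in date arithmetic, such as computing an interval:
    print(0 - 99)       # naive 2-digit subtraction across the boundary gives -99
    print(2000 - 1999)  # with 4-digit years the interval is 1 year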
Department/agency (response date): Agency for International Development (1/22/98); Commodity Futures Trading Commission (1/23/98); Department of Agriculture (3/12/98); Department of Commerce (4/2/98); Department of Defense (4/7/98); Department of Education (4/1/98); Department of Energy (1/29/98); Department of Health and Human Services (3/26/98); Department of Housing and Urban Development (4/8/98); Department of the Interior (3/18/98); Department of Justice (4/9/98); Department of Labor (1/26/98); Department of State (3/4/98); Department of the Treasury (3/25/98); Department of Transportation (3/5/98); Department of Veterans Affairs (1/21/98); Environmental Protection Agency (2/11/98); Federal Communications Commission (4/7/98); Federal Deposit Insurance Corporation (3/13/98); Federal Emergency Management Agency (1/23/98); Federal Maritime Commission (3/31/98); Federal Reserve (3/26/98); Federal Trade Commission (1/23/98); General Services Administration (3/27/98); National Aeronautics and Space Administration (1/23/98); National Archives and Records Administration (3/6/98); National Credit Union Administration (1/23/98); National Science Foundation (1/26/98); National Transportation Safety Board (3/9/98); Nuclear Regulatory Commission (3/31/98); Office of Personnel Management (1/27/98); Overseas Private Investment Corporation (1/22/98); Pension Benefit Guaranty Corporation (1/27/98); Railroad Retirement Board (3/4/98); Securities and Exchange Commission (3/18/98); Small Business Administration (1/30/98); Social Security Administration (3/9/98); U.S. International Trade Commission (2/2/98); and U.S. Postal Service (1/26/98). The response date is the date that the agency supplied the most recent information, including new data supplied as the result of follow-up questions. Information on the Year 2000 activities of international organizations was obtained by interviews with their officials and research of information posted on Internet web sites. There may be other organizations addressing international Year 2000 issues that we did not identify. The Bank for International Settlements (BIS) has undertaken a worldwide campaign to increase awareness, provide guidance, and identify the status of Year 2000 efforts by central banks and major international banking organizations. BIS hosts the Basle Committee on Banking Supervision and the Committee on Payment and Settlement Systems that are sponsored by the Group of Ten Governors. According to BIS, payment and settlement systems are an essential element of financial market infrastructures through which clearing organizations, settlement agents, securities depositories, and the various direct and indirect participants in these systems are intricately connected. It is therefore imperative that such systems be adapted and certified early enough to ensure that they are Year 2000 compliant and, very importantly, to allow inter-institution testing. This information is available on the BIS web site (www.bis.org). To increase awareness, in September 1997, the Basle Committee on Banking Supervision issued a technical paper for banks that sets out a strategic approach for the development, testing, and implementation of system solutions as well as defining the role that central banks and bank supervisors need to play in promoting awareness of the issue and enforcing action.
The Committee on Payment and Settlement Systems is collecting and publishing information on the state of preparedness of payment and settlement systems around the world with respect to the Year 2000 issue. For this purpose, a special reporting framework has been developed that operators of payment and settlement systems can use to indicate the state of internal testing as well as testing with external participants for key components of their information technology infrastructure. The framework distinguishes between the key components of such infrastructures—the central system, the networks and network interfaces, the participants’ front-end systems, and other main components. For each of these components, information is provided on the start and completion dates for internal testing as well as testing with external participants. An indication is also given as to the connections of the respective payment or settlement systems with other external systems, on the coordinated effort with other payment systems and/or major participants, and where more information can be obtained from the respective operator. The Basle Committee also plans to survey the efforts that banking supervisors have underway in each country as well as the state of readiness of the local banking system. They expect to complete these surveys during the first half of 1998. In April 1998, the Basle Committee, the Committee on Payment and Settlement Systems, the International Organization of Securities Commissions, and the International Association of Insurance Supervisors held a round table on the Year 2000 in order to provide a global platform for the sharing of relevant strategies and experiences across key industries by international bodies representing both the public and the private sector. As the principal international organization of securities regulators, the International Organization of Securities Commissions (IOSCO) has taken a leadership role in promoting awareness of the Year 2000 computer problem and in encouraging its membership and all market participants to take swift and aggressive action to address Year 2000 issues. IOSCO is the largest international organization of securities regulators with 99 members—principally domestic government agencies entrusted with securities regulation. Among other things, IOSCO has called for regular monitoring of Year 2000 readiness and global, industrywide testing to take place in sufficient time to address any weaknesses or deficiencies that are revealed. IOSCO currently exchanges information, periodically engages in joint work with, and to some extent coordinates its ongoing work with, the Basle Committee on Banking Supervision and the International Association of Insurance Supervisors. IOSCO has a working relationship and/or exchanges information on a regular basis with BIS, the International Accounting Standards Committee, the International Federation of Accountants, the Fédération Internationale des Bourses de Valeurs, the International Monetary Fund, and members of the World Bank Group. IOSCO also maintains a liaison relationship with the International Organization for Standards. Information on IOSCO’s current work program is regularly provided to the Group of Seven. IOSCO is surveying and obtaining information on a regular basis about measures being taken by industry and regulators to address Year 2000 computer issues. IOSCO is also encouraging global, industrywide testing. 
IOSCO’s current work builds on its public statement of June 1997, exhorting all members and market participants in their jurisdictions to take all necessary and appropriate action to address the critical challenges presented by the Year 2000 issue. IOSCO’s Technical Committee, which consists of regulators of the most developed and internationalized markets, is currently surveying its members to ascertain what actions are being taken within member jurisdictions to avoid Year 2000 problems. Because of the critical nature of this project, the Technical Committee decided to conduct similar surveys on industry readiness every 6 months. Each Technical Committee member was requested to supply the following information to the IOSCO Secretary General by January 15, 1998. 1. Awareness: What actions has your organization taken to impress upon relevant entities (self-regulatory organizations, industry groups, financial firms) the importance of addressing the Year 2000 issues identified in the Technical Committee Statement? 2. Guidance: What specific policies and/or procedures are being used by your organization and other relevant organizations within your jurisdiction to prepare markets and market participants for Year 2000? 3. Progress: What steps (including the use of specific interim goals) are being taken by your organization and by the other relevant organizations in your jurisdiction to monitor the progress of relevant entities in addressing Year 2000 problems? 4. Testing: What plans have been made by your organization or other relevant organizations in your jurisdiction for industrywide systems testing for Year 2000 problems? IOSCO added a specific section on the Year 2000 issue to its Internet web site (www.iosco.org) that contains a substantive reference list on this topic. The Securities Industry Association’s (SIA) activities are primarily directed at increasing awareness; however, it is taking a leadership role in its efforts to establish a testing schedule. SIA staff have been making presentations at conferences to increase international awareness of Year 2000 problems. For example, SIA staff gave Year 2000 awareness presentations at IOSCO conferences in Kenya, Taipei, and European cities. SIA is also conducting scenario planning sessions at international conferences to stimulate planning. These sessions focus on priorities for resolving Year 2000 problems. To identify Year 2000 readiness in the securities industry, SIA is conducting an industrywide survey. The survey form is posted on its Internet web site (www.sia.com/year_2000). If sufficient response is received, SIA will post a summary of the results on its web site. SIA has also developed and posted on its web site a conversion and testing schedule for its members to use in coordinating their Year 2000 activities. In addition, SIA is developing a checklist to help chief executive officers focus on key Year 2000 activities. SIA has coordinated extensively with other international organizations, including the Investment Dealer Association, IOSCO, International Insurance Association, Futures Industry Association, Institute Internationale Finance, and Fédération Internationale des Bourses de Valeurs. SIA is considering a coordinated effort with multilateral development banks, such as the World Bank, Asian Development Bank, and the European Development Bank, to promote awareness. The focus of the Futures Industry Association’s (FIA) Year 2000 activities is information sharing and test coordination among its 200 members. 
Its members include futures commissions merchants, international exchanges, and others interested in the futures market. FIA compiled a “conditions catalog” of products and transactions to be tested on an exchange-by-exchange basis in the United States. It is making this available to international members and encouraging members to adopt the same format for testing between exchanges and intermediaries. FIA has posted this information on its Internet web site (www.fiafii.org). FIA has also placed information about various exchanges on the web site and plans to include additional information about international exchanges in the future. FIA met with brokerage firms, exchanges, the London Clearing House, and key service providers in June and December 1997 to raise awareness of Year 2000 issues and discuss possible test scenarios. FIA also hosted an international meeting at its Futures & Options Expo in October 1997 to discuss various Year 2000 activities around the world. At the FIA International Futures Industry Conference in March 1998, FIA asked key members to support an industrywide test. FIA is surveying 20 of the member exchanges with the highest trade volume to identify their Year 2000 activities. At a Global Technology Forum held in London March 30-April 1, 1998, FIA will request that the 20 member exchanges provide information about the scope of their Year 2000 activities, including their current status, interfaces with intermediaries, plans for individual testing with intermediaries, and willingness to participate in an industrywide test. The International Association of Insurance Supervisors’ Year 2000 activities are primarily directed at increasing awareness of Year 2000 issues among its insurance supervisor members from over 70 countries. It is also working cooperatively with other international organizations to increase awareness. In November 1997, it issued a joint statement with the Basle Committee on Banking Supervision and the International Organization of Securities Commissions that emphasized the importance of the Year 2000 issue. The joint statement urged the development of action plans to resolve Year 2000 problems, including data exchange problems with financial institutions and clients. In December 1997, the International Civil Aviation Organization sent a letter to its members to increase their awareness of Year 2000 computer problems. The letter explained that air traffic service providers may need to perform assessments on operational air traffic control systems and nonoperational systems that provide business and commercial support. Air traffic service operational systems may be date dependent and subject to local implementation. Such systems include aeronautical fixed telecommunication networks, radar data processing, and flight data processing systems. In addition, operational systems often use date information for logging performance information. The letter also suggested a schedule for assessing, implementing solutions, and testing systems. The International Civil Aviation Organization requested that members advise it on remedial actions they have taken. The International Air Transport Association (IATA) represents and serves 259 members in the airline industry. In addition to the airlines, IATA works with airline industry suppliers, including airports, air traffic controls, aircraft/avionics manufacturers, travel agencies, global distribution systems, and information technology suppliers. 
IATA serves as a clearing house between its airline members to process their debit/credit notes. IATA has an internal Year 2000 project that includes four major steps: software/hardware inventory, Year 2000 compliance analysis, software modification, and contingency planning. IATA has set a target date of December 25, 1998, for Year 2000 compliance for all of its products and services. As an association of international airlines, IATA has established a group to coordinate and synchronize efforts within the industry to ensure timely solutions to Year 2000 issues. Specifically, the date format of interline messages (messages airlines exchange among themselves and other parties as a part of business processes) has been frozen. The member airlines’ applications will have to handle date conversion, if required. In addition, IATA has conducted Year 2000 conferences and seminars to exchange information among members. To monitor the status of Year 2000 activities, IATA has conducted surveys of airline members and industry suppliers. The survey of member airlines showed that (1) very few organizations claim to be fully compliant, (2) the majority of the organizations are well aware of the problem and have already initiated Year 2000 compliance activities, and (3) the typical target date for full compliance is the end of 1998. The results of the survey are available on IATA’s web site (www.iata.org/y2k). The European Commission has declared that it is concerned about the vulnerability of enterprises, infrastructures, and public administrations to the Year 2000 computer problem as well as the possible consequences of this problem for consumers. The Commission had extensive consultations with the public and private sectors during workshops in 1997 to identify the main priorities for action and the roles for enterprises, associations, administrations, and the Commission itself. As a result of these consultations, the Commission adopted a course of action and published it in an official communication on February 25, 1998. The purpose of the communication was to raise awareness and set out the Commission’s steps to address Year 2000 issues, including encouraging and facilitating the exchange of information and experience on Year 2000 initiatives undertaken by the Commission’s member states and European associations, with a view to identifying how synergies can be established to reduce duplication of effort and increase the overall impact; serving as a liaison with the European and international organizations that are responsible for regulating or supervising infrastructural sectors with significant cross-border effects (finance, telecommunications, energy, transportation) in order to exchange information about respective activities and identify where cooperation may be required. An area of particular concern is the planning and implementation of coordinated cross-border testing activities in those sectors that are likely to involve organizations in different member states. The Commission will initiate discussions between relevant organizations and member states; discussing the Year 2000 and its implications through all the relevant contacts available to the Commission services in industry and member states. 
In particular, attention will be paid to the impact on and preparation of infrastructural sectors, the impact on consumers and small and medium size enterprises, and the potential impact on the functioning of the internal market; and maintaining an Internet web site on the Year 2000 computer problem (www.ispo.cec.be/y2keuro). This site provides access to information about activities in different economic sectors and member states, points to sources of advice on specific aspects of the problem, and links to other sites as well as to all documents and reports produced by the Commission on the subject. The Commission also plans to monitor progress, exchange information, and benchmark best practices while reporting regularly on the progress towards Year 2000 readiness and its related issues. In the context of its policies such as those on industry, small and medium size enterprises, consumers, and training, the Commission will examine whether a further contribution could be made towards helping raise awareness and address Year 2000-related problems. In addition to its Year 2000 activities, the Commission is also addressing the information technology implications of European countries’ conversion to the new Euro currency. Because this conversion is taking place at about the same time as the Year 2000 date conversion activities, the two activities are in competition for financial, technical, and management resources. To identify how businesses are approaching the Euro conversion and the interrelationship with activities to resolve Year 2000 problems, the Commission sponsored a survey of over 1,000 senior information technology managers in 10 countries. The results of this survey, as well as the issue papers and workshop results, are posted on the Commission’s web site. The World Bank is conducting an awareness campaign directed toward its client governments and implementing agencies that are responsible for World Bank-financed projects in developing countries. The Bank wants to ensure the continued success and viability of its clients and avoid problems with development projects, many of which comprise information technology systems and embedded logic components that may be vulnerable to the Year 2000 problem. In this effort, however, the Bank limits its role to raising awareness and pointing clients toward ways of evaluating and remediating the problem. To begin this effort, the Bank is (1) distributing an information packet on the Year 2000 problem, (2) pointing recipients to further sources on the Internet, and (3) providing some advice on ascertaining Year 2000 compliance in the procurement process. In the near future, the Bank plans to provide Year 2000 information on the Bank’s Internet web site (www.worldbank.org). The Bank also is hiring a contractor to develop a guide for developing country governments on creating a national Year 2000 policy. When ready, this guide will be placed on the Bank’s Internet web site and will be conveyed to governments via seminars to be held around the world. In November 1997, the United Nations’ Information Technology Services Division posted information on its Internet web site (www.un.org/members/yr2000) to increase awareness of the actions needed to resolve Year 2000 computer problems. This included information on the actions being taken concerning the computer systems operated by United Nations’ organizations and references to issue papers and guidance documents that member countries could use in developing their own Year 2000 program.
It also circulated a letter to member countries that recommended dates for Year 2000 compliance and contained references to reading materials and companies providing Year 2000 services. At that time, the United Nations was considering a program to encourage member countries that have not already begun a Year 2000 assessment to take aggressive action in the development of strategic plans to deal with Year 2000 problems. The Steering Committee is sponsored by the Group of Seven and its objective is to promote the international sharing of information on the resolution of Year 2000 computer problems. To achieve this objective, the Steering Committee has established an Internet web site (www.itpolicy.gsa.gov) that includes (1) links to Year 2000 web sites of various countries and (2) databases showing the Year 2000 compliance status of commercial-off-the-shelf software, telecommunications, facilities, and biomedical equipment. The Steering Committee is also planning to use the web site to conduct a virtual Year 2000 international conference. The International Council sponsored a workshop in August 1997 with the objectives of exchanging information among members on Year 2000 issues related to each member country and identifying areas of common interest. The workshop was attended by representatives from 14 countries (a report on the workshop is located at www.ogit.gov.au/ica/icay2k). The International Council has scheduled a second workshop for June 1998. Interpol operates an international network that its 177 member countries use to exchange law enforcement information. Member countries connect to telecommunication hubs that are located around the world and their information systems transmit data through the network. Interpol has a project underway to ensure that its network will be ready well before the Year 2000. According to project officials, Interpol has been working with suppliers to ensure that the network’s hardware and software will be Year 2000 compliant. It has also sent its Year 2000 plans to each member country. A key part of these plans is the testing of the network. This testing is scheduled to be performed in October 1998 and January 1999. Year 2000 Computing Crisis: Continuing Risks of Disruption to Social Security, Medicare, and Treasury Programs (GAO/T-AIMD-98-161, May 7, 1998). Year 2000 Computing Crisis: Potential for Widespread Disruption Calls for Strong Leadership and Partnerships (GAO/AIMD-98-85, April 30, 1998). Department of the Interior: Year 2000 Computing Crisis Presents Risk of Disruption to Key Operations (GAO/T-AIMD-98-149, April 22, 1998). Year 2000 Computing Crisis: Federal Regulatory Efforts to Ensure Financial Institution Systems Are Year 2000 Compliant (GAO/T-AIMD-98-116, March 24, 1998). Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998). Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, Exposure Draft, March 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998).
Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Benefits Computer Systems: Risks of VBA’s Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997).
Pursuant to a congressional request, GAO provided information on actions taken to address year 2000 issues for electronic data exchanges, focusing on the: (1) key actions taken to date to address electronic data exchanges among federal, state, and local governments; (2) actions the federal government has taken to minimize the adverse economic impact of non-compliant year 2000 data from other countries' information systems corrupting critical functions of the United States; and (3) international forums where the worldwide economic implications of this issue have been or could be addressed. GAO noted that: (1) key actions to address year 2000 data exchange issues are still in the early stages; however, federal and state coordinating organizations have agreed to use a 4-digit contiguous year format and establish joint federal and state policy and working groups; (2) to implement these agreements, the Office of Management and Budget (OMB) issued instructions in January 1998 to federal agencies to inventory all data exchanges with outside parties by February 1, 1998, and coordinate with these exchange partners by March 1, 1998; (3) at the time of GAO's review, no actions had been taken to establish target dates for additional key tasks; (4) about half of the federal agencies reported during the first quarter of 1998 that they have not yet finished assessing their data exchanges to determine if they will be able to process data with dates beyond 1999; (5) two of the 39 state-level organizations reported having finished assessing their data exchanges; (6) for the exchanges already identified as not year 2000 ready, respondents reported that little progress has yet been made in completing key steps such as reaching agreements with partners on date formats, developing and testing bridges and filters, and developing contingency plans for cases in which year 2000 readiness will not be achieved; (7) most federal agency actions to address year 2000 issues with international data exchanges have been in the financial services area; (8) ten federal agencies reported having a total of 702 data exchanges with foreign governments or the foreign private sector; (9) these foreign data exchanges represented less than 1 percent of federal agencies' total reported exchanges; (10) federal agencies reported reaching agreements so far on formats of 98 of the foreign data exchanges; (11) international organizations addressing year 2000 issues have been the most active in the financial services area; and (12) during 1997, several international organizations initiated activities to increase awareness, provide guidance, and monitor the status of year 2000 efforts.
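The 4-digit contiguous year format and the “bridges and filters” mentioned in the summary above are, in practice, data-format conventions and small conversion routines applied to exchanged records. The sketch below is purely illustrative and is not drawn from the report or from any agency’s actual system; it shows one common approach of the era, a fixed pivot-year “window” for expanding 2-digit years to the agreed 4-digit format when an exchange partner is not yet compliant. The record layout, field positions, and pivot value are assumptions made for the example.

```python
# Illustrative sketch (not from the report): a simple "bridge" that expands
# 2-digit years in an exchanged record to an agreed 4-digit contiguous format.
# The fixed-width field layout and the pivot year are assumptions for the example.

PIVOT = 50  # assumed windowing pivot: 00-49 -> 20xx, 50-99 -> 19xx

def expand_year(two_digit_year: int, pivot: int = PIVOT) -> int:
    """Map a 2-digit year onto a 4-digit year using a fixed pivot window."""
    return 2000 + two_digit_year if two_digit_year < pivot else 1900 + two_digit_year

def bridge_record(record: str) -> str:
    """Convert a hypothetical 'MMDDYY' date field at the start of a record
    to 'MMDDYYYY' before passing the record to a year 2000 compliant system."""
    month, day, yy = record[0:2], record[2:4], int(record[4:6])
    rest = record[6:]
    return f"{month}{day}{expand_year(yy):04d}{rest}"

if __name__ == "__main__":
    # '010100' (January 1, 2000) becomes '01012000'; '123199' becomes '12311999'.
    for rec in ("010100|PAYMENT|1500.00", "123199|PAYMENT|0275.10"):
        print(bridge_record(rec))
```

A filter of this kind only masks the problem at the interface; as the report notes, partners still need agreed formats, testing of the bridge itself, and contingency plans for exchanges that are not ready in time.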
Congress passed ERISA to protect the interests of participants and beneficiaries of private sector employee benefit plans. Before the enactment of ERISA, few rules governed the funding of defined benefit pension plans, and participants had no guarantee that they would receive promised benefits. Title IV of ERISA created PBGC to insure private sector plan participants’ benefits. PBGC receives no funds from general tax revenues. Instead, operations are financed by insurance premiums set by Congress and paid by sponsors of defined benefit plans, investment income, assets from pension plans trusteed by PBGC, and recoveries from the companies formerly responsible for the plans. Since its inception, PBGC’s workloads have increased significantly. In fiscal year 1975, PBGC administered three pension plans covering a total of 400 participants. By fiscal year 2007, PBGC administered almost 3,800 pension plans, incurring responsibility for more than 1.3 million participants. To service this increased workload, PBGC employed 847 federal employees in fiscal year 2007 working across several divisions. Of the 847, PBGC’s key staff totaled 486, of which approximately 10 percent were attorneys, 7.2 percent were accountants, and 3.4 percent were financial analysts (see table 1). PBGC also relies heavily on the services of a variety of private sector contractors to assist in its mission. As of June 2007, these contractors accounted for 64 percent of PBGC’s total workforce. In fiscal year 2007, PBGC reportedly spent $297 million (75 percent) of its $398.3 million operating budget for contracting and related expenses. In addition to operating PBGC’s 10 field benefit administration offices throughout the country, private sector contractors supplement federal staff at the corporation’s headquarters and a call center facility. PBGC, like many executive branch agencies, is subject to the General Schedule and the federal pay system. In contrast, certain federal financial regulatory agencies, including the Federal Deposit Insurance Corporation (FDIC), the Office of the Comptroller of the Currency (OCC), the National Credit Union Administration (NCUA), and the Securities and Exchange Commission (SEC), have the flexibility to establish their own compensation programs outside the various statutory provisions on classification and pay for executive branch agencies. (See app. II for a list and description of the federal financial regulatory agencies.) These financial regulatory agencies are generally required to seek to maintain pay comparability with each other. In June 2007, we examined the actions these agencies have taken to assess and implement comparability in pay and benefits with each other. While PBGC has unique responsibilities pertaining to insuring certain employee defined benefit pensions, some of these federal financial regulators highlighted under the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), such as FDIC and NCUA, also have insurance programs and funds. Further, PBGC employs occupations not only similar to those of the FIRREA agencies, but also to those of SEC, the Office of Federal Housing Enterprise Oversight (OFHEO), and the Federal Reserve Board (FRB). While human capital authorities and flexibilities vary governmentwide, we have noted in our prior work that there are two key principles that remain central to the human capital idea.
First, people are assets whose value can be enhanced through investment, and second, an organization’s human capital policies must be aligned to support the organization’s “shared vision”—that is, the mission, vision for the future, core values, goals and objectives, and strategies by which the organization has defined its direction and expectations for itself and its people. As noted in the report, all human capital policies and practices should be designed, implemented, and assessed by the standard of how well they help the organization pursue its shared vision. We have also reported that strategic workforce planning generally addresses the alignment of an organization’s human capital program with its current and emerging mission and programmatic goals and the development of long-term strategies for acquiring, developing, motivating, and retaining staff to achieve programmatic goals. There are a variety of models of how federal agencies can conduct workforce planning, but certain key principles are generally common to such planning: involving top management, employees, and other stakeholders in developing, communicating, and implementing the strategic workforce plan; determining skills and competencies needed in the future workforce to meet the organization’s goals and identifying gaps in skills and competencies that an organization needs to address; selecting and implementing human capital strategies that are targeted toward addressing these gaps and issues; building the capacity needed to address administrative, educational, and other requirements important to support workforce planning strategies; and evaluating the success of the human capital strategies. According to our strategic human capital management model, self-assessment is the starting point for creating “human capital organizations”—agencies that focus on valuing employees and aligning “people policies” to support organizational performance goals. Part of the impetus for creating human capital organizations comes from the Government Performance and Results Act of 1993, but agencies themselves must follow through on tailoring their human capital systems to their specific missions, visions for the future, core values, objectives, and strategies. To strategically manage an agency’s human capital approach, there are certain planning documents an agency can utilize:
Strategic mission plan: An overall agency strategic plan includes a clear and coherent shared vision of an agency’s mission, goals, values, and strategies that is clearly and consistently communicated and reinforced to all employees.
Human capital strategic plan: A coherent human capital strategic plan, integrated with the agency’s overall strategic planning, outlines a framework of human capital policies, programs, and practices specifically designed to steer the agency toward achieving its shared vision.
Succession plan: A formal succession plan includes a review of the agency’s current and emerging leadership needs in light of its strategic and program planning, identifies sources of executive talent both within and outside the agency, and includes planned development opportunities, learning experiences, and feedback for executive candidates.
Workforce plan: A workforce planning document, linked to the agency’s strategic and program planning efforts, identifies its current and future human capital needs, including the size of the workforce; its deployment across the organization; and the knowledge, skills, and abilities needed for the agency to pursue its shared vision. Appendix III discusses elements of GAO’s human capital framework in further detail. From fiscal years 2000 to 2007, PBGC was generally able to hire and retain staff in its key occupations, but the corporation has had some difficulty hiring and retaining financial analysts and certain other staff. Despite difficulty in these areas, PBGC’s overall ability to retain staff in key occupations has been similar to that of the rest of the federal government. However, our analysis suggests that PBGC may face several workforce and compensation challenges in the future—such as (1) a workforce with relatively fewer years of federal experience, (2) the possible retirement of up to a quarter of its workforce within the next 4 years, and (3) the potential difficulty of hiring and retaining staff due to the corporation’s existing compensation structure, which offers salaries lower than those of some other executive branch agencies that employ similar staff. From fiscal years 2000 to 2007, PBGC was generally able to hire staff for occupations it views as key to its organizational mission—accountants, actuaries, attorneys, auditors, financial analysts, information technology specialists, and pension law specialists. Our analysis of OPM’s CPDF found that from fiscal years 2000 to 2007, PBGC hired 289 employees in these key occupations, compared to 203 employees in those occupations who left PBGC. Of these, PBGC hired more people than it lost in each of the key occupations except for attorneys and pension law specialists. Additionally, PBGC officials stated that data indicated that it was able to fill 65 percent of its vacancies in fiscal year 2007 across all occupations within 45 days of the close of the announcement of the job vacancy, exceeding OPM’s government standard of 60 percent for that year. However, PBGC officials emphasized that the 45-day hiring model, which includes data across all occupations, can hide the corporation’s inability to fill certain key positions. PBGC officials acknowledged that the corporation was generally able to hire people for most of its key occupations, but they stated they have had difficulty filling certain positions like the chief financial officer, senior financial analyst, systems accountant, and procurement attorney. To address this difficulty, PBGC officials stated that the corporation has left some positions unfilled and on occasion has hired individuals who required in-house training. From fiscal year 2000 to 2007, PBGC retained staff at rates similar to the rest of the federal government for the corporation’s key occupations. PBGC’s overall average attrition rate was about 6 percent for these occupations, roughly equal to that of other federal agencies over that period (see fig. 1). For example, for occupations like attorneys and information technology specialists, the attrition rates were similar to those of other federal executive branch agencies. Likewise, the attrition rates for PBGC’s executives were similar to those of other federal executive branch agencies. Our analysis also found that PBGC’s overall attrition experience was comparable to those at the federal financial regulatory agencies— such as FDIC and SEC. 
However, attrition rates did differ for certain key occupations. For example, PBGC’s attrition rates for financial analysts were greater than those of other agencies, with an average attrition rate of 8.4 percent, compared to 3.7 percent in other federal executive branch agencies governmentwide, and 5.5 percent for the financial regulators. Further, over the last 3 years, PBGC’s and other executive branch agencies’ overall attrition rates were higher than their 8-year averages, which were nearly identical, while the rates for the federal regulators collectively dipped below their 8-year average in 2007 (see fig. 2). PBGC’s attrition rates for staff hired from fiscal years 2000 to 2007 were also similar to rates at other federal executive branch agencies. For the seven key occupations collectively, a total of 10.8 percent of key PBGC staff hired from fiscal years 2000 to 2007 left during their first 2 years of employment, roughly equal to staff in these positions at other federal executive branch agencies. However, at PBGC, only 60 percent of financial analysts remained, a rate lower than for the six other key occupations hired during these 8 years (see fig. 3). In reviewing staff separations from the corporation, we found that departing PBGC staff in key occupations were more likely to resign from federal employment—for example, seeking employment outside the federal government—than their counterparts in other federal agencies (see fig. 4). We could not determine where employees who resigned from the federal government went, because CPDF does not include information on employment outside the federal government. However, PBGC officials indicated that while they do not systematically track the employment of all their employees after separation, it was common for employees to find employment with private sector entities after leaving PBGC. Our analysis also found that during the same time period, key staff departing from PBGC were slightly more likely than staff from other agencies to transfer to another federal agency (see fig. 4). However, no identifiable pattern existed among the specific federal agencies to which PBGC staff transferred. In fact, from fiscal year 2000 to 2007, CPDF data showed that 47 PBGC staff in key occupations transferred to 23 different federal agencies and only one of those agencies—the Department of Labor—employed more than 5 of these staff. On the basis of the results of PBGC’s voluntary exit surveys and additional information collected by PBGC officials, employees frequently cited greater pay as a reason why they left PBGC for the private sector or another federal agency. Further differences existed in PBGC’s seven key occupations with respect to whether departing staff retired, transferred to another agency, or otherwise resigned. According to CPDF data, resignations accounted for 83 percent of separations for financial analysts, the highest rate of any key occupation, while resignations accounted for just 13 percent of auditors’ separations. Other federal agencies also experienced varied rates across occupations with respect to types of separations (see fig. 5).
Despite the corporation’s overall ability to hire and retain key staff, our analysis of CPDF data suggests that PBGC may face several workforce challenges in the future, such as (1) a staff with relatively fewer years of federal experience, (2) the possibility of losing a significant number of its key staff due to retirement eligibility, and (3) potential difficulties hiring and retaining certain staff because of PBGC’s compensation. First, in fiscal year 2007, PBGC’s key staff had relatively fewer years of federal experience than their counterparts in other federal executive branch agencies. Specifically, according to CPDF data, overall key PBGC staff had an average of 12.8 years of federal experience, while staff in similar positions at the other federal executive branch agencies had 16.8 years of federal experience. Additionally, we found that while 25 percent of PBGC’s accountants had less than 3 years of experience, only 10 percent of accountants in other executive branch agencies had similar years of experience. In fact, for every key occupation except pension law specialist, PBGC had a greater percentage of staff with less than 3 years of experience compared to other agencies, though in some cases the differences were slight (see table 2). Similarly, PBGC’s workforce data corroborated this finding. Our analysis of the corporation’s tenure data indicated that 23.3 percent of accountants, auditors, attorneys, financial analysts, and actuaries had 3 or fewer years of experience. The limited federal experience may indicate a workforce challenge for PBGC, because PBGC officials said that it can take entry-level staff 3 to 4 years to reach full productivity. However, PBGC officials added that the corporation regularly hires individuals with prior private sector experience, so not all staff with limited tenure at PBGC would be entry- level staff. For more seasoned staff, PBGC officials said that there is generally an expectation that such staff will reach full productivity within a 90- to 120-day time frame. Using CPDF, we could not determine the extent of an individual’s work experience outside of the federal government. Second, PBGC faces the prospect of losing a significant number of its key staff due to retirement eligibility. Over the next 4 years, nearly one-quarter of PBGC’s staff in key occupations will be eligible to retire. While this rate is lower than that of other federal executive branch agencies—about 32 percent of staff at other federal executive branch agencies in these occupations will be eligible to retire in the next 4 years—retirement eligibility could still present a workforce challenge for PBGC, because the corporation could lose key institutional knowledge. According to CPDF data, PBGC’s pension law specialists and attorneys will have the greatest retirement eligibility over the next 4 years (see fig. 6). Third, PBGC may face difficulties in hiring and retaining certain key staff in the future due to the corporation’s existing compensation structure, which offers salaries lower than some other federal agencies that employ similar occupations. Specifically, PBGC officials noted that the corporation—which is subject to the General Schedule—has lost staff to some federal financial regulators that are not on the General Schedule and pay higher salaries for these key occupations. 
While our analysis found that PBGC has lower pay ranges and lower average basic salaries (which do not include locality pay) than the federal financial regulators in these key occupations, CPDF data did not suggest that large numbers of key PBGC staff were leaving the corporation for these agencies. In addition, PBGC’s data indicated that just 7 of 99 departing employees (from all occupations, not just key occupations) transferred to federal financial regulators from fiscal year 2005 through 2007—PBGC officials said that these 7 employees left PBGC for increased pay. While PBGC’s average salaries were lower than those at the financial regulators, PBGC staff collectively have pay ranges and salaries similar to those in other federal executive branch agencies, many of which are subject to the General Schedule. For example, salaries for attorneys, auditors, and executives at other federal agencies were generally similar to those at PBGC. However, PBGC had higher average salaries for financial analysts, accountants, and information technology specialists and lower average salaries for pension law specialists and actuaries (see fig. 7). While this information may be instructive on a broad scale, the years of experience and General Schedule grade level at which PBGC workers are hired play a significant role in their salaries, as is true at other federal executive branch agencies on the General Schedule. However, because the financial regulators are not subject to the General Schedule, these agencies have greater flexibility in setting their salaries. (App. IV contains information on minimum and maximum pay ranges and average salary for mission critical occupations for selected federal agencies.) PBGC has taken some steps in recent years to improve its human capital planning and practices. These steps have included drafting components of a human capital plan, such as a succession management directive. However, as of March 2008, the corporation had no formal, comprehensive human capital plan integrating all necessary components to prepare for future challenges, nor had it systematically collected and analyzed its workforce data to identify such challenges. In addition to limited planning and data, PBGC had not fully explored all available compensation options under its statutory authority even though corporation executives stated that PBGC’s compensation structure may hinder it from attracting and retaining key staff. PBGC has taken steps to improve its human capital planning and practices. These steps have included drafting components of a human capital plan, such as a succession management directive. In addition, PBGC has included key human capital goals in its annual report. This report highlights PBGC’s initiatives for the management of human capital, such as ensuring employees have the skills and competencies needed to support its mission and establishing a performance-based culture within the corporation. PBGC has made some progress toward these goals. For instance, PBGC recently hired a new director of human resources and a new human capital specialist with expertise in human capital and succession planning. Also, in an effort to establish a performance-based culture, PBGC linked employees’ performance expectations to corporate goals and objectives in 2007.
Specifically, key PBGC human capital officials, including the Chief Management Officer, are to be evaluated based on their progress toward developing strategic human capital plans and policies. In addition, the human capital office is developing new human capital policies and practices, including increasing management’s involvement in order to produce better results. Toward that end, a PBGC official stated that the human capital office is planning to adjust the process of writing position descriptions so that the human capital specialist and the department manager can discuss the position’s responsibilities and duties and create job announcements more collaboratively. Furthermore, PBGC’s human capital office has developed and implemented various recruitment strategies in recent years, such as an outreach program to colleges and universities and recruitment through federal internship and fellowship programs. PBGC human capital officials stated that certain recruitment strategies are being reassessed, with the goal of increasing their effectiveness. Despite these actions, the corporation lacks a formal, comprehensive human capital strategy, articulated in a formal human capital plan that includes human capital policies, programs, and practices. In our previous work we have identified critical success factors that agencies should use to manage their workforces strategically. The critical success factors are interrelated and mutually reinforcing so that no human capital issue can be compartmentalized and addressed in isolation (see app. III). Workforce planning and succession management, among other things, are critical components of a comprehensive human capital plan. Workforce planning uses workforce data to develop long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals and prepare the agency for its current and future needs. In 2001, PBGC established a workforce planning team and conducted a comprehensive review of its future human capital needs in response to a GAO recommendation in 2000. As part of this effort, the team identified needed skills and future critical needs for the corporation and prepared a gap analysis for the seven key occupations. From this analysis, the team then determined if and where workforce gaps existed and formulated corresponding strategies to address the gaps, all of which was documented in a workforce planning report drafted in 2002 that was to serve as the basis for its future ongoing workforce planning efforts. Since that time, the corporation has conducted little workforce planning and the workforce planning team has dissolved. However, PBGC has recently done more in the area of succession planning, with the goal of identifying and developing appropriate leaders to meet their future challenges. In fact, PBGC human capital officials have drafted a succession management plan, and the corporation continues to use a program developed in 2002 to prepare staff for PBGC’s leadership vacancies. According to a senior PBGC official, the corporation has lacked a formal, comprehensive human capital plan in recent years because the increased workload and demand for qualified staff required the human capital office to primarily focus on hiring and training new staff, with little time to strategically plan, and because the human capital office required a higher level of expertise to develop a comprehensive human capital strategy. 
However, PBGC officials stated that because PBGC has now acquired such expertise with the hiring of a new human capital director and a new human capital specialist, the corporation intends to have a formal human capital strategic plan by the end of fiscal year 2008. GAO’s prior work has shown that high-performing organizations must have a leadership team committed to human capital management who personally develop and direct reform and continuously drive improvement. Several PBGC officials have undertaken actions to conduct succession planning within their own departments; however, differing opinions among PBGC’s leadership concerning some aspects of human capital planning—such as workforce and succession planning—may complicate and prolong PBGC’s strategic efforts. For example, some PBGC executives conduct departmental succession planning, while others believe any succession management plan should incorporate a corporate viewpoint. Our prior work suggests that efforts to address human capital management are most likely to succeed if an agency’s top management and human capital leaders set the overall direction, pace, tone, and goals from the outset. PBGC has not routinely and systematically targeted and analyzed all key workforce data—such as attrition rates, occupational skills mix, and trends—necessary to create an overall workforce profile that addresses current and future workforce needs. Instead, PBGC human capital officials stated that they generally collected personnel data and reported certain workforce statistics—such as counts of the number of open positions filled, recruitment and retention incentives used, workforce diversity, and separation—for top management on a monthly basis. However, the monthly report does not provide context regarding the significance of these statistics for the corporation as a whole. Furthermore, officials stated that they generally conducted in-depth data collection and analysis in response to requests from the corporation’s executive management. For example, in 2006, PBGC’s human capital office conducted an analysis of the representation of minorities and women by grade and occupation to target the corporation’s recruitment with the goal of ensuring a diverse workforce. However, because such analysis has been conducted only periodically and on requested topics, information on PBGC’s overall workforce trends has been limited and therefore unavailable for anticipating the corporation’s current and future needs. While PBGC’s human capital office has conducted some workforce analysis, it has not taken steps to formally evaluate needed data that could inform its workforce planning efforts. Our prior work has found that collecting and analyzing workforce data are fundamental to measuring the effectiveness of an organization’s human capital approaches in support of the mission and goals of an agency. To evaluate factors affecting attrition, agencies can compare their attrition rates to those of other federal agencies, estimate the cost of recruiting and training new employees who leave and the cost of recruiting and training their replacements, and evaluate labor market conditions in locations where it operates. While PBGC’s human capital office maintains data on the corporation’s attrition rates, it does not perform certain types of analysis to better understand its attrition. As of March 2008, PBGC had not conducted any of these analyses. 
Similarly, the corporation has done little since 2002 to identify and analyze its workforce skills, such as by gathering data on current employees’ skills and on the critical skills needed throughout the agency, to determine if and where gaps exist. Our prior work has noted that maintaining current information on staff members’ critical skills and competencies is especially important for federal agencies operating in an ever-changing environment. Shifts in national priorities, budget constraints, and other factors affect the critical skills an agency needs to fulfill its mission. For PBGC, such information is particularly useful for determining and addressing gaps in the critical workforce skills of staff and making efficient resource allocations, because PBGC must respond quickly to changes in the financial markets and defined benefit pension plan industry. However, PBGC has only in recent months, and at the request of the newly hired Chief Information Officer, taken steps to evaluate the critical skill needs and gaps of one of its key occupations—Information Technology Specialist. For the other key occupations, PBGC had not yet determined or updated the skills inventory and competencies of its workforce, as of March 2008. PBGC officials told us that they planned to develop a process for identifying such skill needs by the end of fiscal year 2008. PBGC has not fully explored all available compensation options under its statutory authority, even though corporation officials stated that PBGC’s current compensation structure limits its ability to hire and retain certain key staff. While data suggest that PBGC is generally able to hire and retain most key staff, PBGC officials have expressed the belief that the corporation is at a competitive disadvantage not only with the private sector, but also with certain federal agencies like FDIC and SEC that employ similar staff. As we noted, while PBGC staff in key occupations have pay ranges and salaries similar to those of the rest of the federal government, PBGC’s pay ranges and average salaries are lower than those of their counterparts at some similar agencies. Moreover, data suggest that as of September 2007, PBGC’s highest pay for financial analysts, the key occupation that PBGC appears to have the most difficulty hiring and retaining, was lower than that of both the federal financial regulators and the rest of the federal government. Yet, despite corporate concerns, PBGC has not taken steps to fully explore all available compensation options with OPM and the Office of Management and Budget (OMB). Our prior work has found that the insufficient and ineffective use of flexibilities can significantly hinder the ability of an agency to recruit, hire, retain, and manage its workforce, and that the effective, efficient, and transparent use of human capital flexibilities must be a key component of agency efforts to address human capital challenges. According to our Internal Control Management and Evaluation Tool, an agency’s compensation system should be adequate to acquire, motivate, and retain personnel, and incentives should be used to provide encouragement for personnel to perform at their maximum capability. Further, to assist agencies, OPM has developed a handbook describing currently available human capital flexibilities. In recent years, PBGC has made use of various human capital flexibilities for which the corporation has discretionary authority to provide direct compensation in certain circumstances to support its recruitment and retention efforts.
Our review of PBGC’s use of the compensation options recorded in CPDF found that PBGC had used options such as recruitment and retention incentives, superior qualification pay-setting authority, and special pay rates for specific occupations. We also found that PBGC used performance management incentives, such as awards (bonuses) for suggestions, superior accomplishments, or special acts. (See app. V for a list of selected compensation flexibilities and authorities.) However, some PBGC officials stated that the corporation has not used these flexibilities to their fullest potential. For example, some senior management officials said the corporation should provide recruitment and retention incentives to more employees. Our review of CPDF found that between fiscal year 2004 and 2007, PBGC used recruitment incentives 14 times and retention incentives 10 times. Further, PBGC officials said that they had not recently explored additional flexibilities that required the approval of OPM and OMB to determine whether they would be applicable or appropriate for the corporation. For example, as of March 2008, PBGC officials said that they had not explored whether positions, such as its Chief Insurance Program Officer, Chief Financial Officer, and Chief Investment Officer—positions that require specialized technical expertise specifically related to defined benefit pension plan structure and finance—may fall under OPM’s criteria for critical position pay authority. While most of these positions are currently filled, PBGC officials have cited difficulty filling some of these more technical positions and have expressed concern about filling them in the future as individuals leave. In another example, PBGC had not explored whether it would be appropriate or applicable to waive the recruitment and retention incentive limitation of 25 percent based on a critical agency need. In addition to not exploring all available compensation options, PBGC has done little over the last decade to determine what effect its compensation system and lower pay ranges may have on its recruitment and retention efforts and the extent to which an alternative pay system may be needed. In the early 1990s, PBGC evaluated its workforce and conducted a compensation study comparing its compensation system with those of federal financial regulators and the private sector. The study concluded that some PBGC staff were relatively under compensated compared to the private sector and those federal agencies classified under FIRREA. On the basis of that evidence, PBGC sought to establish a new compensation system (outside of the federal government’s General Schedule and merit pay systems), arguing that PBGC could do so because the corporation did not pay compensation entirely from appropriated funds. However, in response, the Solicitor of Labor concluded that PBGC’s compensation was in fact paid from appropriated funds and that PBGC was not exempt from the General Schedule. As of March 2008, DOL’s Office of the Solicitor had not changed its conclusions. In addition, GAO has long held the view that the revolving funds of PBGC are appropriated funds. According to PBGC officials, the corporation has not taken steps to evaluate its compensation structure since the early 1990s, because of the position taken by the Department of Labor. Officials told us that it would not be cost-effective for the corporation to invest resources in evaluating the corporation’s compensation structure if no action could be taken to modify the pay system, if needed. 
However, other federal agencies also facing increased workload demands have in some cases explored and obtained alternative pay systems—systems where market rates and performance are central drivers of pay—after establishing a need for additional compensation. For example, Congress enacted FIRREA after the U.S. savings and loan crisis, and specifically provided the federal financial regulators with the flexibility to establish their own pay system. Although PBGC is a relatively small agency, it is faced with the challenge of insuring the defined benefit pensions promised to millions of Americans. Because of this, PBGC must have at its disposal a highly qualified workforce with the skills necessary to seek the best financial arrangements needed to support its mission. While it appears that PBGC is generally able to hire and retain key staff, the corporation has faced hiring and retention difficulties in certain technical positions, such as its financial analysts and chief financial officer. In addition to these difficulties, the corporation may face several workforce challenges in the near future if it does not take steps now to strategically prepare itself by identifying its current and future challenges. However, because PBGC does not systematically collect and analyze all necessary workforce data, the foundation on which to identify and address such challenges is limited. In order to develop strategies for identifying and filling any workforce gaps or spotlight areas in need of attention, PBGC management must rely on valid workforce data. If it does not, the corporation’s ability to effectively target its resources or know which key areas to focus on when recruiting, developing, and retaining top talent is limited. While the costs of collecting such data may require some trade-offs among PBGC’s competing priorities, the costs of making decisions without the necessary information could be even greater over time. PBGC officials have suggested that the corporation’s compensation system places it at a competitive disadvantage not only with the private sector, but also with other federal entities when competing for some key staff. While such perceptions may be reasonable for certain occupations, the fact that PBGC has not fully explored all compensation options with OPM and OMB may hinder its ability to develop innovative compensation packages within its current statutory authorities. If PBGC were to fully exhaust all available options, PBGC executives, in conjunction with the corporation’s board of directors, could more reasonably take steps to seek additional flexibilities, such as an alternative compensation structure, if it seemed warranted. By doing so, PBGC’s ability to insure and deliver retirement benefits to the millions of Americans who rely upon them could be strengthened. To improve PBGC’s human capital management structure, we recommend that PBGC’s Director instruct PBGC’s Chief Management Officer to:
Integrate formal workforce and succession planning components as part of the corporation’s efforts in developing a formal strategic planning approach to managing its workforce.
Systematically collect and analyze workforce data and integrate the results of such analyses into its workforce planning efforts. Such an approach could include updating PBGC’s 2002 Workforce Planning Report, analyzing the reason for and the associated costs of its attrition, and identifying the types of skills and competencies critical to PBGC’s mission.
• Fully explore with the Office of Personnel Management and Office of Management and Budget all compensation options currently available to determine and document what options are appropriate and applicable within its statutory authority. Subsequently, the corporation should make use of all applicable and appropriate options and continuously track, document, and monitor the use of such options. Once such steps are taken, PBGC should determine the extent to which its ability to hire and retain staff is hindered by its compensation structure. If such efforts conclude that PBGC is in fact hindered, the corporation's board of directors and Director should work to formulate recommendations to Congress for modifying its structure.
We obtained written comments on a draft of this report from PBGC, which are reproduced in appendix VI. In addition, we provided copies of the draft report to the Departments of the Treasury, Labor, and Commerce as well as OPM for their comments. In instances where comments were provided, they were incorporated in the report where appropriate. In response to our draft report, PBGC generally concurred with our recommendations and outlined the actions the corporation has underway or plans to take with regard to them. Specifically, PBGC stated that the corporation would do more to better manage its workforce, particularly with regard to its human capital succession planning and workforce data analysis. PBGC reiterated the steps that the corporation is taking to strengthen its human capital management and added that these improvements were expected to address many of our concerns. Further, PBGC stated that the corporation has taken steps in years past to explore compensation options, but noted that legal and policy considerations beyond the purview of PBGC's management have hindered the corporation's ability to do so. Nevertheless, as we recommended, PBGC stated that the corporation will continue to explore other compensation options considered appropriate and maintain a dialogue with OPM, OMB, and members of PBGC's board of directors regarding this issue. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Director of PBGC; the Secretaries of the Treasury, Labor, and Commerce; and other interested parties. We will also make copies available to others on request. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215. Key contributors are listed in appendix VII. To determine the Pension Benefit Guaranty Corporation's (PBGC) recent experience in hiring and retaining mission-critical staff, we worked with PBGC executives and human capital officers to identify which staff were considered critical to PBGC's mission. On the basis of these discussions and in conjunction with a 2002 PBGC workforce planning team report, we agreed with PBGC officials to focus on seven occupations that were considered key to the corporation's business operations and that also made up the majority of the corporation's workforce. These occupations were 6. information technology specialists, and 7. pension law specialists.
After identifying these key occupations, we assessed PBGC’s recent experience in hiring and retaining staff by using the Office of Personnel Management’s (OPM) Central Personnel Data File (CPDF) to identify different workforce data, such as hiring, attrition, separation types, retirement eligibility, federal tenure, and pay averages for these positions. To identify trends in some of these data, we analyzed hiring, attrition, and separation workforce data sets for PBGC’s key occupations from fiscal year 2000 to 2007 and compared attrition and separation information with comparable information from the rest of the federal government for the same period. We chose to review this data from these fiscal years to determine what trends, if any, existed prior to and after the significant workload increases and financial liabilities resulting from several large companies terminating their defined benefit pension plans around fiscal years 2003 and 2004. To assess the reliability of OPM’s CPDF, we reviewed GAO’s prior data reliability work on CPDF data. We supplemented that work as necessary by analyzing employee movement using CPDF data when we found exceptions from standard personnel procedures, such as employees with a transfer-out code but with an accession code in the hiring agency that did not include a transfer-in code. We also found duplicate separation or accession records for the same individual on the same day. However, these types of data limitations represented less than 1/10th of 1 percent of the data used. As a result, we concluded that the data were sufficiently reliable for the purposes of our review. We also requested attrition and other workforce data from PBGC’s computerized system called the Federal Personnel and Payroll System (FPPS) to determine the extent to which CPDF data matched FPPS data. We reviewed related agency documentation, interviewed agency officials knowledgeable about the data, and brought to the attention of these officials any concerns or discrepancies we found with the data for correction or updating. However, we did not independently verify the workforce data we received from PBGC. In a number of cases, we compared PBGC’s CPDF data with data on other federal executive branch agencies as a group that employ the seven key occupations, or with financial regulators specifically. The following describes the steps that we took to identify selected workforce data in CPDF for the seven occupations. We identified all new hires for fiscal years 2000 through 2007 by using personnel action codes in CPDF for accessions to career or career conditional positions. Accessions include new hires and hires of individuals returning to the government. To put PBGC hiring into context, we used attrition data (discussed below) to compare the numbers of staff hired with the number of staff leaving. Additionally, we used PBGC hiring data from 2007 to describe how quickly PBGC fills its job vacancies and compared that data to OPM standards. To determine the overall attrition rates for staff in these key positions, we analyzed data from the CPDF for fiscal years 2000 to 2007. For each fiscal year, we counted the number of permanent (career) employees with personnel actions indicating they had separated from PBGC. Separation (attrition) data for new hires included resignations, retirements, terminations, and deaths. We did not include a small percentage of individuals with inconsistent data such as multiple or different hiring or separation dates. 
The small percentage of employees with inconsistent data is consistent with our prior finding that CPDF data are generally reliable. To calculate the overall attrition rate for each fiscal year, we divided the total number of separations in that fiscal year by the average of two on-board counts: the number of these employees in the CPDF as of the last pay period of the preceding fiscal year and the number as of the last pay period of the fiscal year of separation. To determine the attrition rates for new hires in the seven critical occupations, we used CPDF data to identify the newly hired staff and followed them over time to see how many left PBGC. We identified all new hires for fiscal years 2000 through 2007 by using personnel action codes for accessions to career or career conditional positions. Next, we determined whether these individuals had personnel actions indicating they had separated from PBGC. By subtracting the hire date from the separation date, we determined how long individuals worked before separating. We calculated the attrition rate for a specific time period by dividing the number of individuals who left within that time period by the total number of new hires tracked for that time period. Once we identified the overall attrition and new hire attrition rates, we examined CPDF data to determine any patterns or trends for each of the seven key occupations and for PBGC executives. Additionally, we conducted a comparative analysis by calculating the attrition rates of the rest of the federal government and the federal regulators to put PBGC attrition rates into context. To identify the ways key staff separated from PBGC from 2000 through 2007, we reviewed CPDF data identifying employees who resigned from federal employment, retired, transferred to another federal agency, or were separated in another way, such as a reduction in force. As part of this work, we built on our analysis of the CPDF to determine the extent to which PBGC is losing staff to other federal agencies. To determine which PBGC staff moved to another federal agency, we identified employees who had a CPDF separation code for a voluntary transfer and who also had a CPDF accession code from a federal agency within 25 days of the transfer out. We analyzed separation data to determine any patterns for the receiving agency. To understand whether there were any patterns within PBGC's key occupations, we reviewed CPDF data to examine the distribution by type of separation for each of the key occupations. To put PBGC's separation data into context, we compared the types of separations for its key employees with the same information for the rest of the federal executive branch agencies. To identify the reasons that staff left PBGC, we reviewed available reports with information about the reasons for attrition and interviewed officials to determine why employees leave the agency and how PBGC collects data on such departures. To determine PBGC employee retirement eligibility for fiscal years 2012 and 2017, and after 2017, we used CPDF information on age at hire, years of service, birth date, and retirement plan coverage. We compared PBGC eligibility information to eligibility information for staff in the seven key occupations in the rest of the federal government.
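The attrition-rate computations described above can be illustrated with a short sketch. The figures, cohort, and function names below are hypothetical and illustrate only the arithmetic, not GAO's actual CPDF processing.

from datetime import date

def overall_attrition_rate(separations, staff_prior_year_end, staff_year_end):
    # Separations in a fiscal year divided by the average of the on-board
    # counts at the end of the prior fiscal year and the end of the fiscal
    # year of separation.
    average_on_board = (staff_prior_year_end + staff_year_end) / 2.0
    return separations / average_on_board

def new_hire_attrition_rate(hire_and_separation_dates, window_days):
    # Share of a cohort of new hires who separated within a given number of
    # days of being hired; the separation date is None for staff still on board.
    left_within_window = sum(
        1 for hired, separated in hire_and_separation_dates
        if separated is not None and (separated - hired).days <= window_days
    )
    return left_within_window / len(hire_and_separation_dates)

# Hypothetical example: 60 separations against 790 staff on board at the end
# of the prior fiscal year and 810 at the end of the year of separation.
print(f"Overall attrition rate: {overall_attrition_rate(60, 790, 810):.1%}")  # 7.5%

# Hypothetical cohort of three new hires tracked over a two-year window.
cohort = [
    (date(2004, 10, 18), date(2005, 6, 3)),   # left within the window
    (date(2004, 11, 1), None),                # still on board
    (date(2005, 1, 10), date(2007, 2, 2)),    # left after the window
]
print(f"Two-year new-hire attrition: {new_hire_attrition_rate(cohort, 730):.0%}")  # 33%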
To determine federal tenure rates, we examined CPDF information on the number of years of federal service for key staff at both PBGC and other federal agencies. We compared PBGC tenure information to tenure information for staff in the seven key occupations in the rest of the federal government. We also compared CPDF federal tenure data to PBGC's data for length of service at PBGC specifically.
Average Pay and Pay Ranges
To report on the average pay and pay ranges for employees in selected occupations and executives, we analyzed basic pay data from CPDF from September 2007 for PBGC, financial regulators, and other federal executive branch agencies that also had staff in the seven mission-critical categories. Using CPDF, we determined the low, high, and mean pay for each of these occupational categories and executives. We did not separately analyze locality pay for these entities. To identify the steps that PBGC had taken to strategically hire and retain key staff, we reviewed previous GAO work on strategic human capital management, PBGC's single-employer insurance programs, and the corporation's management challenges. We also reviewed information on PBGC's organizational objectives and succession planning goals from documents such as annual reports and workforce or succession plans. Moreover, we reviewed GAO and OPM reports on human capital to establish criteria for PBGC's recruitment, retention, and succession planning efforts. On the basis of the information we obtained, we assessed PBGC's human capital strategic plan, performance measures, and policies and procedures against GAO's Standards for Internal Control in the Federal Government to determine whether internal control weaknesses or inefficiencies existed. The weaknesses we identified directed our review of PBGC's human capital operations and were explored further in interviews with PBGC officials. We reviewed PBGC's efforts to analyze attrition, interviewed PBGC and OPM officials, and relied on prior GAO reports on federal human capital issues to determine how federal agencies develop and analyze data on the reasons for this attrition. We also reviewed other selected agencies' performance management and pay systems, including succession and strategic plans, guidance, and policies and procedures on the systems; PBGC's internal assessments of its workforce challenges; OPM's 2006 Federal Human Capital Survey; recent OPM human capital operations audits; and OPM's 2006 Recruitment, Relocation, and Retention study. Further, we reviewed relevant provisions of federal law, including the Employee Retirement Income Security Act of 1974; the Government Corporation Control Act; the Financial Institutions Reform, Recovery, and Enforcement Act of 1989; and the Classification Act. As part of this work, we collected and reviewed memorandums and documentation related to PBGC's compensation proposal as well as correspondence from the Department of Labor (DOL) and Office of Management and Budget (OMB). We also obtained a legal opinion from DOL's Office of the Solicitor confirming that DOL still held the view that PBGC is not exempt from the General Schedule. Moreover, we collected documents and interviewed officials at PBGC to determine the extent to which PBGC governance and organizational structure have affected PBGC's ability to pursue alternative compensation and benefit flexibilities. We also used recent GAO work that reviewed compensation flexibilities at the financial regulatory agencies. To gather information on PBGC's use of human capital flexibilities related to compensation, we used CPDF data to calculate the number of occasions on which these flexibilities were administered between fiscal years 2004 and 2007.
Specifically, we identified the number of times PBGC used recruitment incentives, individual and group cash awards, individual and group time-off awards, individual and group suggestion/invention awards, quality step increases, student loan repayments, and retention incentives. In addition, we interviewed PBGC's human capital officials to determine whether PBGC was using certain compensation flexibilities that we did not identify in CPDF. We did not assess whether PBGC was using these flexibilities appropriately. To address both objectives, we also interviewed board representatives, the PBGC Director, PBGC's executives, senior PBGC management officials, and officials from OPM and DOL. Additionally, we met with the corporation's union representatives and PBGC's Inspector General, and coordinated with OPM's human capital evaluators regarding their audit of PBGC human capital policies and programs. At the time of our review, OPM officials confirmed they were finalizing a PBGC Human Resource Operations Evaluation. While OPM's review covered certain aspects of strategic human capital management, it also focused on specific human capital programs, such as competitive examining. OPM officials stated they would be presenting findings and working with PBGC on corrective measures to improve the corporation's human capital program. The missions of the federal financial regulators discussed in this report are as follows:
• Commodity Futures Trading Commission (CFTC): Regulates commodity futures and option markets in the United States.
• Farm Credit Administration (FCA): Ensures a safe, sound, and dependable source of credit and related services for agriculture and rural America.
• Federal Deposit Insurance Corporation (FDIC): Preserves and promotes public confidence in the U.S. financial system by insuring deposits in banks and thrift institutions for at least $100,000 per depositor; by identifying, monitoring, and addressing risks to the deposit insurance funds; and by limiting the effect on the economy and the financial system when a bank or thrift institution fails.
• Federal Housing Finance Board (FHFB): Regulates the 12 Federal Home Loan Banks that were created in 1932 to improve the supply of funds to local lenders that, in turn, finance loans for home mortgages.
• Federal Reserve Board (FRB): Conducts the nation's monetary policy by influencing money and credit conditions in the economy in pursuit of full employment and stable prices; supervises and regulates banking institutions to ensure the safety and soundness of the nation's banking and financial system and to protect the credit rights of consumers; maintains the stability of the financial system and contains systemic risk that may arise in financial markets; and provides certain financial services to the U.S. government, to the public, to financial institutions, and to foreign official institutions, including playing a major role in operating the nation's payments systems.
• National Credit Union Administration (NCUA): Charters and supervises federal credit unions. NCUA, backed by the full faith and credit of the U.S. government, operates the National Credit Union Share Insurance Fund (NCUSIF), insuring the savings of 80 million account holders in all federal credit unions and many state-chartered credit unions.
• Office of the Comptroller of the Currency (OCC): Charters, regulates, and supervises all national banks. It also supervises the federal branches and agencies of foreign banks.
• Office of Federal Housing Enterprise Oversight (OFHEO): Promotes housing and a strong national housing finance system by ensuring the safety and soundness of Fannie Mae (Federal National Mortgage Association) and Freddie Mac (Federal Home Loan Mortgage Corporation)—the largest housing finance institutions in the United States.
• Office of Thrift Supervision (OTS): Primary federal regulator of federally chartered and state-chartered savings associations, their subsidiaries, and their registered savings and loan holding companies.
• Securities and Exchange Commission (SEC): Administers federal securities law in the United States. The agency is charged with protecting investors; maintaining fair, orderly, and efficient markets; and facilitating capital formation.
People are an agency's most important organizational asset. An organization's people define its character, affect its capacity to perform, and represent the knowledge base of the organization. As such, effective strategic human capital management approaches serve as the cornerstone of any serious change management initiative. They must also be at the center of efforts to transform the cultures of federal agencies so that they become less hierarchical, process-oriented, stovepiped, and inwardly focused; and flatter and more results-oriented, integrated, and externally focused. Studies by several organizations, including GAO, have shown that successful organizations in both the public and private sectors use strategic management approaches to prepare their workforces to meet present and future mission requirements. For example, preparing a strategic human capital plan encourages agency managers and stakeholders to systematically consider what is to be done, how it will be done, and how to gauge progress and results. Federal agencies have used varying frameworks for developing and presenting their strategic human capital plans. Various agencies are using OPM's Human Capital Assessment and Accountability Framework (HCAAF) as the basis for preparing such plans. HCAAF, which the Office of Personnel Management developed in conjunction with the Office of Management and Budget and us, outlines six standards for success, key questions to consider, and suggested performance indicators for measuring progress and results. These six standards for success and related definitions are as follows:
• Strategic alignment: The organization's human capital strategy is aligned with mission, goals, and organizational objectives and integrated into its strategic plans, performance plans, and budgets.
• Workforce planning and deployment: The organization is strategically utilizing staff in order to achieve mission goals in the most efficient ways.
• Leadership and knowledge management: The organization's leaders and managers effectively manage people, ensure continuity of leadership, and sustain a learning environment that drives continuous improvement in performance.
• Results-oriented performance culture: The organization has a diverse, results-oriented, high-performance workforce and a performance management system that effectively differentiates between high and low performance and links individual, team, or unit performance to organizational goals and desired results.
• Talent management: The organization makes progress toward closing gaps or making up deficiencies in most mission-critical skills, knowledge, and competencies.
• Accountability: The organization's human capital decisions are guided by a data-driven, results-oriented planning and accountability system.
As we have reported, strategic workforce planning, an integral part of human capital management and the strategic workforce plan, involves systematic assessments of current and future human capital needs and the development of long-term strategies to fill the gaps between an agency's current and future workforce requirements.
Agency approaches to such planning can vary with each agency's particular needs and mission; however, our previous work suggests that irrespective of the context in which workforce planning is done, such a process should incorporate five key principles: (1) involve management and employees, (2) analyze workforce gaps, (3) employ workforce strategies to fill the gaps, (4) build the capabilities needed to support workforce strategies, and (5) evaluate and revise strategies (see fig. 8). Our human capital model highlights the kinds of thinking that agencies should apply, as well as some of the steps they can take, to make progress in managing human capital strategically. The model consists, in part, of the Critical Success Factors Table. This table identifies eight critical success factors for managing human capital strategically, which embody an approach to human capital management that is fact-based, focused on strategic results, and incorporates merit principles and other national goals. These factors are organized in pairs to correspond with the four governmentwide high-risk human capital challenges that our work has shown are undermining agency effectiveness (see fig. 9). When considering the human capital cornerstones and the critical success factors, it is important to remember that they are interrelated and mutually reinforcing. Any pairing or ordering of human capital issues may have a sound rationale behind it, but no arrangement should imply that human capital issues can be compartmentalized and dealt with in isolation from one another. All of the critical success factors reflect two principles that are central to the human capital idea:
• People are assets whose value can be enhanced through investment. As with any investment, the goal is to maximize value while managing risk.
• An organization's human capital approaches should be designed, implemented, and assessed by the standard of how well they help the organization achieve results and pursue its mission.
In developing this model, we built upon GAO's Human Capital: A Self-Assessment Checklist for Agency Leaders (GAO/OCG-00-14G, September 2000). Self-assessment is the starting point for creating "human capital organizations"—agencies that focus on valuing employees and aligning their "people policies" to support organizational performance goals. Certain unifying considerations should be kept in mind:
• All aspects of human capital are interrelated: The principles of effectively managing people are inseparable and must be treated as a whole. Any sorting of human capital issues may have a sound rationale behind it, but no sorting should imply that human capital issues can be compartmentalized and dealt with in isolation from one another.
• Trust requires transparency: To pursue its shared vision effectively, the agency must earn the trust of its workforce by involving employees in the strategic planning process and by ensuring that the process is transparent—that is, consistently making it clear that the shared vision is the basis for the agency's actions and decisions.
• Merit principles and other national goals still apply: Performance-based management does not supersede the merit principles or other national goals, such as veterans' preference. A modern merit system will achieve a reasonable balance among taxpayer demands, employer needs, and employee interests.
• Constraints and flexibilities need to be understood: The purpose of human capital self-assessment is to help agencies target areas in which to make changes in support of their organizational missions and other needs. Agencies that identify areas for improvement need to learn what constraints exist that apply to them and what flexibilities are available.
• Fact-based human capital management requires data: Federal agencies typically do not have the data required to effectively assess how well their human capital approaches support results. A more fact-based approach to human capital management will entail the development and use of data that demonstrate the effectiveness of human capital policies and practices—thereby improving managers' ability to maximize the value of human capital investments while managing the related risks.
• The use of best practices requires prudent decision making: Identifying best practices and benchmarking against leading organizations are both potentially useful and important pursuits. Federal agencies must be careful to recognize the unique characteristics and circumstances that make organizations different from one another and to consider the applicability of practices that have worked elsewhere. For example, the environments in which public and private sector organizations operate differ significantly; our work has shown that many management principles identified in the private sector are applicable to the federal sector, but these differences need to be taken into account when agencies consider alternatives to their current management approaches.
• Attention to human capital must be ongoing: To be effective, strategic human capital management requires the sustained commitment and attention of senior leaders and managers at all levels of the agency. Managing the workforce is not a problem for which the organization can supply an answer and then move on. Rather, managers must continually monitor and refine their agencies' human capital approaches to ensure their ongoing effectiveness and continuous improvement.
CPDF data did not indicate that OFHEO employed information technology specialists. Executives include political appointees above GS-15, such as those in the Senior Executive Service, those in the Senior Level and Senior Scientific or Professional pay plans, and equivalent officials. Different executive pay plans have different pay ceilings. For example, the Senior Level and Senior Scientific and Professional pay plans (SL and ST) have lower ceilings than the Senior Executive Service pay plan. PBGC executives included the Director of PBGC and those in the Senior Level pay plan. The maximum base pay allowed for SES in 2007 was the rate for level II of the Executive Schedule ($168,000) for agencies with a certified performance appraisal system, or the rate for level III of the Executive Schedule ($154,600) for agencies without a certified performance appraisal system. The maximum base pay allowed for SL/ST employees in 2007 was $145,400 (the rate for level IV of the Executive Schedule). However, SL/ST employees working in the 48 contiguous states also received locality payments ranging from 12.6 percent to 30.3 percent in 2007, depending on location, with locality rates capped at the rate for level III of the Executive Schedule, or $154,600. We submitted the relevant PBGC data to PBGC officials, who concurred with the basic ranges and averages for PBGC.
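The 2007 SL/ST pay ceilings described above combine a base-pay ceiling with a cap on the locality-adjusted rate. The short sketch below reproduces that arithmetic; the dollar ceilings come from the text, while the sample base pay and locality rate are hypothetical.

SL_ST_BASE_CEILING_2007 = 145_400      # rate for level IV of the Executive Schedule
LOCALITY_ADJUSTED_CAP_2007 = 154_600   # rate for level III of the Executive Schedule

def sl_st_total_pay_2007(base_pay, locality_rate):
    # Apply the base-pay ceiling, add locality pay, then cap the adjusted
    # rate at the level III rate.
    capped_base = min(base_pay, SL_ST_BASE_CEILING_2007)
    adjusted = capped_base * (1 + locality_rate)
    return min(adjusted, LOCALITY_ADJUSTED_CAP_2007)

# Hypothetical example: an SL employee at the base-pay ceiling in a locality
# with a 20.9 percent adjustment is limited to the level III cap.
print(f"${sl_st_total_pay_2007(145_400, 0.209):,.0f}")  # $154,600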
While CFTC, FCA, and NCUA senior management could be classified as executives, each agency has a pay plan that, as of this writing, did not allow GAO to specifically identify executives' salaries through CPDF. The agencies in the table include the Commodity Futures Trading Commission (CFTC), Farm Credit Administration (FCA), Federal Deposit Insurance Corporation (FDIC), Federal Housing Finance Board (FHFB), National Credit Union Administration (NCUA), Office of the Comptroller of the Currency (OCC), Office of Federal Housing Enterprise Oversight (OFHEO), Office of Thrift Supervision (OTS), Pension Benefit Guaranty Corporation (PBGC), Securities and Exchange Commission (SEC), and the Federal Reserve Board (FRB). The selected compensation flexibilities and authorities are as follows:
• Recruitment incentive: A monetary payment to a newly hired employee when the agency has determined that the position is likely to be difficult to fill in the absence of such an incentive. The employee must sign an agreement to complete a specified period of service with the agency (not to exceed 4 years). Upon the request of the head of an agency, OPM may waive the recruitment or relocation incentive 25 percent limitation based on a critical agency need. Under such an approval, the total amount of recruitment or relocation incentive payments may not exceed 50 percent of an employee's annual rate of basic pay at the beginning of the service period multiplied by the number of years in the service period.
• Superior qualifications pay-setting authority: Agencies may set the rate of basic pay of a newly appointed employee at a rate above the minimum rate of the appropriate General Schedule grade because (1) the candidate has superior qualifications or (2) the agency has a special need for the candidate's services.
• Quality step increase: A step increase to reward General Schedule employees at all grade levels who display high-quality performance. It is a step increase that is given sooner than the normal time interval for step increases.
• Individual and group cash awards: A monetary award to recognize superior employee and group performance (also known as spot awards).
• Individual and group suggestion/invention awards: A monetary award for suggestions, inventions, or a productivity gain.
• Individual and group time-off awards: An award of time off to recognize superior employee and group performance.
• Referral bonus: A monetary award to recognize employees who bring new talent into the agency.
• Retention incentive: A monetary payment given to a current employee when the agency determines that the unusually high or unique qualifications of the employee or a special need of the agency for the employee's services make it essential to retain the employee and that the employee would be likely to leave the federal service in the absence of a retention incentive. At the request of an agency head, OPM may waive the retention incentive limitation of 25 percent of basic pay for individual employees or 10 percent for a group or category of employees (but not to exceed 50 percent of basic pay) based on a critical agency need. The agency must determine that the unusually high or unique qualifications of the employee(s) are critical to the successful accomplishment of an important agency mission, project, or initiative (e.g., programs or projects related to a national emergency or implementing a new law or critical management initiative).
• Student loan repayment: The federal student loan repayment program permits agencies to repay federally insured student loans as a recruitment or retention incentive for candidates or current employees of the agency.
• Relocation incentive: A monetary payment to an employee who must relocate to a position in a different geographic area that is likely to be difficult to fill in the absence of such an incentive.
In return, the employee must sign an agreement to fulfill a period of service of not more than 4 years with the agency.
• Critical position pay authority: OPM may, upon the request of an agency head, and after consultation with OMB, grant authority to fix the rate of basic pay for one or more critical positions in an agency at not less than the rate that would otherwise be payable for that position, up to the rate for level I of the Executive Schedule. A higher rate of pay may be established upon the President's written approval.
The following team members made key contributions to this report: David Lehrer, Assistant Director; Jason Holsclaw, Analyst-in-Charge; Susannah Compton; Monika Gomez; Catherine Hurley; Anar Ladhani; Armetha Liles; Andrew Nelson; Mimi Nguyen; Jessica Orr; Roger Thomas; Rebecca Shea; and Gregory Wilmoth. Pension Benefit Guaranty Corporation: Governance Structure Needs Improvements to Ensure Policy Direction and Oversight. GAO-07-808. Washington, D.C.: July 2007. PBGC's Legal Support: Improvements Needed to Eliminate Confusion and Ensure Provision of Consistent Advice. GAO-07-757R. Washington, D.C.: May 18, 2007. High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. Private Pensions: The Pension Benefit Guaranty Corporation and Long-Term Budgetary Challenges. GAO-05-772T. Washington, D.C.: June 9, 2005. Pension Benefit Guaranty Corporation: Single-Employer Pension Insurance Program Faces Significant Long-Term Risks. GAO-04-90. Washington, D.C.: October 2003. Pension Benefit Guaranty Corporation: Contracting Management Needs Improvement. GAO/HEHS-00-130. Washington, D.C.: September 2000.
Strategic Workforce Planning and Human Capital Management
Federal Deposit Insurance Corporation: Human Capital and Risk Assessment Programs Appear Sound, but Evaluations of Their Effectiveness Should Be Improved. GAO-07-255. Washington, D.C.: February 2007. The Federal Workforce: Additional Insights Could Enhance Agency Efforts Related to Hispanic Representation. GAO-06-832. Washington, D.C.: August 2006. Securities and Exchange Commission: Some Progress Made on Strategic Human Capital Management. GAO-06-86. Washington, D.C.: January 2006. International Trade: USTR Would Benefit from Greater Use of Strategic Human Capital Management Principles. GAO-06-167. Washington, D.C.: December 2005. Department of Homeland Security: Strategic Management of Training Important for Successful Transformation. GAO-05-888. Washington, D.C.: September 2005. Human Capital: Selected Agencies Have Opportunities to Enhance Existing Succession Planning and Management Efforts. GAO-05-585. Washington, D.C.: June 2005. Human Capital: Agencies Need Leadership and the Supporting Infrastructure to Take Advantage of New Flexibilities. GAO-05-616T. Washington, D.C.: April 21, 2005. Human Capital: Selected Agencies' Statutory Authorities Could Offer Options in Developing a Framework for Governmentwide Reform. GAO-05-398R. Washington, D.C.: April 21, 2005. National Nuclear Security Administration: Contractors' Strategies to Recruit and Retain a Critically Skilled Workforce Are Generally Effective. GAO-05-164. Washington, D.C.: February 2005. Diversity Management: Expert-Identified Leading Practices and Agency Examples. GAO-05-90. Washington, D.C.: January 2005. Human Capital: Principles, Criteria, and Processes for Governmentwide Federal Human Capital Reform. GAO-05-69SP. Washington, D.C.: December 2004. Human Capital: Increasing Agencies' Use of New Hiring Flexibilities. GAO-04-959T.
Washington, D.C.: July 13, 2004. Human Capital: Key Practices to Increasing Federal Telework. GAO-04-950T. Washington, D.C.: July 8, 2004. Human Capital: Status of Efforts to Improve Federal Hiring. GAO-04-796T. Washington, D.C.: June 7, 2004. Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-04-546G. Washington, D.C.: March 2004. Human Capital: Selected Agencies’ Experiences and Lessons Learned in Designing Training and Development Programs. GAO-04-291. Washington, D.C.: January 30, 2004. Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 2003. Human Capital: Succession Planning and Management Is Critical Driver of Organizational Transformation. GAO-04-127T. Washington, D.C.: October 1, 2003. Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government (Exposure Draft). GAO-03-893G. Washington, D.C.: July 2003. Human Capital: Opportunities to Improve Executive Agencies’ Hiring Processes. GAO-03-450. Washington, D.C.: May 2003. Human Capital: OPM Can Better Assist Agencies in Using Personnel Flexibilities. GAO-03-428. Washington, D.C.: May 2003. Human Capital: Effective Use of Flexibilities Can Assist Agencies in Managing Their Workforces. GAO-03-2. Washington, D.C.: December 2002. A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 2002.
The Pension Benefit Guaranty Corporation (PBGC) employs over 800 federal employees and uses some 1,500 private sector employees to insure the pensions of millions of private sector workers and retirees in certain employer-sponsored pension plans. In recent years, PBGC's projected financial liabilities and workloads have increased greatly due to a large number of pension plan terminations. Given this, it is important that PBGC remain well positioned to fulfill its promise to those retirees who depend on it. GAO was asked to report on (1) PBGC's recent experience in hiring and retaining key staff and how it compares to other federal agencies and (2) the actions PBGC has taken to strategically hire and retain key staff and what additional steps, if any, can be taken. To do this, we analyzed PBGC's workforce by using the Office of Personnel Management's (OPM) Central Personnel Data File to identify data and compared those data to data from other federal agencies. We also interviewed officials from selected agencies, including PBGC, OPM, and the Department of Labor. From fiscal years 2000 to 2007, PBGC was generally able to hire staff in its key occupations--such as accountants, actuaries, and attorneys--and retain them at rates similar to those of the rest of the federal government. However, PBGC has had some difficulty with hiring and retaining staff for specific occupations and positions, including executives and senior financial analysts. Despite the general ability to hire and retain key staff, data also suggest that PBGC may be faced with workforce challenges; these include managing a workforce with relatively few years of federal experience, the prospect of nearly one-quarter of its key staff retiring within the next 4 years, and difficulty hiring and retaining key staff in the future due to PBGC's existing compensation structure, which offers salaries lower than some federal agencies that employ similar staff, such as the Federal Deposit Insurance Corporation. While PBGC is making progress in its human capital management approach by taking steps to improve its human capital planning and practices--such as drafting a succession management plan--the corporation lacks a formal, comprehensive human capital plan that integrates several critical components such as workforce planning. Also, even though it collects workforce data, PBGC has not routinely and systematically targeted and analyzed all necessary workforce data--such as attrition rates, occupational skills mix, and trends--to understand its current and future workforce needs. Instead, officials stated that they generally reacted to management personnel requests, and developed human capital data as needed. In addition to limited planning and data analysis, PBGC has not fully explored all available compensation options under its existing statutory authority, even though officials say and data suggest that the corporation's current compensation structure may limit its ability to hire and retain certain key staff.
On the basis of its 1993 Bottom-Up Review, the Department of Defense (DOD) adopted a strategy of maintaining the capability to fight and win two nearly simultaneous major regional conflicts (MRC), conduct smaller scale operations such as peacekeeping, and provide overseas presence in critical regions. The Bottom-Up Review determined that the Air Force would have 20 fighter wings (13 active and 7 reserve), up to 187 bombers (161 active and 26 reserve), and 500 intercontinental ballistic missiles to implement the strategy. The Bottom-Up Review also concluded the Air Force should maintain the capability to provide (1) airlift to transport people and equipment during conflicts, (2) reconnaissance and command and control aircraft to provide information on the location and disposition of enemy forces, and (3) aerial refuelers to enhance mission effectiveness by refueling aircraft during long-range missions. The review did not specify the number of military personnel required to implement the national military strategy. However, DOD subsequently determined that the active components would consist of about 1.4 million active military personnel, 381,000 of which would be Air Force personnel. By the end of fiscal year 1997, the Air Force plans to have an active duty force of 381,100 personnel with an associated military pay of $16.8 billion. In 1996, Congress established minimum active duty personnel levels for each military service as part of the National Defense Authorization Act for Fiscal Year 1996. The Air Force floor was set at 381,000. In creating the floors, Congress sought to ensure that (1) the services had enough personnel to carry out the national military strategy and (2) the drawdown of active forces was over to avoid future recruiting and retention problems. Finally, Congress believed that this force level would allow the services to manage the effects of high operations and personnel tempo. The National Defense Authorization Act for Fiscal Year 1997 retained the floor, but allowed the Secretary of Defense the flexibility in certain circumstances to decrease personnel by up to 1 percent of the floor. For the Air Force, this means the number of active duty personnel cannot drop below 377,200. The legislation requires the services to obtain statutory authority for decreases below the 1-percent threshold. Over the last several years, DOD has categorized its planned forces and funding as either mission or infrastructure. Air Force mission forces consist of the fighter wings, bombers, and intercontinental missiles (as defined in the Bottom-Up Review) and the forces that provide direct combat support; intelligence; space support; and command, control, and communications in wartime. Activities that provide support to the mission forces and primarily operate from fixed locations are classified as infrastructure forces. Infrastructure is divided into the following eight categories: acquisition management, force management, installation support, central communications, central logistics, central medical, central personnel, and central training. These categories are described in appendix I. Approximately 140,000, or 36 percent, of the active Air Force personnel are currently categorized as mission forces and 241,000, or 64 percent, are in infrastructure activities. The Secretary of Defense wants to reduce and streamline infrastructure to achieve savings to modernize the force.
In April 1996, we reported that operations and maintenance and the military personnel appropriations must be reduced if spending for infrastructure activities is to decline, since they account for 80 percent of infrastructure funding. In fiscal year 1996, the Air Force spent about $28 billion of its $73 billion total budget on infrastructure activities. As shown in figure 1.1, about 83 percent of the Air Force’s direct infrastructure costs are funded by two appropriations—military personnel and operations and maintenance. One of the Air Force’s major initiatives to generate savings for weapons modernization is to study the potential to contract out infrastructure functions. In deciding whether a function can be transferred to contractors, the Air Force compares the relative cost of using civilian employees and private contractors to perform the same function. DOD data on cost comparisons completed between fiscal year 1978 and 1994 indicates that shifting work to contractors has reduced annual operating cost on average by 31 percent. Our initial work on another assignment indicates that such savings may not be as high as estimated by DOD, but that some savings do result. The Air Force uses a variety of methods to determine personnel requirements. These processes identify requirements as a function of workload or level of service based on assigned missions. The various methods include Air Force staffing standards for positions common throughout the Air Force such as security police; command staffing standards for functions unique to a particular command such as pilot training; a computer-generated model for aircraft maintenance positions; and crew ratios for each type of aircraft in the inventory. The Air Force does not have total control over the allocation of its personnel. For example, legislation and DOD directives establish ceilings on headquarters positions and mandate the number of positions that the Air Force must fill on the joint staff, and in unified commands and defense agencies. Table 1.1 shows the processes the Air Force uses to develop the number of active military positions required and the number of positions mandated by the Office of the Secretary of Defense (OSD) and legislation. The requirements for some military positions are determined either by directives or legislation rather than by the Air Force. For example, the National Defense Authorization Act for Fiscal Year 1996 restricts the Secretary of Defense from reducing military medical personnel unless DOD certifies that the number of people being reduced is excess to current and projected needs and does not increase the cost of services provided under the Civilian Health and Medical Program of the Uniformed Services. Also, the Goldwater-Nichols Defense Reorganization Act of 1986 gave the Secretary of Defense the authority to determine the number of joint officer positions. An April 1981 memorandum from the Deputy Secretary of Defense states that DOD cannot increase or decrease resources that support the National Foreign Intelligence Program without approval from the Director of the Central Intelligence Agency. Likewise, a December 1989 memorandum from the Deputy Secretary of Defense stated that the number of military positions within the Special Operations Command will not be adjusted unless directed by the Deputy Secretary of Defense. 
Because of congressional concerns about active duty personnel levels, we assessed (1) how the size and composition of the active Air Force has changed since 1986, (2) whether the Air Force has any shortages in meeting its wartime requirements, and (3) whether there is potential to reduce the active force further. We did not examine the need for the number of fighter wings, bombers, and intercontinental missiles identified by DOD’s 1993 Bottom-Up Review. We interviewed officials and reviewed documents at OSD and Air Force headquarters, Washington, D.C.; Air Combat Command, Norfolk, Virginia; Air Force Materiel Command, Dayton, Ohio; and Air Education and Training Command, San Antonio, Texas. To determine how the size and composition of the active force has changed, we analyzed data contained in the fiscal year 1997 FYDP and historical FYDPs. The FYDP displays the allocation of resources by programs and activities known as program elements. We used the mapping scheme developed by DOD’s Office of Program Analysis and Evaluation to identify mission and infrastructure program elements. We then compared the changes in active personnel by mission and infrastructure categories between fiscal year 1986 and 1997. We used fiscal year 1986 as a starting point because it represented the peak in the number of active duty personnel, preceding the post-Cold War drawdown. We obtained data on the number of active military positions that are determined by legislation or directives, but did not assess how the requirements for these positions were determined. To determine if the Air Force has wartime personnel shortages, we analyzed the results of the Air Force FORSIZE 95 exercise. To determine if the shortages identified by FORSIZE affected the Air Force ability to carry out the national military strategy, we interviewed Air Force headquarters functional managers to determine whether the shortages were in the forces that deploy to theaters of operation or in forces that sustain operations at bases in the United States. We also discussed their plans to resolve the shortages. Since FORSIZE did not analyze wartime requirements for medical personnel, we obtained data on wartime requirements from the Air Force Office of the Surgeon General. To assess the potential to further reduce the active force, we analyzed the military personnel reductions planned in fiscal year 1998. Our analysis was based on the Air Force’s fiscal year 1998 Budget Estimate Submission provided to OSD. In addition, we reviewed Air Force efforts to identify opportunities to replace military personnel with contractor and civilian personnel. Since these efforts have not been completed, our analysis was limited to reviewing the methodology for identifying potential positions and the plans for approving which positions will be studied or converted. In addition, we used our prior work to identify opportunities to more efficiently organize the active force. We conducted our review from November 1995 through December 1996 in accordance with generally accepted government auditing standards. Between fiscal year 1986 and 1997, the Air Force will reduce its active military personnel from 608,199 to 381,100, or by 37 percent. During this time, mission forces will be reduced at a much greater rate than infrastructure forces—47 percent compared to 30 percent. 
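The percentage reductions cited above follow directly from the personnel counts reported in this chapter; the short sketch below simply reproduces that arithmetic.

def percent_reduction(fy1986_count, fy1997_count):
    return (fy1986_count - fy1997_count) / fy1986_count * 100

# Personnel counts from this chapter (fiscal years 1986 and 1997).
categories = {
    "Total active personnel": (608_199, 381_100),
    "Mission forces": (262_000, 140_000),
    "Infrastructure forces": (346_000, 241_000),
}

for name, (fy1986, fy1997) in categories.items():
    print(f"{name}: about {percent_reduction(fy1986, fy1997):.0f} percent reduction")
# Total active personnel: about 37 percent reduction
# Mission forces: about 47 percent reduction
# Infrastructure forces: about 30 percent reduction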
The Air Force reduced active military personnel primarily by (1) implementing the force structure reductions in accordance with the Bottom-Up Review, (2) closing bases, (3) transferring some missions to the reserves, and (4) reorganizing major commands and headquarters activities. Our analysis also indicated that the 1997 active duty Air Force will have a higher percentage of officers compared with the percentage in 1986. Between fiscal year 1986 and 1997, the Air Force will reduce its mission forces from approximately 262,000 to 140,000, or by 47 percent, as shown in table 2.1. The decrease in combat forces primarily resulted from implementing the Bottom-Up Review force structure, which significantly reduced the number of fighter wings, bombers, and intercontinental missiles. Table 2.2 compares the force structure between fiscal year 1986 and 1997. The Air Force reduced the number of fighter wings by retiring the F-4 and F-111 aircraft and transferring the F-15s required for the air defense of the United States as well as some close air support aircraft (A-10s) to the reserves. The Air Force reduced the bomber force by retiring the FB-111s and many B-52’s. Finally, the Air Force reduced the missile force by eliminating the Minuteman II intercontinental ballistic missiles and the ground launched cruise missiles. The decrease in direct combat support forces resulted primarily from transferring some airlift and refueling missions to the reserves and retiring some electronic warfare aircraft (RF-4Gs) and reconnaissance aircraft (TR-1s). In addition, some of the decrease resulted when the strategic airlift function was moved to the central logistics infrastructure category. The number of military personnel in command, control, and communications positions was reduced by abolishing the Air Force Communications Command. In addition, the increase in technologies such as automation and digital communications allowed the Air Force to assign fewer people to operate and maintain needed capability. The decrease in research, development, test, and evaluation primarily resulted from a change in the way these personnel are categorized. Prior to 1988, about 5,600 positions in acquisition and command support were included in the mission forces under the research, development, test, and evaluation activity. However, in 1988 the majority of these positions were moved to the acquisition infrastructure category when the Air Force merged two commands to form the Air Force Materiel Command. The remaining two mission categories, intelligence and space, gained personnel. While there were decreases in some intelligence functions such as the retirement of the SR-71, they were offset by increases resulting from the creation of the Defense Airborne Reconnaissance Office in 1994. The increase in space forces resulted primarily because some activities that were categorized as acquisition and direct support were transferred to space. Between fiscal year 1986 and 1997, the Air Force will reduce the number of active personnel in infrastructure functions from approximately 346,000 to 241,000, or by 30 percent. Significant decreases occurred in all infrastructure forces except acquisition, central medical, and central logistics as shown in table 2.3. The greatest number of personnel decreases occurred in installation support and central training activities. 
The decline in installation support was caused primarily by the closure of 20 active air bases by the Secretary of Defense’s Base Closure Commission in 1988 and the Base Closure and Realignment Commission in 1991 and 1993. The Air Force also contracted some base operations, which reduced the number of military personnel in installation support. The decrease in central training was related primarily to the decreases in mission force structure. For example, the decrease in the number of wings and bombers resulted in about a 6,700 decline in the number of undergraduate pilot and navigator training positions and about an 8,500 decrease in weapons systems training positions. Likewise, the decrease in the number of strategic forces reduced training requirements by about 2,400 positions. The overall decrease in the number of active personnel caused a decline of approximately 11,300 positions in general skill level training and about 4,900 positions in recruit training units. Approximately another 4,000 positions were eliminated from contracting for base operations at training bases. The primary reasons for the decreases in the other categories are described as follows: • Acquisition management—This category experienced a net decrease of 627 military positions. However, in 1988 the Air Force transferred about 5,600 positions from the mission research, development, test, and evaluation category into acquisition. Since 1988, the number of military personnel in the acquisition has declined by about 5,800. • Force management—The decreases occurred in the weather service, servicewide support, and from consolidation of various headquarters. For example, the Strategic Air Command, the Tactical Air Command, and the Military Airlift Command were combined to form the Air Combat Command and the Air Mobility Command; the Air Force Systems Command and the Air Force Logistics Command were combined to form the Air Force Materiel Command. • Central communications—Approximately 3,900 of the decrease occurred because the smaller number of fighter wings and bombers required fewer air traffic control personnel. • Central medical—The number of personnel in central medical has not decreased significantly. OSD and the services are currently assessing post-Cold War medical requirements. OSD is currently updating a 1994 study that will provide new estimates of wartime medical demands. However, the scheduled March 1996 completion has been delayed because OSD and the services advocate using different assumptions and methodologies for factors such as population-at-risk and casualty replacements, which affect overall medical requirements. • Central personnel—The decrease resulted primarily because the smaller force has reduced the number of permanent change-of-station moves, accessions, and training requirements, which has reduced the number of people in transit. Central logistics is the only infrastructure category that had a net increase of personnel. The increase resulted from a change in the way personnel associated with strategic airlift are categorized. Prior to 1992, airlift personnel were counted as direct support mission forces. However, they were moved to the central logistics category in fiscal year 1992 when the U.S. Transportation Command assumed responsibility for management of air transportation in peacetime. Between fiscal year 1986 and 1997, enlisted personnel will be reduced by 39 percent and officers by 32 percent. 
Our analysis shows a proportionate decline in officer and enlisted personnel in mission forces, but a higher percentage decrease of enlisted personnel in infrastructure activities, as shown in table 2.4. According to Air Force officials, one reason for the smaller percentage decrease of officers versus enlisted personnel in infrastructure functions is the disproportionate reduction of enlisted personnel in base operations support, where major decreases have occurred. Generally, there has been one officer for every 10 enlisted positions in this category. However, our analysis of this category showed that between fiscal years 1986 and 1997, the Air Force eliminated 46,349 enlisted and 3,256 officer positions, or about 14 enlisted positions for every officer position eliminated. Another reason for the smaller percentage decrease is that medical and joint/DOD positions, which have a high number of officers, are classified as infrastructure. As shown in table 2.5, the number of medical positions has remained relatively stable and the number of joint positions has decreased by 20 percent between fiscal years 1986 and 1997, while the active force as a whole declined by 37 percent. As a result, these positions have increased from 8 percent of the active force in fiscal year 1986 to 13 percent in fiscal year 1997. In November 1995, DOD's Office of the Inspector General reported that although the services have reduced the number of active duty personnel, there has not been a corresponding decrease in the number of positions that must be filled on the Joint Staff and in defense agencies. The report noted that the services must still give priority to joint staffing, with a substantially smaller resource pool. Finally, the Inspector General found that no standard methodology or criteria are used to determine and validate personnel requirements for positions on the Joint Staff or in defense agencies. The National Defense Authorization Act for Fiscal Year 1997 requires us to review DOD's actions in response to the Inspector General's report.
FORSIZE estimates the number of active and reserve forces and civilians needed to (1) deploy to support two MRCs, (2) support strategic missions such as airlift and space, and (3) sustain base operations during wartime. The initial exercise was in 1988; subsequent exercises were conducted in 1994 and 1995. There were no exercises in 1989 through 1993 because of the changing world environment, numerous Air Force command reorganizations, and the Persian Gulf War. FORSIZE 95 did not estimate medical requirements since OSD is conducting a separate study on these requirements. FORSIZE 95, which was completed in February 1996, projects wartime requirements for fiscal year 1997. As a starting point for FORSIZE, the Air Force develops a Time Phased Force Deployment List to deploy all 20 active and reserve fighter wings and bombers required by the Bottom-Up Review. In addition, FORSIZE determines requirements for personnel needed to operate at three additional bare bases (airfields with no supporting infrastructure) and to replace casualties (personnel that are killed or wounded and cannot return to duty). Air Force officials stated that the requirement for the bare bases is based on the Air Force’s experience during the Gulf War and other past operations. The number of aviator positions included in FORSIZE is based on the crew ratio established for each aircraft in the Air Force’s inventory. A crew ratio is the number of aircrews authorized per aircraft and is established to enable the Air Force to meet expected wartime sortie rates. For example, the current crew ratio for the F-15C is 1.25, which means that 1.25 pilots are authorized for each F-15C in the active inventory. Air Force officials noted that the actual sortie rates during the Gulf War were higher than could have been flown under the Air Force’s funded crew ratios and that additional pilots from units that had not deployed were therefore used. On the basis of this experience, the Air Force has increased the crew ratio for some aircraft, increased aircraft spares, and plans to use additional pilots from the schools to achieve higher sortie rates. Air Force officials noted that it is not economically feasible to increase the crew ratios beyond current levels because they would have to buy additional aircraft and spares in order to keep all crews properly trained. Personnel requirements for strategic and sustainment forces are determined at base level for 36 functional areas such as security police, transportation, and munitions. In determining these requirements, FORSIZE assumes that functions currently performed by military personnel will stay military. These base level assessments are intended to ensure that the Air Force has sufficient personnel at bases in the United States and overseas to (1) protect and maintain bases, (2) re-supply deploying forces, and (3) provide support to families of Air Force personnel who deploy to war and those that remain at their locations. FORSIZE then compares these requirements with authorized personnel by functional areas to determine if the Air Force has enough personnel to carry out missions specified in defense guidance. On the basis of FORSIZE 95, the Air Force concluded that it requires 364,324 active military personnel to meet its wartime requirements (not including medical). FORSIZE did not consider whether some functions that do not deploy could be met with other than military personnel such as civilian employees or contractors. 
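To make the two calculations described above concrete, the following is a minimal sketch, not an actual FORSIZE computation: it applies a crew ratio to an aircraft inventory to derive aviator requirements and then compares notional functional-area requirements with authorizations to arrive at a net shortfall. Only the 1.25 F-15C crew ratio comes from the discussion above; the aircraft count and all functional-area figures are hypothetical placeholders.

```python
# Illustrative sketch only; all figures except the 1.25 F-15C crew ratio are
# hypothetical and do not come from FORSIZE 95.

# Aviator requirements: authorized crews = aircraft inventory x crew ratio.
aircraft_inventory = {"F-15C": 24}            # notional squadron of 24 aircraft
crew_ratio = {"F-15C": 1.25}                  # 1.25 crews authorized per F-15C
aviator_requirement = {jet: round(n * crew_ratio[jet])
                       for jet, n in aircraft_inventory.items()}   # {"F-15C": 30}

# Strategic and sustainment requirements are compared with authorizations by
# functional area; the per-area shortfalls sum to a net wartime shortage.
requirements = {"security police": 12_000, "transportation": 9_000}
authorized = {"security police": 10_500, "transportation": 8_200}
shortage_by_area = {area: max(requirements[area] - authorized.get(area, 0), 0)
                    for area in requirements}
net_shortage = sum(shortage_by_area.values())  # 2,300 in this illustration
print(aviator_requirement, shortage_by_area, net_shortage)
```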
As shown in figure 3.1, the forces that deploy make up approximately one-third of the Air Force’s active military personnel requirements; strategic and sustainment forces account for the remaining two-thirds.

The wartime personnel requirements estimated during FORSIZE include requirements to replace casualties. This number is classified but is based on two key elements. First, the population-at-risk is determined by an Air Force threat model. The population-at-risk includes the day-to-day casualty stream of personnel within the two theaters of operation who are expected to be killed in action, wounded in action, or otherwise disabled by disease or non-battle injuries. Second, casualty rates for each career field are established based on proximity to the war zone. The closer the career field is to the war zone, the higher the casualty rate. For example, maintenance personnel on the flight line have a higher casualty rate than maintenance personnel working in a rear area.

FORSIZE 95 identified a net active shortage of 19,585 personnel needed to meet wartime requirements. According to Air Force officials, the shortage poses little risk to carrying out the two-MRC strategy because (1) it is predominantly in the forces that sustain base operations in the United States and not in the deploying forces and (2) other alternatives exist to cover most of the shortfall. Security police, transportation, intelligence, maintenance, and munitions account for approximately 16,300 positions, or 83 percent, of the Air Force’s total shortage (see table 3.1). The remaining shortage occurred in 10 other functional areas. According to Air Force officials, all of the shortages, except munitions, are associated with requirements for sustaining forces. The munitions shortage exists because the Air Force has a shortage of military personnel in the bomb assembly and bomb loading specialties for the bomber force. In September 1996, we reported that the Air Force cannot meet its war-fighting requirement to support the full complement of B-1B and B-52H bombers allocated to regional commanders due to these personnel shortages. The Air Staff has tasked the Air Combat Command to develop a plan and identify funding requirements to eliminate the shortages using active or reserve personnel or a combination of both.

According to Air Force officials, the security police shortage would occur in the sustaining force when some security police personnel guarding bases in the United States deployed to theaters of operation during wartime. Such deployment would create a shortage of security personnel to guard bases in the United States. Air Force security police personnel told us they could work around the shortfall by increasing work shifts, closing some gates at bases, and taking advantage of new sensor technology. In addition, one official noted that the Air Force could also contract for part-time security personnel. The transportation shortfall relates primarily to personnel who operate and maintain base motor pools in the United States. According to transportation officials, the individual ready reserve could be used to offset some of the shortage. According to a maintenance official, the maintenance shortfall represents only 2 percent of total maintenance requirements and is spread throughout a number of career fields, including jet engines, guidance and control, avionics systems, fabrication and parachute, and aircraft metal and technology.
Maintenance officials told us that, because the maintenance shortfall is so small and would not impact mission readiness, they have no plans to examine alternatives to cover it. Few of the 10 remaining functional areas, which included such functions as comptroller, fuels, judge advocate, and weather, have significant shortages. Most have a shortage that ranges between 2 and 4 percent of their wartime requirement. According to Air Force officials, these shortages will be covered primarily by using the individual ready reserve and other management actions.

In nine functional areas, authorized personnel exceeded requirements by 1,266, but the Air Force did not reallocate any of these positions to functional areas with shortages. For example, the education and training functional area had an excess of 244 personnel, but senior Air Force officials decided not to reallocate these positions until ongoing training initiatives have been completed. Likewise, there was an excess of 302 personnel in communications, but no action was taken because the career field is being merged with information management, which showed a shortage. According to an Air Force official, another reason the Air Force decided not to reallocate personnel is that the ongoing Quadrennial Defense Review may change the current national military strategy, which could change the Air Force’s active requirements and the need to reallocate personnel.

OSD and Air Force analyses indicate the Air Force has more active duty medical personnel than needed for wartime requirements, but they have not yet agreed on the actual number of personnel to be reduced. A 1994 OSD study concluded that the number of medical positions within the services exceeded projected wartime requirements. This study is currently being updated because of the services’ concerns regarding the assumptions made to treat casualties and maintain peacetime operational readiness and training. However, a separate Air Force analysis showed the Air Force has about 5,900 active military medical personnel in excess of projected wartime requirements. The Air Force expects that the ongoing OSD study will recommend reductions in medical personnel, so the Air Force plans to reduce the number of medical personnel in fiscal years 1998 through 2003.

The Air Force does not assess personnel requirements for OOTWs under FORSIZE. According to Air Force officials, defense guidance assumes that the existing force requirements developed for the two MRCs can accomplish OOTW deployments without posing additional requirements. Nonetheless, headquarters Air Force and Air Combat Command officials are concerned about the high operations tempo OOTWs impose on certain units and believe the Air Force must closely manage its OOTW taskings to ensure certain units are not used excessively. Due to growing concern about the impact of OOTWs, the Air Combat Command sponsored a study of fiscal year 1994 deployment taskings. The study concluded that the Air Force did not need to increase personnel levels due to contingency operations, but noted that some functional areas were more impacted by contingency deployments than others. The study also concluded that commands and installations need to place more emphasis on the accuracy and completeness of data reported for deployment requirements and actual deployments to promote a fairer distribution of taskings throughout units and across commands.
Air Force data shows that, with the exception of one type of unit in the Air Force Special Operations Command, most units that exceeded the Air Force goal of being deployed no more than 120 days per year are in the Air Combat Command. Figure 3.2 shows the Air Combat Command units that exceeded the 120-day goal in 1995 and 1996.

To reduce the impact of OOTWs on certain units, the Air Force has implemented a policy to balance the workload throughout the Air Force, reduce taskings where appropriate, and make more use of reserve forces. For example, in 1995, the Air Combat Command chose not to send A/OA-10 aircraft to fiscal year 1996 National Training and Joint Readiness Training Centers exercises in order to reduce temporary travel for these units. The Air Force has also activated associate reserve squadrons for KC-135 refueling and E-3 Airborne Warning and Control System aircraft. Additionally, both the Air Force Reserve and the Air National Guard are now supporting a greater share of OOTW and other contingency taskings and have increased their participation in Joint Chiefs of Staff-sponsored exercises. This has been possible primarily due to the Air Force’s success in encouraging reservists to volunteer for such duty.

DOD stated that although FORSIZE identified an active shortage of about 19,600 personnel, this shortage could be addressed with a variety of sources, including technology, civilians, contractors, and Air National Guard and Air Force Reserve personnel. We agreed with DOD’s position. Our report reflects that the Air Force has identified several ways to compensate for these wartime shortages.

Potential exists to reduce the number of active duty Air Force personnel significantly below the congressional floor of 381,000. In fiscal year 1998, the Air Force plans to seek statutory authority to reduce the number of active duty personnel to about 371,600, or 9,400 below the current floor. In addition, a preliminary Air Staff review of the infrastructure force has identified a potential to reduce the active force by as many as 75,000 positions beyond fiscal year 1998 by contracting out some functions now performed by military personnel and converting some military positions to civilian positions. Our prior work indicates that savings can occur by contracting out functions in lieu of using military personnel, and significant opportunities exist to convert military positions to less costly civilian positions. Some opportunities may also exist to reduce mission forces. Our prior work has shown the Air Force could reduce active personnel requirements by increasing the size of its fighter squadrons and transferring some bombers to the reserves. In addition, several ongoing defense studies, such as the Deep Attack Weapons Mix Study and the Quadrennial Defense Review, could affect the Air Force’s future active duty personnel requirements.

On the basis of the Air Force’s fiscal year 1998 budget proposal provided to OSD, the Air Force plans to seek statutory authority to reduce its active military end strength to about 371,600, or 9,400 below the current congressional floor. Air Force officials stated the planned personnel reductions will not lessen the Air Force’s war-fighting capability, since they are primarily in infrastructure-related functions. Our analysis of the planned decrease shows that 1,125, or 12 percent, are in mission forces and 8,415, or 88 percent, are in infrastructure forces, as shown in table 4.1. The planned decrease in mission forces results primarily from three initiatives.
First, the final drawdown of intercontinental ballistic missiles under the first Strategic Arms Reduction Treaty will reduce mission forces by 1,014 personnel. Second, the Air Force plans to retire the EF-111 electronic support aircraft in fiscal year 1998, which would reduce active military personnel by 525. However, the Air Force is concerned the Navy may not assume the electronic warfare mission within the planned time frame, which could delay these planned reductions. Finally, the Air Force plans to retire 8 C-130 aircraft, which will eliminate 360 positions. This reduction is based on a Joint Staff study that showed the Air Force has excess intra-theater airlift capacity. The decreases in mission forces are largely offset by increases related to funding six additional B-1B bombers for training and combat operations from the reconstitution reserve, activating an unmanned aerial vehicle squadron, and adding one Joint Surveillance Target Attack Radar System (E-8) and one Rivet Joint (RC-135) aircraft. Air Force officials noted the additional Joint Surveillance Target Attack Radar System and Rivet Joint aircraft will help alleviate the high personnel tempo in these units.

The planned infrastructure decreases are based primarily on the Air Force’s plans to have either civilian employees or contractors perform installation support and communication functions now performed by about 2,500 military personnel. The Air Force determined that these positions do not require military personnel because they do not deploy and are not needed to support overseas rotation. Therefore, it plans to study the cost-effectiveness of contracting out the functions or using civilian employees. The planned decrease in installation support also includes 360 positions providing base operating support at Howard Air Force Base in Panama. The Air Force assumes that all military personnel will withdraw from Panama after the United States turns control of the Panama Canal over to Panama in 1999. However, the State Department has recently announced an effort to study the possibility of keeping some U.S. military personnel in Panama after the transfer, which may impact the Air Force’s plans.

The decrease in central medical personnel represents the start of an effort to align peacetime staffing with wartime requirements. A study by the Office of the Air Force Surgeon General showed the Air Force needed only 86 percent of its projected fiscal year 1999 medical personnel to meet wartime medical needs. The Air Force has programmed a 4.5-percent reduction (1,748 personnel) through fiscal year 2003. According to Air Force officials, it will take up to 12 years to eliminate the remaining excess positions in order to minimize personnel turbulence and the impact on peacetime patient care. An Air Force official stated that even though the OSD study on post-Cold War medical requirements has not been completed, officials in the Office of the Air Force Surgeon General believe the study will recommend that the services reduce the number of medical personnel. Thus, these officials believe it is prudent to start reducing the number of medical personnel now. The need for such reductions must be certified by the Secretary of Defense under 10 U.S.C. 129c.

The decrease in central personnel represents a decline in the number of personnel in transit. An Air Force official stated that the smaller force has reduced the number of permanent change-of-station moves, accessions, and training requirements, which reduces the number of people in transit.
The changes in force management are caused primarily by decreases in the number of positions in the Air Weather Service, support to the Defense Finance and Accounting Service, and headquarters activities. Finally, the decrease in acquisition and the increase in central logistics are due to the transfer of base operations functions at test centers from the acquisition category to central logistics. Congress has directed DOD to prepare a plan to reduce the number of military and civilian personnel involved in acquisition by 25 percent over a period of 5 years beginning in fiscal year 1996. Air Force officials stated that they have not programmed this additional decrease because OSD and the services have not agreed on the definition of the acquisition workforce or the baseline for measuring the reductions.

The Air Force has not yet fully assessed the potential for substituting less costly civilian employees or contractors for some of the active duty personnel currently assigned to infrastructure activities. In the past, the Air Force has not periodically reviewed all of its positions to determine whether they must be filled by military personnel. However, the Air Force has recently begun an effort to identify such savings to help fund force modernization. Three ongoing Air Force studies have identified the potential for eliminating a significant number of active duty personnel. Two studies involve the potential to contract out commercial activity functions now being performed by military and civilian personnel, and another involves the potential for converting military positions in inherently governmental functions to civilian positions. The Air Force’s ability to reduce the number of military positions identified in the ongoing studies could be constrained by DOD goals for reducing civilian positions.

DOD Directive 1100.4 requires the services to staff positions with civilian personnel unless the services deem that the positions must be filled by military personnel for one or more reasons, including combat readiness, legal requirements, rotation, security, training, and discipline. In addition, Office of Management and Budget Circular A-76 classifies government activities as either inherently governmental functions or commercial activities. Inherently governmental functions—those intimately related to the public interest, such as fund control—must be done by federal employees. A commercial activity can be an entire organization or part of an organization that provides a product or service obtainable from a commercial source. Commercial activities include functions such as vehicle and facilities maintenance, automated data processing, and administrative support. Circular A-76 sets forth the procedures for agencies to study whether the functions could be done more economically by contractors.

An ongoing Air Force study has identified about 52,600 active military positions allocated to functions that could potentially be performed by contractors or civilian employees. These positions have tentatively been identified as not military essential because their personnel do not deploy, do not support the rotation of forces to overseas bases and operations, and do not perform unique military missions or functions. The Air Force study is scheduled to be completed by the end of April 1997. The functional areas under review consist of all military positions in commercial activities within the Air Force’s major commands in the continental United States and some overseas locations.
The Air Force has about 160,400 military positions in commercial activities. The Air Force has deemed that 82,700 of these positions must be filled by military personnel because they would deploy during wartime; about another 25,100 of these positions are in military-unique functions such as headquarters activities, recruiting, basic military training, and those personnel needed to maintain an overseas rotation base. Once these positions were eliminated from consideration, the Air Force was left with about 52,600 military positions in commercial activities that could be studied for possible conversion, as shown in table 4.2.

To further assess the potential to contract out or use civilian employees for these positions, the Air Staff has provided each major command with the number of positions within that command that are candidates for conversion. Each command is currently identifying the positions by base and organization to determine how many functions could be studied further to assess the relative cost savings associated with replacing military personnel with either contractors or civilian employees. The major commands are also required to identify barriers to contracting and recommend ways to overcome them. For example, current Air Force procedure exempts a unit from being studied as a candidate for conversion if any personnel in the unit are expected to deploy. Air Staff officials noted the major commands may be able to identify ways around this problem in some cases, such as reorganizing units or transferring functions between bases.

In a November 1996 letter to the Secretary of Defense, the Secretary of the Air Force stated that DOD’s existing civilian workyear policy needs to be modified so the Air Force can achieve savings by replacing military personnel assigned to positions that are not military essential with civilians. The letter noted that the Air Force’s experience has shown that 40 percent of the cost comparison studies performed since 1979 determined that an in-house civilian workforce was more cost-effective than contractors. When a function that was predominantly performed by military personnel remains in-house, the Air Force may have to increase the number of civilian employees, which runs counter to DOD’s efforts to reduce its civilian workforce. For example, the maintenance training function at Altus Air Force Base was performed by 1,444 personnel, of whom 1,401 were military and 43 were civilian employees. The cost comparison showed that an in-house civilian workforce would be more cost-effective than using the private sector. Thus, the Air Force had to increase the number of civilian employees by 692 in order to achieve the projected savings. The Secretary of the Air Force stated that the goals for civilian downsizing pose a disincentive for accomplishing work in the least costly manner and that some consideration should be given to relaxing civilian downsizing goals in such cases.

The Air Force is also conducting a study to determine if there are opportunities to consolidate its 126 precision measurement electronic laboratories and have the work performed by civilian employees or contractors. There are about 1,200 military personnel in 50 labs in the active force, and the remaining labs are operated by contractors or are in the guard and reserve forces. These personnel are not included in the universe of military positions in commercial activities that could potentially be performed by civilian employees or contractors.
According to an Air Force official, the preliminary study results indicate that the Air Force could consolidate from 126 to around 50 labs. This official noted that the final report, scheduled to be issued in April 1997, will contain a plan for consolidating the labs as well as for conducting cost comparison studies.

The Air Force reviewed all military positions in inherently governmental functions to determine if military personnel are required. Military personnel were considered necessary if the position deployed, supported overseas rotation, was required by law, or was in a unique military function such as the honor guard or recruiting. On the basis of these criteria, the Air Force identified approximately 21,600 military positions that are not military essential and can potentially be converted to civilian positions, as shown in table 4.3. Air Force officials told us they were preparing a briefing for senior Air Force leadership on the issues concerning military to civilian conversions. These officials stated that some of the major commands believe that many of the positions should remain military. For example, the Air Force Materiel Command believes all the acquisition positions should remain military because military personnel assigned to these positions bring operational and flightline experience, which is invaluable to developing new systems. However, we believe there is a good basis for studying the potential to replace some military personnel assigned to acquisition functions with civilian employees. According to DOD’s fiscal year 1997 FYDP, 41 percent of the Air Force’s acquisition workforce is military while only 12 percent of the Army’s is military. An Air Force official stated that the senior Air Force leadership will decide which, if any, positions will be converted from military to civilian. This official stated that no date has been set for the briefing.

In October 1996, we reported that the Air Force could save $69 million by converting 6,800 officer positions in such fields as acquisition and financial management to civilian positions because they are not military essential. We found that civilian employees cost between $1,261 and $15,731 less annually than military personnel, depending on grade and rank. In October 1994, we reported that similar opportunities exist for converting enlisted support positions to civilian employees. Both of our reports noted that a number of impediments exist to military to civilian conversions. For example, guidance provides commanders with wide latitude in justifying the use of military personnel, and local commanders are perceived to prefer military personnel rather than civilian employees in certain positions. Nonetheless, we noted these barriers can be overcome with active participation of senior managers. DOD concurred with our reports and agreed to convene a panel of senior managers within OSD, the Joint Staff, and the military services to examine the issue of military to civilian conversions. An OSD official stated that the issues concerning military to civilian conversions will be addressed as part of the Quadrennial Defense Review.

Until recently, Air Force fighter wings were predominantly organized in three squadrons of 24 aircraft. However, the Air Force has decided to reduce its squadron size to 18 aircraft, which also reduced its wing size to 54. This change in unit size increased the number of wings and squadrons to more than would have been needed had the squadron size stayed at 24.
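A simple way to see the effect of the smaller squadron size is to note that, for any fixed aircraft inventory N, the number of squadrons needed scales with the inverse of the squadron size. This back-of-the-envelope ratio is ours, not a figure from the Air Force's analysis:

```latex
\frac{\text{squadrons needed at 18 aircraft each}}{\text{squadrons needed at 24 aircraft each}}
  = \frac{N/18}{N/24} = \frac{24}{18} \approx 1.33
```

That is, fielding the same number of aircraft in 18-aircraft squadrons requires roughly one-third more squadrons, with the associated command and support overhead.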
In May 1996, we reported that the Air Force’s arguments for using smaller squadrons do not justify the additional cost. Air Force officials maintain that more squadrons are needed to provide the Air Force flexibility to respond to numerous potential conflicts across the globe. Although the Air Force considers smaller fighter squadrons beneficial, it had not performed any analysis to justify its decision. We developed several options for consolidating the fighter force that would permit the Air Force to maintain the same number of aircraft but carry out its missions with fewer active duty personnel. Our options could eliminate between two and seven squadrons, could also eliminate a wing and/or fighter base, and could reduce operating costs by up to $115 million annually.

The Air Force’s requirements for active duty personnel could also be affected by several ongoing initiatives and studies. These include an Air Force study of the active/reserve force mix, DOD’s Deep Attack Weapons Mix Study, and the Quadrennial Defense Review required by the National Defense Authorization Act for Fiscal Year 1997. The Air Force is assessing options to transfer some functions now performed by the active force to the reserves. The Air Force plans to examine changes to the mix of active to reserve forces after the Quadrennial Defense Review is completed. Moreover, in our September 1996 report on DOD’s bomber force, we reported that one option for restructuring the bomber force would be to place more B-1Bs in the Air National Guard. This option would reduce the cost of maintaining DOD’s bomber force while maintaining DOD’s force of 95 B-1Bs. In 1993, DOD reported to Congress that placing B-1Bs in the Air National Guard would result in no loss of war-fighting capability. A major benefit of transferring bombers to the reserve component is that reserve units have traditionally been less expensive to operate than their active duty counterparts. These savings are attributed to two factors. First, DOD expects that an Air National Guard squadron will require fewer flying hours than an active squadron because Air National Guard units are able to recruit more experienced pilots who require less frequent training to maintain their proficiency. Personnel costs are the second major factor that accounts for the Air National Guard’s lower cost. In comparison with active squadrons that consist primarily of active duty military personnel, Air National Guard units rely heavily on less-costly civilians and part-time Guard personnel.

In addition, DOD’s ongoing Deep Attack Weapons Mix Study could change DOD’s requirements for fighters and bombers, which would impact Air Force military personnel requirements. The Commission on Roles and Missions recommended that DOD conduct a DOD-wide cost-effectiveness study to determine the appropriate number and mix of deep attack capabilities currently fielded and under development by all the services. The first part of the study, which was to be completed in late 1996, was expected to analyze weapons mix requirements for DOD’s planned force structure in 1998, 2006, and 2014 and determine the impact of force structure changes on the weapon systems mix. As of February 1997, OSD was reviewing the results of this first phase and had not made the results public. The second part of the study will analyze trade-offs among elements of the force structure, such as bombers and tactical aircraft, for the same years and is to be completed in early 1997.
The study should provide DOD with an opportunity to identify options to reduce some of its extensive ground attack capabilities, which could impact requirements for active duty personnel.

The National Defense Authorization Act for Fiscal Year 1997 requires the Secretary of Defense to conduct a quadrennial review of the defense program. The first review, now underway, is scheduled to be completed in May 1997. It will examine defense strategy, force structure, force modernization, and infrastructure and develop a defense strategy to the year 2005. The legislation also established a National Defense Panel to provide an independent assessment of DOD’s quadrennial review as well as to develop alternative force structures that could meet anticipated threats to the national security of the United States. The results of these studies could also impact the number of active duty military personnel.

Potential exists to replace active military personnel with contractors or civilian employees. These potential reductions should not impact the Air Force’s ability to implement the national military strategy, since they are in the infrastructure forces rather than in the forces that deploy during wartime. The actual number of active military positions that could be eliminated depends on the results of several ongoing initiatives as well as on the senior Air Force leadership’s commitment to reducing infrastructure to fund force modernization. We believe that it is important for the Air Force to move as quickly as possible to complete its studies and make the conversions to contractors and civilian employees in view of the recurring savings that could be achieved. Developing a plan and time frames for such cost comparisons and conversions would permit the Air Force leadership to monitor efforts to reduce infrastructure. DOD has stated it must reduce infrastructure costs in order to modernize its force.

Several ongoing Air Force studies have identified the potential to replace military personnel with contractors or civilian employees. Therefore, we recommend that, once the ongoing studies are completed, the Secretary of the Air Force develop a plan that identifies time frames for studying whether it is more cost-effective to transfer commercial activities now performed by military personnel to civilian employees or private contractors and that includes time frames for converting military positions in inherently governmental functions to civilian positions.

DOD fully concurred with two parts of our recommendation and partially concurred with one part. DOD stated that an existing system already tracks the services’ progress in completing cost comparison studies and converting positions, so there is no need to establish an additional system. We agreed with DOD and have modified the recommendation accordingly.
GAO reviewed the Air Force's personnel reduction efforts, focusing on: (1) how the size and composition of the active Air Force has changed since 1986; (2) whether the Air Force has sufficient numbers of personnel to meet wartime requirements; and (3) whether there is potential to further reduce the active force that could result in a more efficient force. GAO noted that: (1) between fiscal year (FY) 1986 and 1997, the Air Force will reduce its active military personnel from over 600,000 to 381,100, or by 37 percent; (2) mission forces have been reduced at a much greater rate than infrastructure forces during the last decade; (3) as a result, approximately two-thirds of the Air Force's 381,100 active duty personnel are now allocated to infrastructure functions such as installation support and acquisition; (4) further, today's smaller force has a higher ratio of officers than in 1986; (5) potential exists to reduce the active Air Force below the 381,100 minimum level set by Congress, without adversely affecting the Air Force's war-fighting capability; (6) in May 1996, GAO suggested options to consolidate fighter squadrons which, if implemented, would permit the Air Force to maintain the same number of aircraft but carry out its missions with fewer active duty personnel; (7) GAO has also reported that the Air Force could achieve savings by replacing military personnel in some administrative and support positions with civilian employees; (8) for FY 1998, the Air Force plans to seek statutory authority to reduce the active force by about 9,400 below the current minimum; (9) GAO's analysis shows the majority of these planned decreases are in infrastructure functions; (10) prompted by the Secretary of Defense's goal to reduce infrastructure to free funds for force modernization, the Air Force has recently identified a potential to reduce the active force by as many as 75,000 additional military personnel beyond FY 1998; (11) the Air Force is reviewing options for replacing military personnel assigned to infrastructure functions with civilian employees or contractors that may be able to perform some functions at less cost than military personnel; (12) the actual number of active personnel that will ultimately be replaced will depend on the results of continuing Air Force analysis to determine whether such substitutions will be organizationally feasible and cost-effective; (13) the Air Force projects it would have an active wartime shortage of about 19,600 personnel if two major regional conflicts occurred; (14) however, the Air Force does not need additional active personnel to cover this wartime shortage because it has identified ways to compensate for the shortage; and (15) moreover, this shortage would present little risk in carrying out the military strategy since it primarily affects forces that would provide operating support for bases in the United States rather than in the forces that would deploy to war.
The U.S. Commission on Civil Rights was established to serve as an independent, bipartisan, fact-finding agency whose mission is to investigate and report on the status of civil rights in the United States. It is required to study the impact of federal civil rights laws and policies with regard to discrimination or denial of equal protection of the laws. According to its statutory mission, the Commission also serves as a national clearinghouse for information related to its mission and investigates charges of citizens being deprived of the right to vote because of color, race, religion, sex, age, disability, or national origin. For the purpose of carrying out its mission, the Commission may hold hearings and has the power to administer oaths, issue subpoenas for the attendance of witnesses and the production of written materials, take depositions, and use written interrogatories to obtain information about matters that are the subject of a Commission hearing or report. However, because the Commission lacks enforcement powers that would enable it to apply remedies in individual cases, the Commission refers specific complaints to the appropriate federal, state, or local government agency for action. Its operations are also governed by the provisions of the Sunshine Act, which requires the Commission to open most of its meetings to the public.

By statute, the structure of the Commission has three key components—the Commissioners, the Staff Director, and the state advisory committees:
• The Commission is directed by eight part-time Commissioners who serve 6-year staggered terms. Four Commissioners are appointed by the President, two by the President Pro Tempore of the Senate, and two by the Speaker of the House of Representatives. With the concurrence of a majority of the Commission’s members, the President also designates a Chairperson and Vice Chairperson from among the Commissioners. No more than four Commissioners can be of the same political party.
• The Staff Director is appointed by the President with the concurrence of a majority of the Commissioners. A full-time employee, the Staff Director serves as the administrative head of the Commission. All Commission offices and senior staff report directly to the Staff Director.
• The Commission has established 51 state advisory committees composed of private citizens appointed by the Commission who volunteer to assist the agency by identifying local civil rights issues, some of which may become important at the national level. Each committee has a minimum of 11 members. The state advisory committees are supported by six regional offices whose primary function is to assist the state committees in their planning, fact-finding, and reporting activities.

The Commission’s annual appropriation has averaged about $9 million for more than 10 years, with salaries and benefits constituting about 73 percent. Because of level funding since fiscal year 1995, the total number of full-time equivalent employees steadily declined from 95 in fiscal year 1995 to 64 in fiscal year 2004. As of January 1, 2006, the number of staff had further declined to 46 full-time staff nationwide, excluding the Commissioners; 9 of the 46 staff were professionals in regional offices. After December 2004, when a new Chair, Commissioner, and Staff Director were appointed, the Commission began to reevaluate its product development policies and matters related to the operations of its state advisory committees.
Because the Commission has no enforcement authority, the “force of its work derives from its scholarly reports.” The Commission’s work was intended from the outset to be “objective and free from partisanship . . . broad and at the same time thorough,” as the Attorney General noted when he transmitted the legislative proposal that established the Commission—the Civil Rights Act of 1957. The primary written product produced by the Commission’s national office is a statutorily required annual report on federal civil rights enforcement efforts. This statutory report, which is transmitted to the President and Congress, contains findings, conclusions, and recommendations and is published by the national office. In addition, the national office produces other studies, such as reports on federal funding for civil rights programs and letters to agencies or members of Congress on civil rights issues. The Commission also invites speakers, such as attorneys and scholars, to brief the Commissioners on civil rights issues upon request at the Commission’s regular (generally, monthly) public meetings. Such briefings can also serve as the basis for Commission reports that include the speakers’ written statements. The Commission has also conducted public hearings with witnesses as part of its investigative and fact-finding mission. The Commission’s professional staff researches and writes its national office reports and organizes Commission briefings and fact-finding hearings. In addition to the Commission’s national office products, the state advisory committees produce written reports that are based on fact- finding hearings and other public meetings. State advisory committee members propose civil rights topics for study, participate in state and local hearings and public meetings that they sponsor, review draft reports, and vote to approve state advisory committee reports to be sent to the Commission. Fact-finding reports may contain findings, conclusions, and recommendations for action. State advisory committees also issue reports that summarize speakers’ presentations at conferences and public hearings held by the committee. The Commission’s regional staff provide support to the state committees by organizing and attending their meetings, hearings, and other public events, and by researching and drafting reports for the committees. The Staff Director and Commissioners play key roles in approving the Commission’s products. The Staff Director is responsible, among other duties, for approving all national office project proposals, project designs, and draft products before they are forwarded to the Commissioners for review. The Staff Director also approves all state advisory committee activities, project proposals, and reports. Commissioners vote to approve national office products at key stages, such as project proposals and final drafts, and they also receive all state advisory committee final reports but do not vote to accept or reject them. The Commission’s quality assurance policies for its national office and state advisory committee products are set forth in its Administrative Manual, Legal Sufficiency and Defame and Degrade Manual, and Hearing Manual. In addition, the Commission’s quality assurance policies for its state advisory committee products are set forth in the Commission’s State Advisory Committee Handbook, published by the Commission in February 1998. 
The Commission’s policies for its state advisory committees provide guidance for developing and approving project proposals and reports and conducting fact-finding hearings and public meetings. Some of the Commission’s regional offices also have issued memorandums and other documents on policies affecting their products. (See apps. V and VI for further information on the Commission’s policies and processes for developing and approving national office and state advisory committee products.) The Commission’s state advisory committees were established to function as the “eyes and ears” of the Commission on civil rights issues. The Commission’s statute authorizes the creation of advisory committees and directs the Commission to establish at least one advisory committee in every state and the District of Columbia. Each state committee has a charter that enables it to operate and identifies its members. Each charter is valid for a term of 2 years, and the committee terminates if the charter is not renewed at the end of the term. The Commission is responsible for renewing state advisory committee charters. The mission of the state advisory committees is to investigate within their states any subject that the Commission itself is authorized to investigate and provide advice to the Commission in writing about their findings and recommendations. The committees must confine their studies to the state covered by their charters. They are not limited to subjects chosen by the Commission for their study but may study any subjects within the purview of the Commission’s statute. More specifically, the state advisory committees advise the Commission about (1) any alleged denials of the right to vote due to discrimination or fraud, (2) any matters related to discrimination or denial of equal protection of the law and the effect federal laws and policies have with respect to equal protection of the laws, and (3) any matters of mutual concern in the preparation of reports of the Commission to the President and Congress. Advisory committees are also charged to receive reports, suggestions, and recommendations from individuals, public and private organizations, and public officials upon matters pertinent to advisory committee inquiries; assist the Commission in the exercise of its clearinghouse function; and, attend, as observers, any open hearing or conference that the Commission may hold within their state. To carry out their mission to gather information and to advise the Commission on state and local civil rights issues, state advisory committees are authorized to hold fact-finding meetings and invite government officials and private persons to provide information and their views on various subjects. Advisory committee meetings are open to the public, and a designated federal employee must be present at all meetings. Any person may submit a written statement at any business or fact-finding meeting of an advisory committee and, at the discretion of the designated federal employee, may make an oral presentation. The Commission’s relations with its state advisory committees are guided and regulated by FACA. Enacted in 1972, FACA prescribes certain ground rules that govern all federal advisory committees, including the Commission’s 51 advisory committees. 
Under the act, GSA established a Committee Management Secretariat, which is tasked with prescribing administrative guidelines and management controls for advisory committees and providing advice, assistance, and guidance to advisory committees to improve their performance. In turn, FACA requires each agency head to establish uniform administrative guidelines and management controls for its advisory committees that are consistent with the Secretariat’s directives. Under FACA, advisory committees are to have a balanced representation of views and adequate funding and support, and to exercise independent judgment without inappropriate influence from the appointing agency or any other party. The Commission has some policies designed to ensure the quality of its products. However, it does not have policies for ensuring an objective examination of the issues or ensuring accountability for the decisions made on its products. The Commission’s policies for developing and approving its products do not contain criteria to be used by the Staff Director or Commissioners and do not provide for the representation of diverse perspectives or the use of experts as external reviewers. In addition, the Commission’s policies do not provide transparency for the decisions made in regard to its national office products, and the Commission has not obtained the services of an Inspector General, as we previously recommended, to strengthen its accountability. In contrast, the Commission’s policies for its state advisory committees are more comprehensive than those for its national office. The Commission has policies for developing and approving its national office products—reports, briefings, and hearings—that provide some safeguards for the quality of these products, but it lacks policies for ensuring their objectivity. More specifically, the Commission does not have a policy requiring the inclusion of balanced and varied perspectives in its national office reports, briefings, and hearings, nor does it have a policy on the use of external reviewers. In addition, although the Commission requires the Staff Director and Commissioners to approve its national office products at key junctures in their development, its policies do not include criteria for their assessment of these products. The Commission’s policies on the quality of its national office products are fairly general, requiring the reports to be accurate, well written, and timely. For example, it is Commission policy to issue “well-written documents that meet high standards of accuracy and timeliness,” according to the Commission’s policy manual. Similarly, the offices that develop Commission products are responsible for ensuring that the draft report is “well written, accurate, and of high quality” before the report is published, and staff must “double-check sources” in draft reports “for accuracy and conformance with the appropriate rules of citation.” In addition to these general policies, the Commission requires four independent reviews of draft reports primarily designed to ensure their accuracy: (1) an editorial review; (2) a legal sufficiency review; (3) a “defame and degrade” review to ensure that, if reports cast aspersions on any persons named in them, those persons will be offered an opportunity to respond; and (4) if needed, a review by agencies affected by the report. (See app. V for further information on the Commission’s policies and processes for developing and approving national office products.) 
The Staff Director and Commissioners exercise considerable control in carrying out these policies. The Staff Director plays a pivotal role in approving all interim documents, such as proposals, outlines, discovery plans, and draft reports, throughout their development. The Staff Director must approve all documents before they can be sent to the Commissioners for approval. Under new policies effective in May 2005, the Commissioners are required to approve Commission products at all key stages, from proposal development through final report stages, and their approval requires a majority vote. If there are any significant changes to a product at any stage, the Staff Director and Commissioners are required to approve these changes as well. This change marks a significant improvement over previous Commission policy, in which the Commissioners had limited involvement in the development of its products. The previously limited role was a source of considerable concern to some Commissioners and led to our 2003 recommendation that the Commission provide for increased involvement of the Commissioners in planning and implementation. The Commission has issued four reports and conducted several briefings under the new policy requiring greater Commissioner involvement. Two of these reports were based on briefings made to the Commissioners. From July 2005 to February 2006, the Commission conducted five briefings with invited speakers presenting their perspectives on specific civil rights issues, such as the reauthorization of expiring provisions of the Voting Rights Act and racial disparity studies. The papers that speakers submitted for these briefings provide the basis for briefing reports published by the Commission. The Commission does not have a policy requiring the representation of varied perspectives in its national office reports, in contrast to its policies for state advisory committee reports, which are required to “represent a variety of different and opposing views.” For example, the initial draft of the Commission’s 2005 report, Federal Procurement after Adarand (the Adarand report)—the most significant report recently issued by the Commission because it was the statutorily required annual report— reflected a range of research and perspectives on a controversial issue involving the application of racial considerations in federal contracting. The Commissioners had agreed upon this range of perspectives when they voted to approve the report’s outline in April 2005. However, in response to comments from a few Commissioners, the Staff Director removed major sections of the report that supported one perspective, that “race conscious” strategies are still needed to increase minority businesses’ participation in federal contracts. As a result, the main text of the final published report reflected only one point of view, that federal agencies have not sufficiently developed “race neutral” approaches to increase the participation of small and disadvantaged businesses in federal contracting. We also found that the Commission does not have a policy for determining when to use external reviewers and how reviewers should be selected for its national office reports. For example, for the Commission’s 2005 Adarand report on affirmative action in federal contracting, the Staff Director hired a single reviewer whose work is cited in the report and who is widely known for his opposition to affirmative action. 
The contractor’s functions were to review the draft report and provide his “opinions, revisions, comments and suggestions,” based on his expertise in federal contracting and race-neutral alternatives. Some of the Commissioners and the staff responsible for preparing the report said that they did not know that an external reviewer had been hired, how he had been selected, what changes the reviewer had recommended, or which changes were included in the final report. Agency staff noted that the external reviewer added some material to the report that critiqued the work of a federal agency and that the Commission did not provide the agency with an opportunity to comment, as required by Commission policy. In addition, the Commission did not acknowledge the external reviewer’s participation in the published report. Although the Commission does not have a policy on using external reviewers, other nationally recognized research organizations, such as the National Academies and the Congressional Budget Office, use external reviewers to assess the completeness, balance, and objectivity of their reports. For both the Academies and CBO, the general principle is that the more controversial the topic, the greater the number of reviewers they use. The Academies’ extensive external review process includes preparing a slate of names of possible reviewers, having the names approved at two levels of the organization, and establishing a review coordinator. The Academies then recruit independent experts with a range of views and perspectives to comment on the draft report, and their comments are provided anonymously. In addition, to ensure that the reviewers’ comments are appropriately incorporated, the Academies require the review coordinator to document that the report adequately addressed the reviewers’ comments. Similarly, CBO uses external reviewers from the academic community and other agencies in order to obtain a wider range of views and twice yearly draws on the advice of a panel of experts to review and comment on the agency’s preliminary economic forecasts. Although briefings and briefing reports are becoming increasingly frequent Commission products, the Commission does not have a policy specifying how speakers for the briefings are to be identified or requiring that briefing panels be balanced and include a variety of perspectives. For example, the Commission held a briefing in October 2005 to discuss expiring provisions of the Voting Rights Act of 1965, a controversial topic of immediate interest. Three of the four speakers at the Commission briefing opposed reauthorization of a key provision of the act. One Commissioner we interviewed told us he thought the briefing panel was biased and unbalanced. According to the Staff Director, the way speakers are identified and the basis for their selection vary with each briefing, depending on the topic, but the Commission does not have a written requirement for ensuring varied perspectives in briefing panels. Some invited speakers have declined to participate in Commission briefings because they were unavailable on the proposed briefing dates or because they believed their professional roles precluded them from taking a stance on the issues to be discussed. However, the Staff Director also told us that the Commission often has difficulty obtaining speakers who represent different perspectives on controversial topics. 
For example, in one instance an invited speaker declined in part because he had no confidence in the Commission’s receptivity to the evidence and other points of view. In addition, although the Commission’s new policies require the Staff Director and Commissioners to approve national office products at several stages, these policies do not include criteria designed to ensure that the products are objective. The Staff Director’s and Commissioners’ decisions to review and approve each stage of a product’s development—such as proposal, outline and methodology, discovery plan, and draft report—are not guided by written criteria, such as requiring reviewers to assess whether the methodology provides sufficient and relevant evidence to achieve the product’s objectives. According to the Staff Director, in addition to the Commission’s general policy guidance, his reviews of draft reports are largely guided by his judgment on whether the reports are likely to be approved by a majority of the Commissioners. The Staff Director made a similar point at a July 2005 public meeting, stating that several Commissioners had indicated that they would dissent from a draft report, and that his goal in removing chapters from the final report was to ensure that a majority of the Commissioners would vote to approve it. The Commission does not use some checks and balances to ensure Commissioner involvement and its policies do not provide transparency for the decisions made in regard to its products, and the Commission has not obtained the services of an Inspector General to strengthen its accountability, as we previously recommended. The Commission does not use some of the checks and balances needed to provide accountability for the decisions made on its products. Although its new policies involve the Commissioners far more extensively in decisions on its products than in the past, the Commission still does not routinely include all Commissioners in its deliberations as required. This problem predates the Commission’s new policies. For example, our 2003 report noted the complaints of several Commissioners that they were often unaware of the content of Commission products until they were published or released to the public. This pattern of not including all Commissioners in its deliberations was especially evident with regard to the decisions made on the Adarand report. For example, in an early stage of the development of this report, the Staff Director did not consult with all of the Commissioners or obtain their agreement before he changed the focus of the questions used to collect essential data from federal agencies for the report. These questions—called interrogatories—significantly altered the report’s direction after the Commission’s staff had completed much of their research. However, the Commissioners were not made aware of this change until a Commissioner pointed out discrepancies between the original focus as approved by the Commission in 2003 and the interrogatories that went out in 2005. At a public meeting of the Commission, three Commissioners objected to the fact that the interrogatories had gone forward without the expressed authority of the Commissioners and that these changes were made autonomously by the Staff Director and the Chair. At the meeting, the Chair agreed that the interrogatories should not have been sent without the other Commissioners’ approval of the changes. 
In another example of decisions being made without the knowledge of all of the Commissioners, the Chair made changes to a draft briefing report on campus anti-Semitism based on his legal interpretation of an issue and private conversations with officials from the Department of Education. At the Commission’s February 2006 meeting, the Vice Chair said that she did not understand the rationale for the changes and objected to the methods used to obtain information on the issue. Other Commissioners questioned the Chair’s legal interpretation and the accuracy of the changes he made. Although they had planned to vote on the report at this meeting, the Commissioners postponed the vote because of disagreements about these changes and their implications for the report’s recommendations. Similarly, the Chair and several Commissioners sent a letter to the Secretary of the Department of Education (Education) disagreeing with a civil rights organization’s report that had criticized the department and commending Education for its commitment to civil rights. However, the Commissioner, who, at that time, was the sole Democrat, noted in a separate dissenting letter that he was not informed about the majority’s letter until after it was drafted and that he did not understand the other Commissioners’ impetus for writing the letter. In several recent instances, Commissioners have also complained about not receiving key documents for review or receiving them too late to help them in their deliberations. For example, at the Commission’s monthly public meeting in January 2006, several Commissioners complained that they had not received transcripts of Commission meetings since October 2005. Among other things, the transcripts contained information on a briefing that Commission staff had used to draft a briefing report on reauthorization of the Voting Rights Act. However, because the Commissioners had not received copies of the transcript used to prepare this report, they postponed a vote to approve the report for publication. In addition, the Commissioners postponed a vote accepting a state advisory committee report for publication because they had not received it in time to review it. Similarly, in July 2005, the Commissioners were sent a final draft of the Adarand report for review on the same day that they voted on its publication, despite the fact that it contained comments from an affected federal agency and an external reviewer that required fresh review. In addition, the Commissioner who was the sole Democrat at that time said that he did not receive additional changes to the report that were sent to all of the other Commissioners. The Commission’s decisions on the content of its products lack transparency because, in some cases, they are not discussed publicly or documented. For example, there was no documentation of the basis for the Staff Director’s decision to remove several sections of the Adarand report in response to comments received from several Commissioners during their initial review of the draft report. In addition, in accordance with the Commission’s new policies, the Commissioners’ individual reviews of the draft report were not discussed in a public meeting. Two Commissioners said that they were unaware of the changes made to the report until after the decision had been made to remove the sections of the report from the draft. 
In a public meeting afterward, the Staff Director stated that he had removed the sections because it had become clear to him that with these sections, the report would not receive enough votes to be approved for publication. One Republican Commissioner told us that although he agreed with the analysis in the Adarand report, he had abstained from voting on the final report because he objected to the report process and because he did not want a biased report to be issued by the Commission. Another means of documenting the quality of products is the use of checklists. Although the Commission does not have checklists for assessing the quality of its national office reports, it does have such checklists for assessing the quality of state advisory committee reports. The checklists include a section to be completed by the Office of the Staff Director that documents the office’s assessment of the balance, writing, and report conclusions of state advisory committee reports before transmitting these reports to the Commissioners. However, the Commission does not appear to use the checklists for state advisory committee reports, since they were not always completed or were missing. For example, although we requested copies of the completed checklists for nine state advisory committee reports issued since 2002, the Commission could not provide us with copies of any completed checklists. Finally, the Commission has not obtained independent oversight, as we recommended in 2004 to address long-standing concerns about its management and accountability. Specifically, we recommended that the Commission seek the services of an existing Inspector General to help keep the Commission and Congress informed of problems and to conduct and supervise necessary audits and investigations of the Commission’s operations. In 2005, the Commission acted to implement our 2003 recommendation to increase Commissioners’ involvement in the development of its national office products and also began to implement our recommendations on other matters, such as financial management. According to the Staff Director, he contacted officials from some Offices of Inspectors General, including GSA, but they declined to provide their services, noting that most of the Commission’s problems would take too much of their staff time. The Staff Director also told us that the Commission had contracted with an accounting firm for advice on how to correct problems identified in their recent financial audit. This action, however, will not address the weaknesses we identified in the Commission’s policies, or provide reasonable assurance of the objectivity of its products and accountability for the decisions made on these products. For state advisory committee products, which are researched and written principally by the Commission’s regional office staff, the Commission has quality assurance policies that are generally more comprehensive than its policies for its national office products. More specifically, Commission policy explicitly requires state advisory committees to incorporate balanced, varied, and opposing perspectives in their hearings and reports, in contrast to national office products, which do not have such a requirement. According to the Commission’s administrative policy manual, state advisory committees “must seek to hear a variety of points of view and opinions” in conducting their work. 
This policy also notes that "balance does not mean that the conclusions of a State Advisory Committee agree with or include all positions, only that the research and opinions listened to represent a variety of different and opposing views on the topic at hand." To reinforce this focus, the checklist for transmitting state advisory committee proposals to the national office for approval asks the Staff Director's office to determine whether the sources to be used represent a variety of opinions on the issues. Similarly, the checklist for transmitting state advisory committee reports to the national office requires the Office of the Staff Director to determine whether varied and opposing views were identified and discussed in the report. The national office does not have such quality assurance checklists for assessing its own products. In addition, state advisory committee members are required to review draft committee reports for their clarity, substance, objectivity, and conclusions, unlike Commissioners, who do not have criteria for reviewing Commission products. The regional directors are also responsible for ensuring that state advisory committee reports meet appropriate methodological, organizational, and balance standards. State advisory committee products are also subject to the four reviews required for all Commission products: the editorial review, legal sufficiency review, defame and degrade review, and affected agency review. (See app. VI for further details on the process for approving state advisory committee reports.) The state advisory committees have played a key role in accomplishing the work of the Commission, but most committees cannot currently conduct any work because the Commission has not renewed their charters. The Commission has also instituted new membership criteria for the committees and has required all of the committees whose charters have expired to redraft their applications for renewal to comply with the new criteria. Furthermore, over the past 5 years, the activities of the state advisory committees have been significantly limited, in part because the Commission, working under budget constraints, has reduced the resources available to conduct their work and also because it has delayed reviewing and accepting their reports for publication. In addition, the Commission has not sought the views of state advisory committee members in its strategic planning process or on key decisions that affect the committees. Finally, although many of these are long-standing issues, the Commission has not provided for independent oversight of its policies and practices for state advisory committees. The Commission's state advisory committees have operated as a unique national network intended to provide the Commission with information on local civil rights issues that can be used in its work at the national level. The state advisory committees have identified and examined issues through a variety of activities and provided information to the Commission and the public in written reports. Since 1980, the state advisory committees have issued 200 of the 254 reports published by the Commission. Other activities conducted by the state advisory committees include open forums, public meetings, and formal hearings that have provided avenues for the public to communicate their civil rights experiences and for the committees to define current local civil rights issues that may not yet be on the national agenda. Some of the committees' reports have prompted action by the Commission. 
For example, in 1973, the California State Advisory Committee held hearings on the concerns of the Asian American and Pacific Islander communities. These hearings resulted in two state advisory committee reports: Asian American and Pacific Peoples: A Case of Mistaken Identity (February 1975) and A Dream Unfulfilled: Korean and Philipino Health Professionals in California (May 1975). These reports were the first studies conducted by the Commission on these issues, according to agency officials. The Commission issued national office reports on these issues in 1986, 1988, and 1992. More recently, after the terrorist attacks of September 11, 2001, the Commission asked the state advisory committees to gather information on the status of Muslims, Arab Americans, and others perceived to be members of these communities in their states. Twenty state committees held information-gathering events—such as town hall meetings—at which the public was invited to speak about experiences that may have threatened the civil rights of members of the Muslim community. As a result, the Commission issued nine state advisory committee reports on the civil rights of Muslims and other communities in those states, along with a statement that summarized the results of these activities and reports. State advisory committee reports also have had an impact on their states' operations, including state legislation. Members of several committees told us about legislation that had passed or state offices that had been affected through their efforts. For example, officials with one of the Commission's regional offices told us that in 2003, one of the states in its region formed a multi-agency state task force to work on an issue the Nevada State Advisory Committee had reported on: the educational opportunities of Native American Indian students in the state's public schools. As a result of the task force's efforts, the state enacted legislation designed to improve the educational outcomes of Native American Indian students. State advisory committee members also noted that just conducting activities, without issuing a report, can have an effect. For example, members of one committee told us that they visited a local prison after receiving allegations of sexual abuse of female detainees being held on account of their illegal entry into the United States. Local newspapers were present during the committee's visits, bringing the issue, which was not well known, to the public's attention. In addition, state advisory committee members reported that they participated in activities that gave them a voice in their states' civil rights operations. For example, they reported working with the state civil rights offices to inform them of local issues and assist in writing proposed legislation, giving testimony to state legislatures, and training state and local officials on current civil rights issues. In responding to our survey, one chairman reported that on the basis of work conducted by the state advisory committee, he testified before his state's Joint House and Senate Committee on several minority issues, including racial harassment in schools and discrimination in hiring. Another advisory committee chair wrote that following a report the committee issued on hiring practices and appointments to state commissions and boards, the governor committed to improving state practices and asked the committee for assistance in identifying minorities to serve on state boards and commissions. 
State advisory committee members also reported being well connected to their local communities because of their professions. Those we interviewed included a state legislator, several university professors and lawyers, a director of a county Equal Employment Opportunity Commission office, the administrator of the regional office of a federal agency, and a church minister—roles that allowed them to influence civil rights issues in their communities. (See appendix VII for a summary of the profiles of state advisory committee members as listed in their most recently approved charters. To view the details by state of all 51 committee membership profiles, see an electronic supplement at http://www.gao.gov/cgi-bin/getrpt?GAO-06-551SP.) Consistent with the requirements in FACA, state advisory committees that do not have an approved charter cannot meet or conduct any business. However, as of February 2006, 38 of the 51 state advisory committees did not have an approved charter, and 13 of them had not had an approved charter for at least 2 years. The remaining 13 committees have approved charters, but those charters are due to expire late in 2006. (See table 1.) The primary reason for the current delays in renewing the state advisory committees' charters is that the Commission recently initiated significant changes in the criteria for membership in the state advisory committees. In addition, the Commission chose to cancel pending applications for renewal until members could be chosen to serve on the rechartered committees that reflect the new membership criteria, further delaying the process of establishing active new charters for the committees. The new membership criteria were first proposed as a regulatory change in November 2005. As of February 2006, one portion of the criteria had been incorporated in a new regulation for the Commission; the remaining criteria had not been finalized. The proposed new membership criteria are substantially different from the previous criteria and could result in major changes in state advisory committee membership. First, the Commission's new policy requiring nondiscrimination in the selection of committee members was published in February 2006 as a new regulation. It supersedes the Commission's previous regulation requiring the membership of each state advisory committee to reflect and be representative of the state's population. It also replaces a 1990 administrative policy that required minority group membership to be no less than 40 percent or more than 65 percent of the state advisory committee. Second, the proposed new criteria would require the Commission to consider selecting members with more academic technical skills, such as knowledge of law and statistical analysis, rather than members with general skills and a diversity of experience and knowledge from business, labor, and other perspectives. Finally, the proposed new criteria would require each advisory committee to include "members of both political parties." If adopted, this criterion would replace the previous regulation, which required each committee to reflect political affiliations in proportion to the state's demographics. In addition, the criteria do not refer to members who are politically independent, although independents currently make up about one-quarter of the committees' membership. See table 2 for a comparison of the previous and proposed new criteria. 
The proposed new criteria require both political parties to be represented, and FACA requires that federal advisory committee membership be fairly balanced in terms of the points of view represented and the functions to be performed. However, it is not yet clear how the Commission intends to achieve this balance. According to the Staff Director, having one person of a minority party on an 11-member state advisory committee would meet a new criterion for each committee to have members of both political parties. According to the Commission's Chair, the new membership criteria were developed in order to, among other things, move away from racially and ethnically based representation toward greater diversity in expertise and ideas. For example, according to the Staff Director, the proposed new membership criteria are intended to increase the diversity of skills among committee members. One reason for this is that, because of the shortage of staff in the regional offices, the Commission is considering having state advisory committee members contribute to the writing of reports themselves, a course of action that, in the view of the Staff Director, would require committee members to have the expertise needed for such an undertaking. In addition, according to the Chair, limiting members' terms to 10 years, or five 2-year terms, will promote the selection of more new members with new ideas. The Commission received several objections to its decision to suspend the charter approval process until the membership criteria had been finalized. In July 2005, the chairs of 32 state advisory committees sent a letter to the Commission Chair requesting that pending charter applications be approved and stating that there was no justification for not approving charters pending policy formulation. During a Commission meeting in August 2005, at which this issue was raised, one Commissioner made a similar proposal, adding that this would also allow the Commissioners more time to consider whether to change the membership criteria. However, the majority of the Commissioners voted not to extend the state advisory committees' charters or conditionally approve charter renewal applications that had already been filed. Although, under FACA, state advisory committees that do not have an approved charter cannot meet or conduct any business, we found that—both in the past and recently—the committees have continued their activities while their applications for renewal were being considered. In the past, many state advisory committees continued working without a charter, according to agency officials we interviewed. We also found that state advisory committees in several states routinely continued their work and meetings until recently, when we questioned the Commission about the current delays in approving the committees' charters. For example, representatives of two advisory committees told us that they generally operate normally, except for the actual publishing of reports, when they do not have an approved charter. However, in December 2005, after the Commission consulted with its solicitor, the Staff Director informed the state advisory committees that holding meetings and engaging in other activities were not permissible under FACA in the absence of a charter. 
Since 2000, the number of state advisory committee reports that have been published has declined considerably, partly because limited funding has contributed to a reduction in regional staff, travel, and other committee activities, and also because of the Commission’s delays in approving state advisory committee reports. According to the Commission’s policy, state advisory committees should complete one project every 2 years if funding and staffing permit. With 51 state advisory committees, committee reports have been the mainstay of the Commission’s publications, and state advisory committees have produced 200 of the 254 Commission reports published since 1980. In the past 5 years, the committees have produced 38 reports. As shown in figure 1, since 2001, the number of reports issued by the state advisory committees each year has steadily declined. Over the years—especially in the past 15 years—the number of staff in the regional offices has declined considerably because of office closures, attrition, and voluntary separations. According to Commission officials, in 1980, there were 10 regional offices and each office had a director, attorney, editor, and three or four civil rights analysts. In 1985, the number of regional offices was reduced from 10 to 3, their legal functions were moved to the national office, and the number of staff in each office was also reduced. In 1991, the Commission opened 3 additional regional offices, bringing the total up to 6 offices, but the number of staff in each office continued to decline. During the most recent 5-year period, as the agency’s budget remained flat, these declines continued, with staff decreasing from 19 staff in 2000 to 9 in 2006. Currently, each of the 6 regional offices has only 1 or 2 professional staff—a total of 9 as of January 2006—and each regional office supports several state advisory committees, ranging from 6 to 14 committees for each office. Furthermore, the Commission has approved a plan to reduce the number of regional offices to 4 offices in fiscal year 2007 because of budgetary concerns. (See table 3.) This decline in the number of professional regional staff affects the ability of state advisory committees to carry out their work. The state advisory committees depend on regional staff to arrange meetings and hearings, conduct interviews and research, and write and process their reports. Because federal advisory committees cannot hold a meeting without having a designated federal official, a regional staff person must attend every state advisory committee meeting for every state in the region. In our survey of state advisory committee chairs, 75 percent of chairs who responded reported that they were unable to hold meetings in the period 2000 to 2005 because no regional staff was available to attend. In addition, because the work performed by regional staff on the state advisory committee reports is extensive, it is difficult for the regional staff to work on more than one or two reports at one time. The members of one state advisory committee told us their regional office had established a “take turns” policy, where the one regional analyst works with one state advisory committee at a time. In addition, members of another state committee said that they were not able to produce reports with critical analyses because no regional staff with the appropriate expertise was available to conduct the work. 
As a result, the committee issued a “Statement of Concern” to the Commission, a document that does not have the impact of a report, instead of producing the analytical report that the committee had wanted on the issue. The state advisory committees have also seen declines in their activities because they have rarely been able to travel or hold meetings. For example, of the chairs who responded to our survey, 85 percent reported that fact-finding and reporting activities were not undertaken because of budgetary constraints. In March 2005, the Commission told its regional offices and state advisory committees that no funds were available for travel, meetings, or hearings because of budget shortfalls. The agency’s annual appropriation has remained at about $9 million since 1995, resulting in several cost reduction measures throughout the agency. In January 2006, the Commission allowed some travel, telling state advisory committees with approved charters that a limited number of meetings could be held in fiscal year 2006. Since then, according to the Commission’s comments on our draft report, 10 state advisory committees conducted meetings or briefings between February and April 2006. In addition, regional office and advisory committee expenses cannot currently be tracked separately from the Commission’s other activities, a fact that has made it difficult to determine the level of support provided by the Commission. FACA requires agencies to ensure that advisory committees have adequate staff, quarters, and funds for the committees to conduct their business. The Commission’s statute also directs the establishment of at least one advisory committee in each state. Prior to 2002, the Commission had designated a specific portion of its budget— generally about $2.5 million annually—for regional office and committee activities. However, since 2003, the Commission has not identified specific funds for the regional offices and state advisory committees but, instead, has combined their expenses with other agency expenses, according to agency officials. The Commission’s policies require state advisory committee reports to go through an agency approval process that could negatively affect the committees’ independence. Such policies include a requirement for the Staff Director’s approval of all state advisory committee activities and reports. Specifically, according to the Commission’s policies, the Staff Director must approve proposals for nearly all types of state advisory committee activities, as well as any significant changes to these proposals. In addition, when state advisory committees send approved reports to the national office for editorial and legal reviews, the Staff Director’s office determines whether the evidence, testimony, and research in these reports support the conclusions. Finally, according to Commission policy, “under no circumstances” can state advisory committee reports “be released to the public or forwarded to the Commissioners without the Staff Director’s approval.” In our discussions with state advisory committee members and regional office staff, many complained about the Commission holding up or attempting to interfere with committee products. For example, members of several state advisory committees told us that, in the past few years, they had sent completed reports approved by the committees to the Staff Director’s office, but the reports were not published or given to the Commissioners and the committees were not told what happened to them. 
Members of another state advisory committee told us that, because of the long time it takes for the national office reviews and approvals, it has taken 2 to 3 years for a report to be published. In addition, slightly over half of the survey respondents reported that they were dissatisfied or very dissatisfied with the national office’s timeliness in approving their reports. Some state advisory committee members told us that, at times, the window of opportunity for making an impact has passed by the time the national office publishes a state advisory committee report. For example, one committee chairman stated that it took the Commission 4 years to publish the committee’s report on limited English proficiency. He noted that “When it was released, the information was so stale as to render our effort meaningless…” Another state committee chair commented that issuing reports so late is “an exercise in hindsight.” This is not a new problem. In 1986, in response to concerns about delays in the issuance of state advisory committee reports, among other things, Congress held hearings on the subject. We testified on the decline in the number of state advisory committee reports and noted that the Commission had released two committee reports in 1985 but not as official Commission documents. In February 2006, the Commission changed its policy for the Commissioners’ review of state advisory committee reports. According to the new policy, Commissioners will receive all state advisory committee reports, but will no longer be asked to vote to accept or reject the reports, as they had done in the past. The intention in making this change was to allow the public access to the state advisory committees’ work without necessarily conveying the impression that the Commission endorses their findings. However, the new policy leaves in place the role of the Staff Director (or his designee) in ensuring the reports’ adherence to the Commission’s procedural and legal criteria for state advisory committees. Reports that have satisfied these criteria will be printed with a disclaimer stating: “The views expressed in this report and the findings and recommendations contained herein are those of a majority of the members of the state advisory committee and do not necessarily represent the views of the Commission, its individual members, or the policies of the United States government.” According to two Commissioners in their comments on our draft report, the Commission will be reviewing project procedures for state advisory committee products as it did previously for national office products. Until recently, the current Commission officials have not generally considered the views of the state advisory committees when planning future national office work. For example, in developing the agency’s new draft 5-year strategic plan, the Commission did not solicit the perspective of the state advisory committees on their role in accomplishing the agency’s strategic goals. As of January 2006, the state advisory committees had not been involved in developing the agency’s draft strategic plan, although they are key stakeholders in accomplishing the Commission’s goals. The first draft of the strategic plan that was submitted for stakeholder review in October 2005 scarcely mentioned the role of the state advisory committees, despite their statutory role or their many contributions to the Commission’s work over time. 
The congressional staff who reviewed the draft plan asked the Commission to include more information on the role of the state advisory committees in the plan, among other comments. Although the Commission obtained the perspectives of two regional directors who participated in a working group on the strategic plan, the Commission did not solicit the views of state advisory committee members. According to the Staff Director, the Commission is now working to include goals that incorporate the role of the state advisory committees in its strategic plan, including obtaining the views of the state advisory committees on the Commission's goals and their role in accomplishing these goals. In February 2006, the Staff Director solicited the input of the state advisory committees in identifying possible topics for the Commission's 2008 statutory report. The Commission has also not generally obtained the views of the state advisory committees when making organizational changes that directly affect the committees. For example, when the Staff Director proposed in April 2005 to close two regional offices in fiscal year 2006 as part of a larger plan to reduce agency expenses, the Commissioners approved the proposed closures without obtaining any input from the state advisory committees on how closures would affect their ability to conduct their work. In addition, according to the Chairman of the Commission, the state advisory committees did not participate in the development of the new criteria for state committee membership until after the criteria had been proposed. After receiving comments on the Commission's failure to consult with the state advisory committees from several members of Congress, outside civil rights organizations, and others, the Staff Director held a meeting by conference call with all of the regional directors to discuss the proposed membership criteria and proposed office closures. However, in January 2006, the Staff Director reported that he had sought the perspectives of state advisory committee members in the 13 states with active charters on whether to ask Congress to extend the terms of the charters and the chairs to 4 years instead of the current 2 years. The Commission received comments on the proposed request from about half of the active committees. According to the Staff Director, most of them agreed with the proposal to extend the terms of the committees' charters and the chairs to 4 years. Another indication of the Commission's failure to involve the state advisory committees in its planning and decision-making efforts is its poor communication with the committees. For example, when the Staff Director and Commissioners agreed to close two of the regional offices, they did not inform the regional directors, who are the liaisons to the state advisory committees, until 3 days later, according to a regional director. Instead, two regional directors learned about the decision from sources outside the Commission, including the local newspaper. In addition, only 22 percent of state committee chairs who responded to our survey reported that they were satisfied with the quality of their communication with the national office. Furthermore, several state committee members we interviewed told us that there should be more communication between the state advisory committees and the Commission and that they believed that the Commission did not understand the role of the committees. 
More specifically, members told us that they thought the Commission could make more effective and efficient use of the advisory committees if they knew what issues the Commission saw as priorities and how they could contribute to the Commission's vision and goals. For example, several state advisory committee members said they thought joint reports prepared by more than one committee would be more efficient and allow the Commission to obtain more comprehensive views on a particular issue. In our survey, respondents identified several civil rights issues they had in common, such as housing, education, and employment for immigrants and various elements of the justice system. The Commission has not provided for independent oversight of its policies and practices for state advisory committees, despite the long-standing nature of many of the issues we identified regarding the Commission's lack of consultation and communication with the state advisory committees, delays in renewing charter applications, and lack of timeliness and other issues in approving state advisory committee reports. Obtaining the services of an Inspector General, as we recommended in our 2004 report, could provide this oversight. Without having policies in place for ensuring the objectivity of its reports, briefings, and hearings, the Commission cannot provide adequate assurance that it is achieving its mission as an independent, bipartisan fact-finding agency by informing often controversial debates over civil rights issues for the public's benefit. It is therefore important for the Commission's credibility that its Commissioners and Staff Director base their work on sound criteria and that the Commission's reports and other products include varying perspectives so as to be recognized as fair and impartial. The Commission's briefings and hearings also run the risk of appearing biased, rather than objective, in the absence of a policy for identifying and selecting speakers and witnesses who can bring to bear a range of perspectives and expertise. Furthermore, by using an external reviewer for its reports without having a process for considering the use of such reviewers, the Commission risks introducing one-sided commentary on its products and is not availing itself of an important avenue for helping to ensure the objectivity of its analyses. The absence of such policies leaves the Commission less accountable to the public for its decisions related to its reports. Moreover, its credibility and independence could be compromised by the failure to engage all of the Commissioners in its decisions and to document substantive decisions made outside of public view. As the eyes and ears of the Commission, the state advisory committees are critical to the work of the Commission. However, a variety of problems inhibit them from successfully carrying out their important function. These include continued delays in renewing charters as well as declines in regional staff and other forms of support for the state advisory committees. The Commission's budgeting practices make it difficult to gauge the level of funding provided to the committees. Without this information, the Commission cannot analyze trends, such as comparing funding for national office reports with funding for advisory committee reports, or make informed decisions about priorities. 
Furthermore, the potential impact and usefulness of state advisory committee reports can be significantly reduced if they are not reviewed and issued promptly or if the Commission's review policies constrain the reports' direction or findings. The Commission also cuts the line of communication on important civil rights issues from the local level to the national level if it does not seek the perspectives of state advisory committees in planning its work, determining long-term goals and strategies, or making organizational decisions that directly affect the committees. However, despite the long-standing nature of many of these issues, the Commission has not obtained independent oversight by an Inspector General of its policies and practices for state advisory committees. In a time of large budget deficits and fiscal constraints, addressing these issues would allow the Commission to better leverage its resources by drawing upon this nationwide network of volunteers who could enrich the national perspective on civil rights and allow for more informed decisions by the President and Congress.
(1) In order to better ensure the quality of the Commission's national office products, the Commission should require that its written products consider varied and opposing perspectives, and that the process for achieving this be well documented; develop a process for using external reviewers that includes criteria for determining when to use external reviewers, identifying a range of appropriate reviewers, and ensuring that the selection process is impartial and transparent to the Commissioners and the public; and include criteria for Commissioner and Staff Director reviews of national office reports—from project proposal through final draft—in its policies and require substantive decisions and changes to be documented.
(2) In order to ensure that relevant information and perspectives are covered in a comprehensive manner during briefings and hearings, the Commission should require that the selection of speakers for briefings and witnesses for hearings include balanced, varied, and opposing perspectives, and that this process be well documented.
(3) In order to ensure that the Commission can provide advice to Congress and make the most effective use of the state advisory committees, it should develop and implement a formal process for approving state advisory committee charters with specific timetables to ensure their approval in a timely manner and for appointing and seating advisory committee members promptly after charter approval; renew its practice of separately identifying funds for the regional offices and state advisory committees to better evaluate the adequacy of funding for supporting the committees, given budgetary constraints; establish required time frames for Staff Director reviews in order to ensure that state advisory committee reports are published in a timely manner; and integrate the state advisory committees' mission and work in its strategic planning and decision-making processes, including articulating how the national office will use the state advisory committees' findings on state and local civil rights issues to inform the Commission's national goals and strategies.
(4) In order to ensure that the Commission's processes are well documented and its policies are followed, the Commission should establish an external accountability mechanism, such as seeking the services of an existing Inspector General from another agency.
We provided the U.S. 
Commission on Civil Rights with a draft of this report for review and comment. In the agency’s response, the Staff Director did not comment on our conclusions or recommendations but instead described actions the Commission had taken to improve its management and financial controls, the operations of the state advisory committees, the role of the Commissioners, and internal review procedures for its reports and briefings. We had already discussed most of these actions in our report, and we added information on recent state advisory committee activities that the Staff Director provided in his comments. While many of these actions are positive steps, they do not address the matters upon which we based our recommendations, such as the Commission’s lack of a process for using external reviewers for its national office reports. Therefore, we continue to believe that further actions are needed. Our recommendations identify the specific steps we believe should be taken to strengthen the quality of the Commission’s products and make better use of its state advisory committees. The Staff Director’s comments are contained in appendix II. Although we did not solicit comments from the Commissioners, the Staff Director provided them with an opportunity to respond to our draft report, and three of the seven Commissioners provided us with comments. One Commissioner agreed with the contents of the draft report. In his letter, he stated that the report’s findings and recommendations provide a framework for improving the Commission’s procedures and enhancing the credibility, balance, and transparency of the Commission’s work. He also noted that, although the Commission had implemented many of the recommendations in our previous reports, it has not updated its strategic plan nor retained the services of an Inspector General as we recommended in 2004 and in this report. The Vice Chair and one Commissioner strongly disagreed with the draft report’s approach, tone, and conclusions and asserted in their joint letter that the report was biased and unbalanced. We believe that our report is balanced and unbiased. As described in our scope and methodology, we designed and conducted this engagement in accordance with generally accepted government auditing standards. The Commissioners’ major concerns included the following: The Commissioners stated that it was misleading for us to criticize the Commission’s reports for their lack of objectivity. This view appears to emanate from a misunderstanding of our audit objectives. It was not within the scope of our study to assess the objectivity of the Commission’s reports. As noted in the report objectives, our purpose was to analyze the Commission’s policies for ensuring the quality, including objectivity, of its reports and other products. When we discussed specific Commission reports or briefings, we did so to illustrate issues that arose when the Commission did not have written policies, not to provide an assessment of the content of individual products. The Commissioners asserted that the draft report was biased because we did not interview all of the Commissioners. Our evaluation focused on assessing the adequacy of the Commission’s policies and the role of the state advisory committees and was not contingent upon obtaining the views of every Commissioner about these policies or the committees’ role. 
Therefore, we disagree with the Commissioners' assertion that, because we did not interview all of the Commissioners, the report is biased in its assumptions and conclusions. The Commissioners stated that our finding that the Commission did not include the state advisory committee members in its strategic planning process was an unwarranted attack. We do not agree with this characterization. Our recommendation is intended to provide a constructive suggestion for improving the Commission's strategic planning. At the time of our review, Commission officials told us and the congressional committees that provide oversight of the Commission that they did not solicit the input of state advisory committee members in developing the draft strategic plan. In addition, our report recognizes that the draft strategic plan had not yet been completed. To the extent that the final plan includes the perspectives of and better defines the role of the state advisory committees, it will be a more complete plan. This is the basis for our recommendation. Finally, the Commissioners stated that the report did not sufficiently acknowledge the significant policy and other changes made by the Commission and that the previous leadership of the Commission was responsible for many of the issues discussed in our report. We acknowledged throughout the report the current leadership's changes to the Commission's reporting and state advisory committee policies. Our findings and conclusions are based on the policies and operations of the current Commission, including the new policies. Comments from the three Commissioners and our responses are contained in appendixes III and IV. We incorporated clarifications and updates in the report as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of the report. At that time, we will send copies of this report to the U.S. Commission on Civil Rights and other interested parties. We will also make copies available to others upon request. It will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9889 or at robertson@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. Our objectives in this study were to assess (1) the adequacy of the Commission's policies for ensuring the quality of its products and (2) the role of the state advisory committees in contributing to the Commission's work. To address these objectives, we reviewed documents such as relevant statutes, regulations, and administrative policies of the Commission; transcripts and minutes of Commission meetings; and recent Commission and state advisory committee reports. We interviewed Commission staff, including the Staff Director, and three Commissioners—the Chair, one Republican member, and one Democrat. We also attended monthly meetings of the Commission, including briefings. In addition, to analyze the quality assurance policies for its products, we reviewed the Commission's administrative policies for its reports, briefings, and hearings. 
We also reviewed the policies used by the National Academies and the Congressional Budget Office (CBO) to ensure the quality of their products and guidance from the Office of Management and Budget on ensuring the quality and objectivity of information disseminated by federal agencies, in addition to considering GAO's own policies. We also interviewed officials from the Academies and CBO. In addition, we reviewed the Commission's files for a selection of recent national and state advisory committee reports and interviewed national and regional office staff. To analyze the state advisory committees' role in accomplishing the Commission's fact-finding and reporting mission, we conducted a survey of the 51 committee chairs, and we received responses to this survey from state advisory committee chairs and former chairs in 36 states. We conducted site visits to all six regional offices, where we interviewed regional staff to determine the support they provide to the state advisory committees. We also interviewed the state advisory committee chairs and members in 11 states to understand how they operate and their experiences with the Commission's national and regional offices. We interviewed officials at the General Services Administration, which administers the Federal Advisory Committee Act, and reviewed related documentation. In addition, we reviewed the most recently approved state advisory committee charters, interviewed Commission officials who work with the regional offices, and reviewed state advisory committee regulations and policies. We conducted our work from April 2005 to March 2006 in accordance with generally accepted government auditing standards. One of our methods for determining the adequacy of the Commission's policies and the role of the state advisory committees was to survey the chairs or former chairs of each state advisory committee. We sent a questionnaire to all state advisory committee chairs in each state, including the chair of the District of Columbia's committee. We conducted the survey from July 7, 2005, through August 31, 2005. We received responses from state advisory committee chairs and former chairs in 36 states. To prepare the questionnaire, we asked knowledgeable officials from the state advisory committees and survey professionals to comment on the questionnaire, and we pretested the questionnaire to ensure that the questions were clear and unambiguous, terminology was used correctly, it did not place an undue burden on the respondents, the information was feasible to obtain, and it was comprehensive and unbiased. We pretested the questionnaire with state advisory committee chairs in a geographically diverse group of states by means of telephone and face-to-face interviews. On the basis of the feedback from these pretests, we made changes to the content and format of the questionnaire. The questionnaire asked a combination of open- and closed-ended questions about each state advisory committee and the activities it had undertaken in the previous 5 years. The questionnaire also asked the chairs to comment on their experiences working with the Commission's regional and national offices. To ensure an adequate and appropriate response to our questionnaire, we sent an e-mail in advance to establish the correct respondent. 
We also sent two reminder letters and followed up with telephone calls to those who had not yet responded. All respondents who had not sent in a survey after approximately 4 weeks were telephoned by GAO and asked to participate. The majority of respondents completed the survey electronically but some faxed copies of their answers to GAO. In these cases, the faxed responses were entered into a database by contractors hired by GAO. Quality assurance steps were taken to ensure the accuracy of the data entry. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or were analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, social science survey specialists designed the questionnaire in collaboration with GAO staff with subject matter expertise. Then, the draft questionnaire was pretested with a number of state officials to ensure that the questions were relevant, clearly stated, and easy to comprehend. When the data were analyzed, a second, independent analyst checked all computer programs. In several cases, we contacted respondents to clarify their responses to the questions, but we did not otherwise independently verify the information they provided. 1. The Commissioners stated that the report does not acknowledge the significant changes that have taken place at the agency and its efforts at reform. To the contrary, the report acknowledged numerous changes to the Commission’s reporting and state advisory committee policies by the current leadership. For example, we noted that, after the arrival of new leadership, the Commission began to reevaluate its policies on product development and state advisory committee matters and, in discussing the Commission’s quality assurance policies for its products, we reported on the increased involvement of the Commissioners in product development, describing it as a “significant improvement over previous Commission policy.” In addition, we devoted a considerable portion of two appendixes to the Commission’s process for developing and approving national and state advisory committee products, including policy changes. With regard to the state advisory committees, we similarly analyzed the Commission’s policy changes to the committees’ membership criteria and publication of state advisory committee reports. 2. The Commissioners asserted that it was misleading for us to criticize the Commission’s reports because they lack objectivity. This view appears to emanate from a misunderstanding of our audit objectives. It was not within the scope of our study to analyze the objectivity of the Commission’s reports. As noted in our report, our purpose was to analyze the Commission’s policies for ensuring the quality of its reports and other products. When we discussed specific Commission reports or briefings, we did so to illustrate issues that arose when the Commission did not have written policies, not to assess the content of individual reports. 
We observed that the Commission lacks several policies for its product development that could help ensure the objectivity of its reports and briefings. We focused especially on policies that other organizations, such as the National Academies, the Congressional Budget Office, and GAO, consider important to ensuring the quality of their reports and other products. 3. Our evaluation focused on assessing the adequacy of the Commission's policies and the role of the state advisory committees. Our assessment was not contingent upon obtaining the views of every Commissioner about these policies or the committees' role. Therefore, we disagree with the Commissioners' assertion that, because we did not interview all of the Commissioners, the report is biased in its assumptions and conclusions. 4. The Commissioners stated that we did not define the term "objectivity" in criticizing the Commission's work. We added a definition of objectivity to our report. 5. We disagree with the Commissioners' statements regarding our review of the Commission's statutory reports. First, the scope of our report did not include a review of the pending 2006 report on the reauthorization of the Voting Rights Act. Second, in discussing the 2005 statutory report on Adarand, we noted that the project was originally approved in 2003 under the previous leadership. However, we also noted that the current leadership had a significant hand in revising the direction of the research questions that were sent to federal agencies to obtain information for the report. The current Commission also approved an outline for the Adarand report and had several opportunities to comment on the draft report. Furthermore, contrary to the Commissioners' statement, the current Commission proposed and approved the October 2005 briefing on the Voting Rights Act, not the previous leadership. This briefing was the subject of our discussion on the Commission's speaker selection policies. In appendix V of our report, we referred to, but did not otherwise discuss, the Commission's pending 2006 statutory report on the Voting Rights Act. 6. Our report acknowledges the new policy on the Commissioners' role in reviewing state advisory committee reports and publication requirements that was approved in February 2006. We added a note to our report acknowledging that the Commission will be reviewing its project approval procedures for state advisory committee products as it did previously for national office products. 7. As noted earlier, we did not review the content of either national office reports or individual state advisory committee reports to assess their quality. However, if the Commissioners have major concerns about the quality of the state advisory committee reports, they should take a hard look at the committees' adherence to the Commission's quality assurance policies for these reports. 8. In our report, we did not state that the process for the state advisory committees' operations should become the model for the national office. However, we did examine the Commission's policies for ensuring the quality of its products and found that the Commission has some policies governing the committees' work that are more comprehensive than those for national office products. 9. Regardless of whether one agrees or disagrees with the content or quality of the chapters and other material that were removed from the draft Adarand report, we continue to believe that the manner in which they were removed is of concern.
For example, although the current Commission voted to approve a report outline in April 2005 that reflected a range of perspectives, the removal of several sections of the draft report shifted the balance toward one perspective. In addition, there was no documentation of the basis for this decision. Afterwards, two Commissioners said that they were unaware of the changes until after the decision had been made, and one of them abstained during the Commission's final vote because he objected to how these changes had been made, even though he said that he agreed with the content of the report. 10. We disagree with the Commissioners' assertion that the Commission's new procedural policies for briefings will necessarily provide balance and a variety of perspectives to its briefing panels. The new policy requires the Commissioners to approve briefing topics and panels of speakers at least one month in advance of the briefing. However, as we reported, this new policy does not require that briefing panels be balanced or include a variety of perspectives. 11. In their comments, the Commissioners stated that we criticized the inclusion of political parties in the membership criteria for state advisory committee members, but this is incorrect. In our report, we described the previous and proposed membership criteria, both of which included members' political affiliation. Because the proposed criteria call for each committee to have members of "both" political parties, we also expressed uncertainty about how the Commission would consider candidates who are politically independent and how it would ensure balance in the points of view represented, as required under FACA. 12. According to the Commissioners, our report repeatedly refers to the state advisory committees as the "eyes and ears" of the Commission. This description of the state advisory committees' role is not our term. Rather, it appears in the Commission's State Advisory Committee Handbook and in the Commission's October 2005 and December 2005 draft strategic plans developed under the current leadership. 13. The Commissioners also stated that our finding that the Commission did not include the state advisory committee members in its strategic planning process was a "ludicrous and unwarranted attack." We disagree with this characterization. Our recommendation is intended to provide a constructive suggestion for improving the Commission's strategic planning process. At the time of our review, Commission officials told us and the staff of the congressional committees that provide oversight of the Commission that they did not solicit the input of the state advisory committee members in developing the draft strategic plan. In addition, our report recognized that the draft strategic plan had not been completed. If the final plan includes the perspectives of the state advisory committees and better defines their role, it will be a more complete plan. This is the basis for our recommendation. In addition to its general policies on quality assurance, Commission policy also requires four independent reviews of its draft products to ensure the accuracy and adequacy of the information in them. These reviews include the following: 1. Editorial review: The purpose of this review is to determine the adequacy and accuracy of the substantive information in the draft document, according to the Administrative Manual. This includes conceptual soundness, adherence to Commission policy, quality of research, argumentation, and documentation of major points.
However, Commission officials we interviewed generally agreed that the editorial board review more often focused on issues such as grammatical correctness, inconsistencies, and clarity, rather than on substantive issues such as the adequacy of evidence. The Staff Director appoints the members of the editorial review board, which usually consists of three staff members. According to the Staff Director, editorial reviewers should be able to provide a fresh perspective, be familiar with the Commission's standards and style manual, have strong editorial and writing skills, and should not work in the same office that wrote the draft product. 2. Legal sufficiency review: The purpose of this review, which is conducted by the Office of General Counsel, is to ensure the accurate interpretation and citation of legal materials and compliance with statutory requirements. 3. "Defame and degrade" review: The purpose of this review is to ensure that Commission products do not defame or degrade persons named in them. It is performed by the Office of General Counsel concurrently with the legal sufficiency review. Although agencies typically require or have legal sufficiency reviews for their products, it is unusual for an agency to also review its products for their potential to defame or degrade individual persons. 4. Affected agency review: The purpose of this review is to provide a government agency or, if appropriate, a nongovernmental organization mentioned in the draft report with pertinent sections of the draft so that the agency can review the accuracy of the material contained in them. The Commission's detailed procedures require the development of interim documents such as concept papers, proposals, outlines, discovery plans, and draft reports. The Staff Director plays a pivotal role in approving all stages of the products' development, including follow-up plans after a report's issuance. However, until May 2005, the Commissioners had limited involvement in the development of the Commission's products: Essentially, Commissioners could approve proposals and design summaries at the beginning and approve final reports at the end. The Commissioners' limited role was a source of considerable concern to some Commissioners, as reported in our 2003 study. This concern led to our recommendation that the Commission provide for increased involvement of the Commissioners in planning and implementation. In May 2005, the Commission made significant changes to its quality assurance policies by increasing Commissioners' involvement in the development of its national office products. Under these new policies, Commissioners are required to review and approve Commission products at five key stages: (1) proposal and concept paper development, (2) background research and outline development, (3) discovery, (4) draft report, and (5) final report stages. The Commissioners' review and approval at three of these stages are new—background research and outline development, discovery, and draft report stages—and provide Commissioners with considerably greater opportunities to comment on and guide the direction of Commission products than previously. At most of these stages, approval by the majority of the Commissioners is necessary before moving on to the next stage. Under the new policies, the independent reviews (editorial, legal sufficiency, defame and degrade, and affected agency) occur between the Commissioners' initial and final reviews of the draft product instead of before the Commissioners' review, as was previous practice.
The new policy does not require Commissioners' votes on draft reports and final reports to occur in a public meeting. The new policies were formally incorporated into the Commission's administrative policy manual in January 2006. (See fig. 2.) In May 2005, the Commission adopted additional quality assurance policies for national office products that provide the Commissioners with greater control over the substance of draft products: First, Commissioners can now vote to approve substantive changes to previously approved projects and may reassess priorities if budgetary changes occur during the year. Second, instead of having to vote on an entire draft of a final report, the Commission may vote on sections, and only portions of the report that receive a majority vote would become part of the final Commission document. The Commission also agreed to add a policy formally allowing statements of dissent. Commissioners can submit a statement of dissent after a report has been approved, and this dissenting statement can be integrated within the body of the report if the Staff Director and dissenting Commissioner agree. Before this change, there was no written Commission policy on dissenting statements. The new Commission policies have not been fully implemented for some national office projects that were initiated under the previous Commission before the new policies became effective, according to the Staff Director. These projects include the Commission's 2005 report Federal Procurement after Adarand (Washington, D.C.: September 2005), which satisfied the Commission's statutory requirement for that year; its 2005 report, Funding Federal Civil Rights Enforcement: The President's 2006 Request (Washington, D.C.: September 2005); and its report on the Voting Rights Act, which is planned for publication in 2006 to satisfy the Commission's annual statutory requirement. In May 2005, the Commission also clarified the Commissioners' role in approving briefings and hearings. According to the Commission's new policies, Commissioners must approve briefing topics and the panel of speakers for briefings at a monthly Commission meeting at least 1 month in advance of the briefing itself. In addition, in order to hold a hearing, a majority of the Commission or a majority of the members present at a meeting with a quorum must vote to approve the hearing. Commission briefings and hearings are usually part of specific national office projects, which include distinct stages such as the concept, proposal, and design. The Commission's Hearing Manual provides detailed administrative and legal procedures for conducting hearings and for posthearing activities. For example, the manual describes the process for selecting team members to prepare for hearings and the process for verifying hearing transcripts following hearings. The manual also notes that the final decision to hold a hearing belongs to the Commissioners. The Commission has not held any hearings since 2002. According to the Commission's policies, the state advisory committees provide their advice on civil rights issues by submitting committee reports and other written products to the Commission. The Commission's Administrative Manual and State Advisory Committee Handbook have policies and procedures for developing and approving these state committee products. State advisory committee members generally propose a civil rights topic and vote to approve it for development. After regional staff researches the topic and drafts a formal proposal, the committee votes to approve it.
The approved proposal is then forwarded to the national office for the Staff Director's approval. Following approval of the proposal, the regional staff conducts research, such as conducting interviews and inviting speakers to public meetings in local communities, to help the committee in its fact-finding process. The regional staff also writes a draft report using interviews, background research, and transcripts of the speakers' comments that were made at public community meetings. After the committee reviews and votes on the draft report, the regional director sends the approved committee report to the national office for review, a procedure that is also followed for any dissenting statements. The state advisory committee votes to approve the final report. The regional office sends the final approved committee report to the Commissioners, who, under a new policy approved in February 2006, receive all state advisory committee reports that the Staff Director has approved as having satisfied the procedural and legal criteria for such reports. However, the Commissioners are not asked to accept or reject the committee reports. The Commission also prints all state advisory committee reports that have satisfied the criteria for such reports. (See fig. 3.) State advisory committee charters provide, among other things, biographical and demographic data on the members of the committees. All of the information presented in the electronic supplement to this report reflects the membership criteria that existed prior to the new criteria proposed in 2005. [For detailed data by state see electronic supplement at http://www.gao.gov/cgi-bin/getrpt?GAO-06-551SP .] As shown by the state charters, state advisory committees are generally composed of a demographically diverse group of individuals. The size of committees varies from 11 to 26 members, though 73 percent of the committees have between 11 and 14 members. Overall, these committees' members are broadly reflective of the state populations they represent, though the committees generally draw more heavily from minority populations, such as persons of color and religious minorities. State advisory committees tend to have a fairly even gender distribution, though, compared with 2000 census data, committee members are, on the whole, older than the general population. For example, 43 percent of Americans 18 or over are under 40 years old, whereas only 23 percent of advisory committee members are in this age range. This trend is consistent throughout most of the regions considered in this study. Overall, most committee members fall into the 40-59 age range, while approximately a fifth of members are 60 or over, a proportion that closely parallels that of the general population. Racial minorities constitute a large percentage of state advisory committee membership, with black members holding the most minority committee positions. While whites constitute 72 percent of the nation's population, white committee members hold only 35 percent of committee positions nationwide. Black members are the second-largest demographic group on the committees, constituting 29 percent of state advisory committee membership. Hispanic members also play a prominent role; 15 percent of committee members across the nation consider themselves Hispanic, a proportion that is comparable to that of the general adult U.S. population. The Midwest has the largest gap in terms of parity: for example, 32 percent of its committee members are black, compared with 9 percent of the region's adult population.
Persons with disabilities are reasonably well represented on the committees. While 19 percent of citizens were identified in the census nationwide as disabled, these individuals constitute approximately 16 percent of committee members. However, there are wide disparities among committees: for example, one committee has no members with disabilities, while several committees have 5 or 6 disabled members. Each committee had at least one Republican and one Democrat. However, committee membership tended to be more Democratic than the populations of the respective states; 46 percent of members consider themselves Democrats, in contrast to an estimated 31 percent nationwide. Independent members also constitute a large share of the state advisory committee membership (27 percent), a trend that is most prominent in the Northeast. Religious affiliations also differ among regions, and many committee members do not categorize themselves as Catholic, Protestant, or Jewish. In fact, most committee members in the western and northeastern states do not identify with one of the three main religions, while committees in the Midwest and South have mostly Protestant members. State advisory committee members hold a variety of occupations, from homemaker to university president. The most common occupations held by committee members include professor/assistant professor, attorney at law, and executive-level positions within nonprofit or governmental entities, such as social services organizations or county commissions. In addition, many committee members are elected officials, teachers, community activists, business owners, students, or private sector employees. Many members participated in regional or state civil rights organizations that promote civil rights advancement. Robert E. Robertson, (202) 512-9889 or robertsonr@gao.gov. Revae E. Moran, Assistant Director, and Deborah A. Signer, Analyst in Charge, managed all aspects of the assignment. Mary E. Roy made significant contributions to this report, and Kyle Browning also provided key assistance in collecting data for the report. In addition, Margaret L. Armen, Richard P. Burkard, Susan C. Bernstein, Jessica A. Lemke, Walter K. Vance, and Monica L. Wolford provided essential legal and technical assistance.
The U.S. Commission on Civil Rights (the Commission) was established by the Civil Rights Act of 1957 to serve as an independent, bipartisan, fact-finding agency whose mission is to investigate and report on the status of civil rights in the United States. Since its inception, the Commission has conducted hearings and issued reports highlighting critical, controversial civil rights issues, including racial segregation, impediments to voting rights, and affirmative action. To carry out its fact-finding and reporting mission, the Commission is required to submit at least one report annually to the President and Congress on federal civil rights enforcement efforts, among other requirements. Because the Commission has no enforcement power, the key means for achieving its mission lies in its credibility as an independent and impartial fact-finding and reporting organization. To complement this national fact-finding and reporting effort, separate state advisory committees were also authorized in 1957 to advise the Commission and serve as its "eyes and ears" on state and local civil rights issues. State advisory committees are composed of volunteers appointed by the Commission in every state who conduct public hearings on state and local civil rights issues and issue reports to the Commission on their findings. The Commission's national office reports are researched and written by national office staff and approved by the Commissioners, and the state advisory committee reports are researched and drafted by the Commission's regional office staff under the direction of the state advisory committees. We were asked to assess the Commission's quality assurance policies for its national and state advisory committee reports and other products and the role of the state advisory committees in fulfilling the Commission's fact-finding and reporting mission. More specifically, our objectives were to assess (1) the adequacy of the Commission's policies for ensuring the quality of its products and (2) the role of the state advisory committees in contributing to the Commission's work. The Commission has some policies that provide adequate quality assurance for its products; however, it lacks policies for ensuring the objectivity of its national office reports, briefings, and hearings and providing accountability for decisions made on its national office products. Among its key policies, the Commission requires its national office products to be reviewed for legal sufficiency and provides affected agencies an opportunity to comment on the accuracy of information in its draft reports. In addition, under new Commission policies, Commissioners have an increased role in the development of its products, as we previously recommended. However, the Commission lacks several key policies that could help ensure objectivity in its national office products. The state advisory committees have played a key role in the Commission's work by identifying and reporting on local civil rights issues, but most committees do not have current charters giving them authorization to operate, and the Commission has not fully integrated the committees into the accomplishment of its mission. Traditionally, the committees have gathered data on state and local civil rights issues by holding hearings, forums, and briefings and communicated their findings to the Commission and the public through reports. Since 1980, the state advisory committees have accounted for 200 of the 254 reports published by the Commission. 
Currently, however, 38 of the 51 state advisory committees cannot conduct any work because they do not have approved charters. In late 2005, the Commission began revising the criteria for state advisory committee membership in order to, among other things, move away from racially and ethnically based representation toward greater diversity in expertise and ideas. It also decided that the committees' applications for new charters would not be accepted until they had been redrafted to include only members who meet the new criteria. Several other actions by the Commission have limited the activities of the state advisory committees. First, since the 1990s, because of budgetary constraints, the Commission has significantly reduced the number of regional office staff, who provide extensive support to the state committees in conducting their activities and producing reports. In addition, the Commission has reduced funding for the state advisory committees, including money needed to hold public meetings. Furthermore, draft reports prepared by the state advisory committees are often not reviewed or published by the Commission in a timely manner. For example, most of the state advisory committees we visited told us the national office had not reviewed and accepted their reports in a timely manner, and less than a quarter of the state advisory committee chairs who responded to our survey reported that they were satisfied with the national office's timeliness in processing their reports. The Commission has also not incorporated the work of the state advisory committees into its strategic planning and decision-making processes, including articulating how the national office will use the state advisory committees' findings on state and local civil rights issues to inform the Commission's national goals and strategies. For example, the Commission did not obtain input from the state advisory committees in developing its new draft strategic plan, although the committees play an important role in accomplishing the agency's goals. Finally, although many of these are long-standing issues, the Commission has not provided for independent oversight of its policies and practices for the state advisory committees.
The United States has the largest, most extensive aviation system in the world, with over 19,000 airports ranging from large commercial transportation centers handling millions of passengers annually to small grass airstrips serving only a few aircraft each year. Of these, roughly 3,300 airports are designated by FAA as part of the national airport system and thus are eligible for federal assistance. The national airport system consists of two primary types of airports—commercial service airports, which have scheduled service and enplane 2,500 or more passengers per year, and general aviation (GA) airports, which have no scheduled service and enplane fewer than 2,500 passengers annually. FAA divides commercial service airports into primary airports (enplaning more than 10,000 passengers annually) and commercial service nonprimary airports. The 395 current primary airports are classified by hub type—large-, medium-, small-, and nonhub—based on passenger traffic. Passenger traffic is highly concentrated: 88 percent of all passengers in the United States enplaned at the 63 large- or medium-hub airports in 2013 (see fig. 1). More than 2,900 airports in the national system are designated as GA airports. These airports range from large business aviation and cargo shipment centers that handle thousands of operations a year to small rural airports that may handle only a few hundred operations per year but may provide important access to the national transportation system for their communities. Generally, the level of aviation activity, whether commercial passenger and cargo or general aviation business and private aircraft, helps to generate the funds that finance airport development. The three primary sources of funding for airport development are Airport Improvement Program (AIP) grants, PFCs, and locally generated revenue. All three sources of funds are linked to passenger aviation activity. AIP is supported by the Airport and Airway Trust Fund (AATF), which is funded by airline ticket taxes and fees; GA flights contribute to the AATF through a tax on aviation jet fuel. Airports included in FAA's NPIAS are eligible to receive AIP entitlement (apportionment) grants based on airports' size and can also compete for AIP discretionary grants. AIP grants can only be used for eligible capital projects, generally those that enhance capacity, safety, and environmental conditions, such as runway construction and rehabilitation, airfield lighting and marking, and airplane noise mitigation. The amount made available in AIP appropriations totaled $3.35 billion in fiscal year 2014. The grants generally require a local match ranging from 10 to 25 percent, depending on the size of the airport and the type of project. PFCs, another source of funding for airport development projects, are federally authorized, statutorily capped, airport-imposed fees of up to $4.50 per enplaned passenger per flight segment, with a maximum of $18 per round-trip ticket. The PFC is collected by the airline on the passenger ticket and remitted to the airports (minus a small administrative fee retained by the airline). Introduced in 1991 and initially capped at $3.00 per flight segment, PFC collections can be used by airports for the same types of projects as AIP grants, as well as to pay interest costs on debt issued for those projects.
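To make the fee mechanics concrete, the following minimal sketch (in Python) applies the per-segment charge and round-trip ceiling described above to a hypothetical itinerary. It assumes every airport on the itinerary imposes the maximum $4.50 PFC and ignores the small administrative fee retained by the airline; the itinerary and function name are illustrative only.

```python
# Minimal sketch of how the statutory PFC caps apply to a single ticket.
# Assumptions: every airport on the itinerary imposes the maximum $4.50 PFC,
# and the small administrative fee retained by the airline is ignored.
# The function name and itineraries are illustrative, not from the source.

PFC_PER_SEGMENT = 4.50   # maximum charge per enplaned passenger per flight segment
ROUND_TRIP_CAP = 18.00   # maximum total PFC per round-trip ticket

def pfc_collected(num_segments: int) -> float:
    """Total PFC collected on a round-trip ticket with the given number of segments."""
    return min(num_segments * PFC_PER_SEGMENT, ROUND_TRIP_CAP)

print(pfc_collected(2))  # 9.0  -- nonstop round trip (two enplanements)
print(pfc_collected(4))  # 18.0 -- connecting round trip, at the statutory ceiling
print(pfc_collected(6))  # 18.0 -- additional connections do not raise the charge
```

In practice, the amount collected on a given itinerary depends on which airports actually impose a PFC and at what level, so this sketch is an upper-bound illustration rather than a billing rule.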
Since its inception, landside development projects—including, for example, new terminal projects—and interest payments on debt used to finance eligible projects have each accounted for 34 percent of total PFC collections spent. The maximum level of PFCs was last increased in 2000. Collections totaled almost $2.8 billion in calendar year 2014, and, according to FAA, 358 commercial service airports are collecting PFCs as of February 2015. Airports also fund development projects from revenues generated directly by the airport. Airports generate revenues from aviation activities such as aircraft landing fees and terminal rentals, and non-aviation activities such as concessions, parking, and land leases. Aviation revenues are the traditional method for funding airport development and, along with PFCs, are used to finance the issuance of local tax-exempt debt. Because of the size and duration of some airport development projects—for example, a new runway can take more than a decade and several billion dollars to complete—long-term debt can be the only way to finance these types of projects. FAA's main planning tool for identifying future airport-capital projects is the NPIAS. FAA relies on airports, through their planning processes, to identify individual projects for funding consideration. According to FAA officials, FAA reviews input from individual airports and state aviation agencies and validates both eligibility and justification for the project over the ensuing five-year period. Because the estimated cost of eligible airport projects that airports plan to perform greatly exceeds the grant funding available for these projects, FAA uses a priority system based on airport and project type to allocate the available funds. The Airports Council International-North America (ACI-NA), a trade association for airports, also estimates the cost of planned airport capital projects. While almost all airport sponsors in the United States are states, municipalities, or specially created public authorities, there is still a significant reliance on the private sector for finance, expertise, and control of airport assets. For example, we have previously reported that the majority of airport employees at the nation's major airports are employed by private sector firms, such as concessionaires, and some airports are also operated by private companies. Pursuant to statutory authorization, since 1996, FAA has been piloting an airport privatization program that relaxes certain restrictions on the sale or lease of airports to private entities. A variety of factors has had a substantial impact on the airline industry. We reported in June 2014 that economic issues such as volatile fuel prices and the economic recession have affected the industry, as have airlines' consolidation and the adoption of business models that focus more on capacity management. For instance, the 2007-2009 recession, combined with a spike in fuel prices, helped spur industry mergers and a change in airline business models. Specifically, Delta acquired Northwest in 2008, United and Continental merged in 2010, Southwest acquired AirTran in 2011, and US Airways and American Airlines merged in 2014. Although passenger traffic has generally rebounded as the economy has recovered, the number of commercial aircraft operations has not returned to 2007 levels as airlines are flying larger and fuller aircraft. In June 2014, we found that one outcome of economic pressures and industry changes had been reductions in U.S.
passenger aircraft operations as measured by scheduled flight operations. Many airports lost both available seats and flights since 2007, when aircraft operations last peaked. However, medium- and small-hub airports had proportionally lost more service than large-hub or nonhub airports, as major airlines merged and consolidated their flight schedules at the largest airports. In June 2014, we found—based on our analysis of the Department of Transportation's (DOT) data—that there were about 1.2 million fewer scheduled domestic flights in 2013 as compared to 2007 at large-, medium-, small-hub, and nonhub airports. The greatest reduction in scheduled flights occurred at medium-hub airports, which decreased nearly 24 percent from 2007 to 2013, compared to a decrease of about 9 percent at large-hub airports and about 20 percent at small-hub airports. Medium-hub airports also experienced the greatest percentage reduction in air service as measured by available seats (see fig. 2). While 2014 passenger activity, as represented by the number of passengers onboard aircraft departing U.S. airports, has rebounded nearly to 2007 levels (down 4 percent), the total number of commercial passenger and cargo aircraft departures (operations) in 2014 is still down 18.5 percent since 2007. Declining operations reduce pressure on airports' airside capacity, while rebounding passenger traffic could put pressure on airports' terminals and gates to accommodate passengers. We found in June 2014 that air service to small airports, which generally serve small communities, has declined since 2007 due, in part, to volatile fuel costs and declining populations in small communities. According to a study by the Massachusetts Institute of Technology (MIT), regional aircraft—those mostly used to provide air service to small communities—are 40 to 60 percent less fuel efficient than the aircraft used by mainline carriers at larger hub airports. Further, from 2002 to 2012, fuel costs quadrupled and became the airlines' largest expense at nearly 30 percent of airlines' operating costs. While more recently oil prices have dropped, it remains uncertain whether currently low oil prices will continue. The second major factor affecting small community service is declining population in many regions of the country over the last 30 years. In previous work, we have found that this population movement has decreased demand for air service to certain small communities. For example, geographic areas, especially in the Midwest and Great Plains states, lost population from 1980 through 2010, as illustrated in figure 3 below. As a result, certain areas of the country are less densely populated than they were 35 years ago when the airlines were deregulated and the Essential Air Service (EAS) was created. For small communities located close to larger cities and larger airports, a lack of local demand can be exacerbated by passengers choosing to drive to airports in larger cities to access better service and lower fares. The EAS program was created in 1978 to provide subsidies to some small communities that had service at the time of deregulation. We reported last year that EAS has grown in cost but did help stem the declines in service to those communities as compared to other airports. In June 2014, we reported that GA activity has also declined since 2007, particularly affecting airports that rely on general aviation activity for a large share of their revenue.
For GA airports—which generate revenues from landing fees, fuel sales, and hangar rents—the loss of traffic can have a significant effect on their ability to fund development. A 2012 MIT study that examined trends for GA operations at U.S. airports with air-traffic control towers indicated that from 2000 to 2010, total GA operations dropped 35 percent. According to the MIT study, the number of annual hours flown by GA pilots, as estimated by FAA, has also decreased over the past decade. Numerous factors affect the level of GA operations, including the level of fuel prices, the costs of owning and operating personal aircraft, and the total number of private pilots and GA aircraft. For example, we recently reported on the availability of airline pilots and found that the GA pilot supply pipeline has decreased as fewer students enter and complete collegiate pilot-training programs and fewer military pilots are available than in the past. Earlier this year, FAA reported on airport capacity needs through 2030. The focus of FAA's analysis was not on the broad range of investments airports make to serve passengers and aircraft, but on the capacity of airports to operate without significant delay. Therefore, the primary focus was on airside capacity, especially runway capacity. To do this, FAA modeled recent and forecasted changes in aviation activity, current and planned FAA investments in air-traffic-control modernization, and airport investments in infrastructure, such as new runways, to determine which airports are likely to be congested or capacity constrained in future years. Previous FAA studies in 2004 and 2007 followed a similar methodology. The most recent study found that the number of capacity-constrained airports expected in the future has fallen dramatically from the number projected in the earlier reports, referred to as FACT1 and FACT2 (see fig. 4). For example, in 2004, FAA projected that 41 airports would be capacity constrained by 2020 unless additional investment occurred. However, in the 2015 report, FAA projected that 6 airports will be capacity constrained in 2020. FAA attributed this improvement to changes in aviation activity, investment in air-traffic-control modernization, and the addition of airport runways. In the September 2014 NPIAS, FAA estimated that airports have roughly $33.5 billion in planned development projects for the period 2015 through 2019 that are eligible for federal support in the form of AIP grants. This estimate is roughly 21 percent less than FAA's previous estimate of $42.5 billion for the period 2013 through 2017 (see fig. 5). FAA reported a decrease in estimated needs for most hub-airport categories and all types of airport development except projects to reconstruct or rehabilitate airport facilities, security-related infrastructure projects, and safety projects (see fig. 6). Notably, according to FAA, planned capacity-related development decreased by 50 percent, to $4.9 billion. Planned terminal-related development also saw a major decline, down by 69 percent from the previous estimate. The ACI-NA also estimated airports' planned development for the 2015 through 2019 period for projects both eligible and not eligible for AIP funding. According to ACI-NA, the total estimated planned-development cost for 2015 through 2019 is $72.5 billion, more than twice FAA's estimate for just AIP-eligible projects. ACI-NA's estimate increased 6.2 percent over its prior estimate of $68.7 billion for the 2013–2017 estimating period.
According to ACI-NA, the difference in the respective estimates is attributable to ACI-NA's including all projects rather than just AIP-eligible projects like the NPIAS, as well as including projects with identified funding sources, which the NPIAS excludes. For example, ACI-NA's estimate includes AIP-ineligible projects such as parking facilities, airport hangars, and commercial space in large passenger terminal buildings. ACI-NA attributed more than half of the development costs to the need to accommodate growth in passenger and cargo activity. ACI-NA estimated that 36 percent of planned development costs were for terminal projects. We are currently analyzing FAA and ACI-NA's most recent plan estimates and will be reporting later this year on the results. In fiscal year 2015, Congress made $3.35 billion available in appropriations acts for AIP funding, a reduction from the annual appropriations of $3.52 billion for fiscal years 2007 through 2011. The President's 2016 budget proposal calls for a reduction in annual AIP funding to $2.9 billion in conjunction with an increase in the PFC cap. As we testified in June 2014, if amounts made available in appropriations acts for AIP fall below the $3.2 billion level established in the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century of 2000 and no adjustments are made, under the 2000 Act the amount of AIP entitlement grants would be reduced, but more AIP discretionary grants could be made as a result. The larger amount of AIP funding that would go to discretionary grants would give FAA greater decision-making power over the development projects that receive funding. Previous proposals have considered changing how GA airports are allocated their share of AIP funds, which represented approximately one-quarter of total AIP funds in fiscal year 2014. For example, in 2007, the Administration's FAA reauthorization proposal suggested changing the funding structure for GA airports. Specifically, FAA would have tiered GA airports' funding based on the level and type of aviation activities. AIP entitlement funding would then have ranged, based on the tier, up to $400,000. While this proposal was not adopted, FAA recently undertook an exercise to classify GA airports based on their activity levels. In a 2012 report (Federal Aviation Administration, General Aviation Airports: A National Asset (ASSET 1), May 2012), FAA categorized GA airports as National (84), Regional (467), Local (1,236), and Basic (668); another 497 GA airports were unclassified. In the most recent NPIAS, FAA reported that 281 airports remained unclassified because they did not meet the criteria for inclusion in any of the new categories, thus having no clearly defined federal role; these are typically airports with few or no based aircraft. According to the most recent NPIAS report, many of these 227 airports have received AIP funding in the past and may be considered for future funding if and when their activity levels meet FAA's criteria for inclusion. In addition, large- and medium-hub airports collecting PFCs must return a portion of their AIP entitlement grants, which are then redistributed to smaller airports through the AIP. As previously noted, 68 percent of PFCs have been used to pay for landside development (terminals) and interest charges on debt. In addition, many airports' future PFC collections are already committed to pay off debt for past projects, leaving little room for new development. For example, at least 50 airports have leveraged their PFCs through 2030 or later, according to FAA data.
The President's fiscal year 2016 budget proposal and airports have called for increasing the PFC cap to $8—which is intended to account for inflation since 2000, when the maximum PFC cap was last raised—and eliminating AIP entitlements for large-hub airports. We have reported on the effects of increasing PFCs on airport revenues and passenger demand. Specifically, we found that increasing the PFC cap would significantly increase PFC collections available to airports under the three scenarios we modeled but could also marginally slow passenger growth and therefore the growth in revenues to the AATF. We modeled the potential economic effects of increased PFC caps for fiscal years 2016 through 2024, as shown in figure 7 below. Under all three scenarios, trust fund revenues, which totaled $12.9 billion in 2013 and fund FAA activities, would likely continue to grow overall based on current projections of passenger growth; however, the modeled cap increases could reduce the growth in total AATF revenues by roughly 1 percent because of reduced passenger demand if airlines pass the full amount of the PFC increase along to consumers in the form of increased ticket prices. Airport trade associations, the ACI-NA and the American Association of Airport Executives, have made prior proposals to raise the PFC cap to $8.50 with periodic adjustments for inflation. Airports also generate their own revenues: aviation revenues totaled $5.2 billion, while non-aviation revenues were just over $5 billion. According to ACI-NA, non-aviation revenue has grown faster than passenger traffic since 2004, over 4 percent on average for non-aviation revenue versus 1.5 percent average growth in passenger boardings over the same period. Further, some airports have developed unique commercial activities with stakeholders from local jurisdictions and the private sector to help develop airport properties into retail, business, and leisure destinations. Some examples include: Non-aviation development on airport property: Airports have turned to an increasing range of unique developments on airport property to generate non-aviation revenues, including high-end commercial retail and leisure activities, hotels and business centers, and medical facilities. For example, airports in Denver, Miami, and Indianapolis have built cold storage facilities on airport property in an effort to generate revenue by leasing cold storage space to freight forwarders and businesses that transport low-volume, high-value goods, including pharmaceuticals, produce, and other time-sensitive or perishable items. Public-private partnerships: Airports can fund airport improvements with private sector participation. Public-private partnerships, involving airports and developers, have been used to finance airport development projects without increasing the amount of debt already incurred by airports. For example, the Port Authority of New York and New Jersey has recently received responses to its request for proposals for the private sector to demolish old terminal buildings and construct, partially finance, operate, and maintain a new Central Terminal Building for LaGuardia Airport in New York City. Privatization: FAA's Airport Privatization Pilot Program (APPP), which was established in 1997 to reduce barriers to airport privatization that we identified in 1996, has generated limited interest from the public and private sectors.
As we reported in November 2014, 10 airports have applied to be part of the pilot program and one airport—San Juan Luis Muñoz Marín International Airport in Puerto Rico—has been privatized (see fig. 8). In our report, we noted that several factors reduce interest in the APPP—such as higher financing costs for privatized airports, the lack of state and local property tax exemptions, and the length of time to complete a privatization under the program. Public sector airport owners have also found ways to gain some of the potential benefits of privatization without full privatization, such as entering airport management contracts and joint development agreements for managing and building an airport terminal. In conclusion, last year commemorated one century since the first commercial airline flight, and in that relatively short time span commercial aviation has grown at an amazing pace to become a ubiquitous and mature industry in the United States. While commercial aviation still has many exciting growth prospects for its second century, it also faces many challenges—among them how to ensure that the aviation system can accommodate millions of flights and hundreds of millions of passengers every year in the midst of shifting aviation activity and constrained federal funding. Despite recent declines in airport operations, it remains important for airports to be maintained and upgraded to ensure safety and accommodate future growth. Declines in airport operations have reduced demands on AIP, but rebounding passenger activity could continue to put pressure on PFCs to finance terminal and other projects. Developing airports will require the combined resources of federal, state, and local governments, as well as private companies' capital and expertise. Effectively supporting this development involves focusing federal resources on FAA's key priorities of maintaining the world's safest aviation system and providing adequate system capacity, while allowing sufficient flexibility for local airport sponsors to maximize local investment and revenue opportunities. In deciding the best course for future federal investment in our national airport system, Congress is faced with weighing the interests of all aviation stakeholders, including airports, airlines, other airport users, and most importantly passengers, to help ensure a safe and vibrant aviation system. Madam Chair Ayotte, Ranking Member Cantwell, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information about this testimony, please contact Gerald L. Dillingham at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Paul Aussendorf (Assistant Director), Amy Abramowitz, David Hooper, Delwen Jones, Josh Ormond, Melissa Swearingen, and Russell Voth. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
U.S. airports are key contributors to the national and regional economies, providing mobility for people and goods, both domestically and internationally. Since 2007, when GAO last reported on airport funding, airports of all sizes have experienced significant changes in aviation activity. Financing for airport capital improvements is based on a mix of federal AIP grants, federally authorized but statutorily-capped PFCs, and locally generated aviation-related and non-aviation-related revenues. As deliberations begin in advance of FAA's 2015 reauthorization, Congress is faced with considering the most appropriate type, level, and distribution of federal support for development of the National Airspace System. This testimony discusses trends in (1) aviation activity at airports since 2007, (2) forecasted airport capacity needs and airports' planned development costs, and (3) financing for airport development. This testimony is based on previous GAO reports issued from June 2007 through December 2014, with selected updates conducted through April 2015. To conduct these updates, GAO reviewed recent information on FAA's program activities and analyses outlined in FAA reports, including the 2015 aviation forecast and the 2015–2019 planned airport-development estimates. Economic factors, since 2007, have led to fewer scheduled commercial flights, a trend more pronounced for some types of airports. These economic factors include not just the volatile fuel prices and the 2007 to 2009 recession but also evolving airline practices, such as airline mergers and the adoption of business models that emphasize capacity management. For example, as GAO reported in June 2014, the number of scheduled flights at medium- and small-hub airports declined at least 20 percent from 2007 to 2013, compared to about a 9 percent decline at large-hub airports. General aviation (GA) activity has also declined, as measured by the number of GA aircraft operations and hours flown, due to similar economic factors. In recent years, however, passenger growth has rebounded. According to the Federal Aviation Administration's (FAA) projections, U.S. airline passenger traffic is predicted to grow 2 percent per year through 2035—a growth rate that is slightly lower than that of past forecasts. According to FAA estimates, the number of airports that require additional capacity to handle flight operations without significant delays has declined since 2004. Similarly, the future cost of planned airport development has also declined in recent years. Earlier this year, FAA projected that 6 airports will be capacity constrained in 2020, compared to 41 in the 2004 projection. Even with this improvement, some airports—like those in the New York City area—will remain capacity constrained, according to FAA. The overall improved capacity situation is also reflected in reduced estimates of future airport-development costs that are eligible for federal grants. In September 2014, the FAA estimated that for the period 2015 through 2019, airports have about $33.5 billion in planned development projects eligible for federal Airport Improvement Program (AIP) grants—a 21 percent reduction from the $42.5 billion estimate for the time period 2013 through 2017. The biggest decline in planned development costs among project categories is in capacity projects such as new runway projects.
However, an airport industry association estimated planned airport capital project costs, both those eligible and not eligible for AIP, of $72.5 billion for 2015 through 2019, an increase of 6.2 percent from the association's prior 5-year estimate for 2013 through 2017. As traditional funding sources for airport development have generally declined, airports have increasingly relied on other sources of financing. Specifically, federal AIP grants and Passenger Facility Charges (PFC) are two primary sources of federally authorized funding for airports. The amount made available for AIP decreased from over $3.5 billion for fiscal years 2007 through 2011 to less than $3.4 billion for fiscal year 2015. Further, the President's 2016 proposed budget calls for additional reductions in AIP, though these would be offset by a proposed increase in the PFC cap, which is currently $4.50 per flight segment. Airports have sought additional opportunities to collect non-aviation revenues. As a result, according to FAA, non-aviation revenue has increased each year from 2008 through 2014. For example, airports have 1) partnered with the private sector to fund airport improvements; 2) identified new business ventures on airport property, including the development of commercial retail, leisure activities, and medical facilities; and 3) explored options for privatization.
The JWST project continues to report that it remains on schedule and budget with its overall schedule reserve currently above its plan. However, the project is now entering a difficult phase of development—integration and testing—which is expected to take another 3.5 years to complete. Maintaining as much schedule reserve as possible is critical during this phase to resolve known risks and unknown problems that may be discovered. As one of the most complex projects in NASA's history, the project faces significant risks ahead, as it is during integration and testing that problems are likely to be found and, as a result, schedules tend to slip. As seen in figure 1, only two of five elements and major subsystems—ISIM and OTE—have entered the integration and testing phase. Integration and testing for the spacecraft and sunshield and for the ISIM and OTE when they are integrated together begins in 2016, and the entire observatory will begin this phase in late 2017. In December 2014, we reported that schedule risk was increasing for the project because it had lost schedule reserve across all elements and major subsystems. As a result, all were within weeks of becoming the critical path of the project and driving the project's overall schedule. Figure 2 shows the different amounts of schedule reserve remaining on all elements and major subsystems, their proximity to the critical path, and the total schedule reserve for the critical path at the time of our review. The proximity of all the elements and major subsystem schedules to the critical path means that a delay on any of the elements or major subsystems may reduce the overall project schedule reserve further, which could put the overall project schedule at risk. As a result, the project has less flexibility to choose which issues to mitigate. While the project has been able to reorganize work when necessary to mitigate schedule slips thus far, with further progression into subsequent integration and testing periods, flexibility will be diminished because work during integration and testing tends to be more serial, as the initiation of work is often dependent on the successful and timely completion of the prior work. This is particularly the case with JWST given its complexity. Challenges with the development and manufacturing of the sunshield and the cryocooler were the most significant causes of the decline in schedule reserve that we reported on in December 2014. The sunshield experienced a significant manufacturing problem during the construction of the large composite panel that forms part of the sunshield's primary support structure. Delivery of the cryocooler compressor assembly—one component of the cryocooler—is a top issue for the project, and its development has required a disproportionate amount of cost reserves to fund additional work, caused in part by issues such as a manufacturing error and a manufacturing process mistake that delayed the schedule. The development of the cryocooler has been a concern for project officials since as far back as 2006. Since that time, the cryocooler has faced a number of technical challenges, including valve leaks and cryocooler underperformance, which required two subcontract modifications and significant cost reserves to fund. The contractor and subcontractor were focused on addressing valve problems, which limited their attention to the cooling underperformance issue.
This raised questions about the oversight of the cryocooler and why it did not get more attention sooner, before significant delays occurred. In August 2013, the cryocooler subcontract was modified to reflect a 69 percent cost increase, and the workforce dedicated to the cryocooler effort at the subcontractor increased from 40 staff to approximately 110 staff. Since we issued our December 2014 report, JWST schedule reserve has continued to decline: project schedule reserve decreased by 1 month, leaving 10 months of schedule reserve remaining, and the critical path switched from the cryocooler to the ISIM. The project is facing additional challenges with the testing of the ISIM and OTE and the manufacturing of the spacecraft, in addition to continuing challenges with the cryocooler compressor assembly, which further demonstrates continued schedule risk for the project. For example, after the second test for the ISIM—the element of JWST that contains the telescope's four different scientific instruments—electronic, sensor, and heat strap problems were identified that affect two of the four instruments. Mitigating some of these issues, including allowing officials time to replace the unusable and damaged parts, led to a 1.5-month slip in the ISIM schedule and made ISIM the current critical path of the project. As a result, ISIM's third and final cryovacuum test, scheduled to begin in August 2015, has slipped to September 2015. The OTE and spacecraft efforts are also experiencing challenges that may affect the schedules for those efforts. For example, it was discovered that over 70 harnesses on the OTE potentially had nicks on some wires, and the majority will need to be repaired or rebuilt. The effects of these challenges on the project's schedule are still being determined. Finally, the cryocooler compressor assembly has yet to be delivered and will be more than 16 months late if the current delivery date holds. Since our December 2014 report, the cryocooler compressor assembly's delivery has slipped almost an additional 2 months due to manufacturing and build issues and an investigation of a leak at a joint with the pulse tube pre-cooler. Currently, the cryocooler compressor assembly is expected to be delivered in mid-June 2015 and is only 1 week off the project's critical path. Entering fiscal year 2015, the JWST project had limited short-term cost reserves to address technical challenges and maintain schedule. We reported that the project had committed approximately 40 percent of its fiscal year 2015 cost reserves before the start of the fiscal year. As a result, one of the project's top issues for fiscal year 2015 is its cost reserve posture, which the project reported is less than desired and will require close monitoring. At the end of February 2015, project officials had committed approximately 60 percent of the fiscal year 2015 cost reserves and noted that fiscal year 2016 reserves would also require close monitoring. The types of technical problems JWST is experiencing are not unusual for a project that is unique and complex. They are an inherent aspect of pushing technological, design, and engineering boundaries. What is important when managing such a project is having a good picture of risks, which can shift from day to day, and having effective tools for mitigating risks as they surface. Using up-to-date and thorough data on risks is also integral to estimating the resources needed to complete the project.
Given the cost of JWST, its previous problems with oversight, and the fact that the program is entering its most difficult phases of development, risk analysis and risk management have been a key focus of our work. JWST officials have taken an array of actions following the 2011 replan to enable the program to have better insight into risks and to mitigate them. For instance, we reported in 2012 that the project had implemented a new risk management system after it found the previous system lacked rigor and was ineffective for managing risks. The project instituted meetings at various levels throughout NASA and its contractors and subcontractors to facilitate communication about risks. The project also added personnel at contractor facilities, which allowed for more direct interaction and quicker resolution of issues. However, we reported in December 2014 that neither NASA nor the prime contractor had updated the cost risk analysis that underpinned the cost and schedule estimates for the 2011 replan. A cost risk analysis quantifies the cost impacts of risks and should be used to develop and update a credible estimate that accounts for all possible risks—technical, programmatic, and those associated with budget and funding. Moreover, conditions have changed significantly since the replan. For example, the delivery of the cryocooler compressor assembly is one of the project's top issues and was not an evident risk when the cost risk analysis was conducted in 2011. On the prime contract, our analysis found that 67 percent of the risks tracked by Northrop Grumman in April 2014, at the time of our analysis, were not present in September 2011, at the time of the replan. We determined that a current and independent cost risk analysis was needed to provide Congress with insight into JWST's remaining work on the Northrop Grumman prime contract—the largest (most expensive) portion of work remaining. A key reason for this determination was, and continues to be, the significant potential impact that any additional cost growth on JWST would have on NASA's broader portfolio of science projects. To provide updated and current insight into the project's cost status, we took steps to conduct an independent, unbiased analysis. We were, however, unable to conduct the analysis because Northrop Grumman did not allow us to conduct anonymous interviews of technical experts without a manager present. In order to collect unbiased data, interviewees must be assured that their opinions on risks and opportunities will remain anonymous. Unbiased data would have allowed us to provide a credible assessment of risks for Northrop Grumman's remaining work. NASA and the JWST project disagreed that an independent cost risk analysis conducted by an outside organization at this point in the project would be useful. Neither believed that an organization external to NASA could fully comprehend the project's risks. Further, they noted that any such analysis would be overly conservative due to the complexities of the risks and not representative of the real risk posture of the project. GAO's best practices call for cost estimates to be compared to independent cost estimates in addition to being regularly updated. Without an independent and updated analysis, both the committee members' and NASA's oversight and management of JWST will be constrained because the impacts of newer risks have not been reflected in key tools, including the cost estimate.
Moreover, our methodology would have provided both NASA and Northrop Grumman with several opportunities to address concerns with our findings, including concerns about conservatism. After we were unable to conduct the cost risk analysis, NASA decided to conduct its own cost risk analysis of the remaining Northrop Grumman work. However, a NASA project official said that the project did not plan to use data from the cost risk analysis to manage the project. Instead, officials indicated that they planned to use the information to inform committee members of the project's cost risk and would continue to rely on other tools already in place to project the future costs of the project, such as earned value management (EVM) analysis. To maintain quality cost estimates over the life of a project, best practices state that cost risk analyses should be updated regularly to incorporate new risks and be used in conjunction with EVM analysis to validate cost estimates. While EVM is a very useful tool for tracking contractor costs and potential overruns, the analyses are based on past performance and do not reflect the potential impact of future risks. We reported that if the project did not follow best practices in conducting its cost risk analysis or use it to inform project management, the resulting information might be unreliable and might not provide substantive insight into JWST's potential cost to allow either Congress or project officials to take any warranted action. To better ensure NASA's efforts would produce a credible cost risk analysis, in December 2014 we recommended that officials follow best practices while conducting a cost risk analysis on the prime contract for the work remaining and update it as significant risks emerged. Doing so would help ensure that the analysis provides information to effectively manage the program. NASA partially concurred with our recommendation, noting that it has a range of tools in place to assess all contractors' performance, that the approach the project has in place is consistent with best practices, and that officials will update the cost risk analysis again when required by NASA policy. We found that NASA best practices for cost estimating recommend updating the cost risk analysis while a project is being designed, developed, and tested, as changes can affect the estimate and the risk assessment. Since our report was published, NASA completed its analysis and provided the results to us. We are currently examining the analysis to assess its quality and reliability and the extent to which it was done in accordance with NASA and GAO best practices. Our initial examination indicates the JWST project took the cost risk analysis seriously and took best practices into account in executing the analysis. The project has also recently begun conducting a new analysis of EVM data, which it terms a secondary estimate at completion analysis, for two of its largest contractors—Northrop Grumman and Exelis—on the work to go. This analysis should provide the project additional insight on the probabilities of outcomes while incorporating current risks against the cost reserves that remain. The initial analysis we have reviewed indicates that both contracts are forecast to generally cost more at completion than the information produced using EVM analysis alone, but within the JWST life-cycle cost. However, we still have work to do to understand how NASA is analyzing the information and what assumptions it is putting into its analysis.
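To illustrate the distinction drawn above between EVM projections and risk-informed estimates, the following sketch applies the standard, publicly documented estimate-at-completion formula and then layers on a simple Monte Carlo sum of hypothetical future risks. All figures are notional and the code is a generic illustration of the technique, not NASA's or Northrop Grumman's actual model or data.

```python
import random

# Notional EVM status for a contract (illustrative figures only)
bac = 1_000.0  # budget at completion, in $ millions
ev = 600.0     # earned value of work completed to date
ac = 650.0     # actual cost of work completed to date

cpi = ev / ac                    # cost performance index (past efficiency)
eac_evm = ac + (bac - ev) / cpi  # standard EVM estimate at completion:
                                 # remaining work projected at past efficiency

# Hypothetical future risks, each as (minimum, most likely, maximum) cost impact.
# A real cost risk analysis would draw these from the project's risk register.
risks = [(2.0, 10.0, 30.0), (0.0, 5.0, 20.0), (5.0, 15.0, 40.0)]

def simulated_risk_totals(trials=10_000):
    """Monte Carlo: sum triangular-distributed impacts across all risks per trial."""
    return [
        sum(random.triangular(low, high, mode) for (low, mode, high) in risks)
        for _ in range(trials)
    ]

totals = sorted(simulated_risk_totals())
p70_adjustment = totals[int(0.7 * len(totals))]  # 70th-percentile risk dollars

print(f"EVM-only estimate at completion:   {eac_evm:,.1f}")
print(f"Risk-adjusted estimate (70th pct): {eac_evm + p70_adjustment:,.1f}")
```

Because the EVM-only figure extrapolates past performance alone, it sits below the risk-adjusted figure whenever significant future risks remain unrealized, which is why best practices call for using the two analyses together.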
In conclusion, with approximately 3.5 years until launch, project officials have made much progress building and testing significant pieces of hardware and are currently on schedule—achieving important milestones—and on budget. They have also taken important steps to increase their insight and oversight into potential problems. What is important going forward is having good insight into risks and preserving as much schedule reserve as possible—particularly given the complexity of the project, the fact that it is entering deeper into its integration and testing cycle, and the fact that it has limited funds available in the short term to preserve schedule. Any cost growth on JWST may have wider implications for NASA's other major programs. While we are concerned about NASA's reluctance to accept an independent cost risk assessment, particularly in light of past problems with oversight, we are also encouraged that NASA took steps to conduct an updated risk analysis of Northrop Grumman's work and that NASA has sustained and enhanced its use of other tools to monitor and manage risk. As we undertake this year's review of JWST, we will continue to focus on risk management, the use of cost reserves, progress with testing, as well as the extent to which its cost risk analysis followed best practices. We look forward to continuing to work with NASA on this important project and reporting to Congress on the results of our work. Chairman Palazzo, Ranking Member Edwards, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to answer questions related to our work on JWST and acquisition best practices at this time. For questions about this statement, please contact Cristina Chaplain at (202) 512-4841 or at chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony were Shelby Oakley, Assistant Director; Karen Richey, Assistant Director; Jason Lee, Assistant Director; Patrick Breiding; Laura Greifner; Silvia Porres; Carrie Rogers; Ozzy Trevino; and Sylvia Schatz.
James Webb Space Telescope: Project Facing Increased Schedule Risk with Significant Work Remaining. GAO-15-100. Washington, D.C.: December 15, 2014.
James Webb Space Telescope: Project Meeting Commitments but Current Technical, Cost, and Schedule Challenges Could Affect Continued Progress. GAO-14-72. Washington, D.C.: January 8, 2014.
James Webb Space Telescope: Actions Needed to Improve Cost Estimate and Oversight of Test and Integration. GAO-13-4. Washington, D.C.: December 3, 2012.
NASA's James Webb Space Telescope: Knowledge-Based Acquisition Approach Key to Addressing Program Challenges. GAO-06-634. Washington, D.C.: July 14, 2006.
NASA: Assessments of Selected Large-Scale Projects. GAO-15-320SP. Washington, D.C.: March 24, 2015.
NASA: Assessments of Selected Large-Scale Projects. GAO-14-338SP. Washington, D.C.: April 15, 2014.
NASA: Assessments of Selected Large-Scale Projects. GAO-13-276SP. Washington, D.C.: April 17, 2013.
NASA: Assessments of Selected Large-Scale Projects. GAO-12-207SP. Washington, D.C.: March 1, 2012.
NASA: Assessments of Selected Large-Scale Projects. GAO-11-239SP. Washington, D.C.: March 3, 2011.
NASA: Assessments of Selected Large-Scale Projects. GAO-10-227SP. Washington, D.C.: February 1, 2010.
NASA: Assessments of Selected Large-Scale Projects. GAO-09-306SP. Washington, D.C.: March 2, 2009.
JWST is one of the National Aeronautics and Space Administration's (NASA) most complex and expensive projects. At an anticipated cost of $8.8 billion, JWST is intended to revolutionize understanding of star and planet formation, advance the search for the origins of the universe, and further the search for earth-like planets. Since entering development in 1999, JWST has experienced significant schedule delays and increases to project costs and was rebaselined in 2011. With significant integration and testing planned during the approximately 3.5 years until the launch date in October 2018, the JWST project will need to address many challenges before NASA can conduct the science the telescope is intended to produce. GAO has reviewed JWST for the last 3 years as part of an annual mandate and for the last 7 years as part of another annual mandate to review all of NASA's major projects. Prior to this, GAO also issued a report on JWST in 2006. This testimony is based on GAO's third annual report on JWST (GAO-15-100), issued in December 2014, with limited updated information provided where applicable. That report assessed, among other issues, the extent to which (1) technical challenges were impacting the JWST project's ability to stay on schedule and budget, and (2) budget and cost estimates reflected current information about project risks. To conduct that work, GAO reviewed monthly JWST reports, interviewed NASA and contractor officials, reviewed relevant policies, and conducted independent analysis of NASA and contractor data. James Webb Space Telescope (JWST) project officials report that the effort remains on track toward the schedule and budget established in 2011. However, the project is now in the early stages of its extensive integration and testing period. Maintaining as much schedule reserve as possible during this phase is critical to resolve challenges that will likely surface and negatively impact the schedule. JWST has begun integration and testing for only two of five elements and major subsystems. While the project has been able to reorganize work when necessary to mitigate schedule slips thus far, this flexibility will diminish as work during integration and testing tends to be more serial, with the initiation of work often dependent on the successful and timely completion of prior work. In December 2014, GAO reported that delays had occurred on every element and major subsystem schedule, each was at risk of driving the overall project schedule, and the project's schedule reserve had decreased from 14 to 11 months. As a result, further delays on any element or major subsystem would increase the overall schedule risk for the project. At the time of the report, challenges with manufacturing of the cryocooler (which chills an infrared light detector on one of JWST's four scientific instruments) had delayed that effort and it was the driver of the overall project schedule. Since the December report, the project's overall schedule reserve decreased to 10 months as a result of several problems that were identified following a test of the Integrated Science Instrument Module (ISIM), which contains the telescope's scientific instruments. ISIM is now driving the overall project schedule. Furthermore, additional schedule impacts associated with challenges on several other elements and major subsystems are still being assessed.
At the time of the December 2014 report, the JWST project's and prime contractor's cost risk analyses used to validate the JWST budget were outdated and did not account for many new risks identified since 2011. GAO best practices for cost estimating call for regularly updating cost risk analyses to validate that reserves are sufficient to account for new risks. GAO recommended, among other actions, that officials follow best practices while conducting a cost risk analysis on the prime contract and update the analysis as significant risks emerged. NASA partially concurred, noting that it has a range of tools in place to assess performance and would update the analysis as required by policy. Since then, officials have completed the analysis, and GAO is currently examining the results.
The percentage of older workers in the total workforce—those aged 55 and older—is growing faster than that of any other age group, and these workers are expected to live longer than past generations. Labor force participation for this cohort has grown from about 31 percent in 1998 to 38 percent in 2007. In contrast, labor force participation of workers under the age of 55 has declined slightly (see fig. 1). Many factors influence workers' retirement and employment decisions, including retirement eligibility rules and benefits, an individual's health status and occupation, the availability of health insurance, personal preference, personal wealth, and the employment status of a spouse. The availability of suitable employment, including part-time work or flexible work arrangements, may also affect the retirement and employment choices of older workers. In addition, under current law, many federal retirees face a financial disincentive if they decide to return to the federal workforce: their salaries would be reduced by the amount of their annuities. In the late 1980s, the federal government phased in a new retirement system, replacing the Civil Service Retirement System (CSRS), a defined benefit plan, with the Federal Employees Retirement System (FERS), which combines a defined benefit plan with Social Security and a defined contribution plan. Under CSRS, a worker can retire with full benefits at age 55 with 30 years of service and at older ages with fewer years of service. Under FERS, workers can receive their full annuity at age 60 with 20 years of service. But, because of Social Security rules, workers under FERS do not receive full Social Security benefits until they are 65 or older; the specific age at which workers receive their full Social Security benefits depends on their date of birth. In addition, because the Thrift Savings Plan is an important part of a retiree's income under FERS, balances in an individual's Thrift Savings Plan also may help dictate when an individual chooses to retire. While we do not know the total effect of this shift in retirement plans on retirement decisions, it is possible that older workers under FERS will work longer than their CSRS colleagues in order to increase their retirement earnings. As the government's human capital leader, OPM is responsible for helping agencies develop their human capital management systems and holding them accountable for effective human capital management practices. For example, one such practice is strategic human capital planning, which addresses two critical needs: (1) aligning an organization's human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining staff to achieve these goals. In developing these strategies, we have reported that leading organizations go beyond a succession-hiring approach that focuses on simply replacing individuals. Rather, they engage in broad, integrated succession planning and management efforts directed at strengthening both current and future organizational capacity. In implementing their personnel policies, federal agencies are required to uphold federal merit system principles in recruiting, engaging, and retaining employees. Among other provisions, the merit system principles require agencies to recruit and select candidates based on fair and open competition, as well as to treat employees fairly and equitably.
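The CSRS and FERS eligibility rules summarized above lend themselves to a simple illustration. The following sketch encodes only the two rules stated in this statement (CSRS full benefits at age 55 with 30 years of service; FERS full annuity at age 60 with 20 years); actual law includes additional age-and-service combinations and provisions that are not modeled here.

```python
def csrs_full_benefits(age: int, years_of_service: int) -> bool:
    """Simplified CSRS rule from the text: full benefits at age 55 with 30 years.
    (CSRS also allows retirement at older ages with fewer years; not modeled.)"""
    return age >= 55 and years_of_service >= 30

def fers_full_annuity(age: int, years_of_service: int) -> bool:
    """Simplified FERS rule from the text: full annuity at age 60 with 20 years.
    Full Social Security benefits still depend on date of birth (65 or older)."""
    return age >= 60 and years_of_service >= 20

# Example: a 58-year-old worker with 32 years of service
print(csrs_full_benefits(58, 32))  # True under the simplified CSRS rule
print(fers_full_annuity(58, 32))   # False: FERS full annuity not until age 60
```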
Federal agencies can recruit skilled or experienced workers, many of whom tend to be older, to fill full- or part-time career positions requiring demonstrated expertise. OPM is also responsible for providing guidance and information about federal governmentwide hiring and retention authorities and flexibilities to agencies to help them address workforce challenges. In some cases, agencies can use options as they choose without prior approval from OPM. For example, federal agencies can engage workers, including older workers, as consultants to meet temporary or intermittent needs or contract with individuals or organizations to obtain needed skills. In other cases, OPM serves as the gatekeeper by approving or disapproving an agency’s request to use particular options (see table 1). Of these options, only one, rehiring annuitants without reducing their salaries, exclusively affects older workers. OPM is also responsible for administering retirement, health benefits, and other insurance services to civil service government employees, annuitants, and beneficiaries. It also develops regulations when Congress makes new options available to federal employees and often advocates for new legislation. We and others have highlighted the need to hire and retain older workers to address the challenges associated with an aging workforce. In so doing, we have called upon the federal government to assume a leadership role in developing strategies to recruit and retain older workers. At our recommendation, the Department of Labor (Labor) convened a task force composed of senior representatives from nine federal agencies and issued its first report in February 2008. The report provides information on strategies to support the employment of older workers, strategies for businesses to leverage the skills of an aging labor pool, individual opportunities for employment of older workers, and legal and regulatory issues associated with work and retirement. In March 2008, the task force formed work groups for each strategy that continue to meet and develop implementation plans, and in June, OPM joined the task force and two of the work groups. While the task force’s focus is the private sector, some of the strategies it identified are relevant for federal agencies as well—for example, providing flexible work arrangements and customized employment options that include alternative work schedules and part-time work. Across the federal government, the proportion of older and retirement- eligible workers is growing. About one-third of federal workers will be eligible to retire by 2012, although most federal employees do not retire immediately upon becoming eligible. The percentage of federal workers nearing retirement eligibility varies across federal agencies. For example, within 5 years, 20 percent of employees at the Department of Homeland Security (DHS) will be able to retire, while 46 percent of employees at HUD and Transportation will be eligible. Governmentwide, the proportion of senior executives and supervisors eligible to retire by 2012 exceeds the overall average, with 64 percent of senior executives and 45 percent of supervisors eligible to retire within 5 years. In the current economic situation, projections of how many federal workers will actually retire remain unclear, but historically, the majority of federal workers stay well past their initial retirement eligibility date. 
Of the federal workers who first became eligible to retire between 1997 and 2002, at least 40 percent were still in the federal workforce 5 years later. Beyond retaining their current federal workforce, federal agencies’ efforts to bring in additional older workers to help meet workforce needs currently focus more on hiring them as new employees than on bringing back federal retirees. In 2007, federal agencies hired almost 14,000 new workers aged 55 and older. In comparison, in 2007, federal agencies tapped 5,364 rehired annuitants for service. The average age of federal workers is inching up, and older workers, many of whom have passed their retirement eligibility age, now comprise a larger proportion of the federal workforce than in the past. Based upon OPM’s data for 2007, the last year for which data are available, the average age of federal workers is now about 48, somewhat older than in 1998, when the average age of federal workers was about 45. Similarly, the proportion of federal workers aged 55 or older has also increased to about 24 percent, up from about 20 percent in 1998. However, the proportion of older workers varies widely across federal agencies. For example, in fiscal year 2007, the percentage of career federal employees in the 24 CFO agencies aged 55 or older ranged from about 9 percent at the Department of Justice (Justice) to about 37 percent at HUD and SBA (see fig. 2). Similarly, the proportion of federal workers eligible or nearly eligible to retire is increasing and varies across federal agencies. Thirty-three percent of federal career employees in the workforce at the end of fiscal year 2007 will be eligible to retire by 2012. This is an increase from 10 years earlier when about 20 percent of federal career employees in the workforce at the end of fiscal year 1997 were projected to be eligible to retire by 2002. The effects of retirement will likely differ across agencies, as many workers projected to be eligible to retire by 2012 are concentrated within certain agencies. These proportions range from a high of 46 percent at four agencies—HUD, SBA, Transportation, and USAID—to a low of 20 percent at DHS (see fig. 3). Within agencies, the estimated proportion of workers eligible to retire also varies at the component level—that is, by bureau or unit. Thus, even those agencies that have relatively low overall percentages of retirement-eligible employees may have components that have higher percentages of retirement-eligible staff. This, in turn, could affect the accomplishment of mission tasks and strategic goals for agency components and for the agency as a whole. For example, although the Department of Homeland Security (DHS) has a low proportion—20 percent—of workers eligible to retire by 2012, the proportion of workers eligible to retire now or by 2012 in its Federal Emergency Management Agency is currently about 33 percent, and agency projections indicate that about 58 percent of the senior executives in this agency will be eligible to retire by 2012. Certain occupations have particularly high proportions of workers eligible to retire by 2012. Fifty percent or more of workers in 24 occupations that have 500 or more staff are eligible to retire by 2012. Several of these occupations, such as air traffic controllers, customs and border agents, and administrative law judges, are considered mission critical. 
In addition, federal law requires mandatory retirement at specified ages for some occupations, such as air traffic controllers, who must retire at age 56 (see fig. 4). Moreover, the proportion of federal executives and supervisors who will reach retirement eligibility exceeds the overall proportion of federal employees. Of the approximately 7,200 career executives in the federal workforce at the end of fiscal year 2007, 41 percent were eligible to retire in 2008 and 64 percent will be eligible by 2012. For supervisors who are not career executives, 25 percent were eligible to retire in 2008, and 45 percent are projected to be eligible by 2012. In comparison, 17 percent of all federal workers were eligible to retire in 2008 and 31 percent are projected to be eligible by 2012. Agencies have a variety of options to tap older, experienced workers to fill workforce needs, including retaining workers past initial retirement eligibility, hiring new older workers, and bringing back retired federal annuitants. Although no data are available on the effects of specific retention strategies, most federal workers do not retire immediately upon being eligible, according to OPM's data, and many stay in the federal workforce more than 5 years past their initial retirement eligibility date. Of the more than 31,000 workers who became eligible to retire in 1997, only about 20 percent retired within the first year and over 40 percent were still in the federal workforce after 5 years. These retention trends have remained relatively stable since 1997 (see fig. 5); the current economic situation may result in even higher retention rates. The growing proportion of federal workers who are eligible to retire now or in the near future presents challenges and opportunities for federal agencies. While distinct in many ways, the agencies we reviewed—HUD, SSA, and USAID—share common workforce challenges. Like other federal agencies, HUD, SSA, and USAID have large proportions of employees nearing retirement. Also, according to agency officials, all three agencies have relatively few staff at midlevel positions to help pass down institutional knowledge and skills to less experienced employees due to past budget cuts and hiring freezes. Finally, they are all challenged in their efforts to attract qualified staff with specialized skills. Consequently, when their older workers eventually retire, HUD, SSA, and USAID will face critical gaps in leadership, skills, and institutional knowledge. To address these challenges, these agencies rely on older workers in different ways and use selected governmentwide human capital flexibilities, in addition to their own strategies, to hire and retain older workers. We also found that other federal agencies have developed their own approaches to hiring and engaging older workers to meet their workforce challenges, but these approaches have not been widely shared with other federal agencies. The three case study agencies we reviewed—HUD, SSA, and USAID—have very different missions that lead to different staffing needs and solutions. HUD, for example, assists millions of individuals through programs that help to encourage home ownership, provide housing assistance to low- and moderate-income families, and promote economic development. The agency employs approximately 9,600 individuals—two-thirds of whom work in 81 field offices across the United States.
In addition to federal employees, HUD also relies on thousands of third-party entities—such as private lenders, contractors, nonprofit organizations, and local governments—to administer many of its programs, including rental assistance and community development programs. A large portion of HUD's employees are located in the Office of Housing and work in a variety of positions, including specialists in contract oversight, loan servicing, and public housing revitalization. In comparison, SSA administers retirement and income support programs for millions of disabled or older individuals and their dependents. SSA is a large agency, employing approximately 62,600 individuals—many of whom are located in field offices and work directly with customers. In 2007, SSA's more than 1,200 field offices served approximately 42 million customers. Generally, SSA seeks to hire individuals with strong interpersonal and general analytical skills who are then trained for direct service positions, such as social insurance specialists and contact representatives. In contrast to both HUD and SSA, USAID's employees often work outside the United States to provide humanitarian and economic assistance to about 100 developing countries. Of the approximately 2,400 USAID employees, about half belong to the foreign service and the rest are civil service. Many of USAID's foreign service employees work as foreign service officers. Entry-level qualifications for this position include an advanced degree and relevant international professional experience. In addition to its federal employees, USAID relies heavily on contractors and grantees to implement its overseas development projects, including Food for Peace initiatives in South Asia and Democracy and Governance programs in the Middle East. Table 3 below highlights some of these agencies' characteristics. Despite their differences, HUD, SSA, and USAID share common workforce challenges. Like the federal government as a whole, HUD, SSA, and USAID have large portions of their workforces nearing retirement, and these older, experienced workers may leave behind gaps in leadership, skills, and institutional knowledge. Both HUD and USAID risk losing close to half of their current workforces to retirement by 2012. Similarly, SSA is expected to lose more than one-third of its employees to retirement by 2012—a time when the agency expects to experience a dramatic increase in workload due to the aging baby boom generation. Close to half of the current workforces at HUD and USAID will be eligible to retire in that same time period. Officials believe that many of the retiring employees will leave gaps in institutional knowledge and technical skills, especially in mission-critical occupations—those that agencies consider key to carrying out their missions. For example, SSA officials reported that between about 14 and 64 percent of its employees in mission-critical positions were eligible to retire in fiscal year 2008. This includes 64 percent of its administrative law judges and about 40 percent of its supervisors, paralegal specialists, claims assistants, and examining specialists. In addition, HUD officials told us that half of its employees in mission-critical occupations—2,057—are currently eligible to retire. While these retirement eligibility rates suggest that large portions of HUD, SSA, and USAID's current employees might retire by 2012, most employees do not retire as soon as they become eligible.
We have previously reported that employment options—such as having the ability to work part-time or having flexible work schedules—may affect workers’ decisions to stay employed. HUD, SSA, and USAID offer these employment options, as well as other strategies, that may help retain older workers. In addition, officials from all three agencies told us that many employees stay past retirement eligibility because they are dedicated to their work and the mission of the agency. SSA officials reported that about 1,500 SSA employees have been with the agency for at least 40 years—tenures that extend well beyond the average retirement age. HUD officials told us that less than 5 percent of its total workforce has retired in the past 2 years. USAID officials told us that on average their employees continue working 5 years after they become eligible to retire. We found that in 1997, 1999, and 2002, between 38 and 52 percent of employees at HUD, SSA, and USAID remained in the federal workforce 5 years after first becoming eligible to retire—somewhat above the overall governmentwide averages of between 41 and 45 percent in those same years. These retention trends may be heightened in the near term, as the recent downturn in the economy may cause many in the nation’s workforce—including federal employees—to postpone their retirements. Another challenge that these agencies face is that they have relatively few staff in midlevel positions (GS-12 to GS-15 for the civil service and FS-4 to FS-2 for the foreign service) to pass down institutional knowledge and skills to less experienced staff. According to agency officials, budget cuts and hiring freezes of the 1990s kept HUD, SSA, and USAID from filling many of their entry-level positions with junior staff who would today be considered experienced, midlevel employees. For example, between 1993 and 2007, HUD’s overall staffing levels decreased by about 30 percent, and USAID’s decreased by about 40 percent. For USAID, not having enough midlevel staff is made even more complicated as the agency has begun to grow. Recent appropriations have allowed the agency to significantly increase its foreign service workforce over the next several years with primarily entry-level staff—employees who could generally benefit from the knowledge and skills of experienced staff. While SSA’s staffing levels have declined in recent years, the size of its workload has increased with the growing number of individuals eligible for SSA’s services. We reported in May 2008 that recent staffing declines may have been a factor in reducing SSA’s ability to complete all of its work while providing quality customer service. In addition, these agencies also are challenged in their efforts to attract qualified staff with specialized skills that are either uncommon or in high demand. USAID officials told us that many of its foreign service positions require specialized and uncommon skills—such as fluency in foreign languages and in-depth knowledge of cultures in remote regions. The limited pool of qualified and experienced individuals that the agency hires for these positions typically is drawn from other federal agencies, such as Treasury and State, as well as nongovernmental organizations. However, more often, USAID relies on less experienced individuals with strong interests in and aptitude for the foreign service and prepares these individuals for the highly specialized positions by providing them with several months of intensive language and overseas training. 
SSA and HUD also have hard-to-fill positions that require specialized skills. One in particular is the administrative law judge—a mission-critical position that both agencies find hard to fill. As of July 2008, all three of HUD's administrative law judge openings remained unfilled, and SSA had to seek approval from OPM to hire back eight retirees for this hard-to-fill position. These agencies also need individuals with other specialized skills that are in high demand by other employers. For example, HUD's financial analyst, auditing, and information technology positions are similar to positions at many private firms, which pay higher salaries than the federal government. Consequently, these positions at HUD have been significantly understaffed—by up to 47 percent in some HUD offices—for the past several years because, according to officials, the agency cannot offer salaries to attract qualified individuals away from the private sector. For the same reason, SSA is challenged to fill a number of its specialized positions, such as those for accountants, attorneys, and information technology technicians. To address their workforce challenges, HUD, SSA, and USAID rely on older workers in different ways, sometimes through the use of selected governmentwide flexibilities that are attractive to all workers, including older workers. Human capital flexibilities represent policies and practices that an agency may use in managing its workforce to accomplish its mission and achieve its goals. These flexibilities—with appropriate safeguards—allow agencies to take actions related to recruitment, retention, compensation, work arrangements, and work-life policies. Depending on their individual workforce needs and goals, agencies tailor the use of these flexibilities. In addition, we learned that other factors influenced the case study agencies' use of human capital flexibilities, including the potential negative consequences they have on an agency's budget or workforce and the ease with which these flexibilities are adopted. Compared to other flexibilities, the flexibilities that help employees strike a work-life balance, including telework and alternative work schedules, may be less complex to adopt since each agency is responsible for establishing its own policies within certain broad guidelines. In addition, these work-life flexibilities may have little to no negative impact on an agency's budget or workforce. For example, a recent OPM survey found that telework has, in fact, improved productivity and morale among many staff. We recently reported that these work-life flexibilities are often extremely important to older workers. For example, some research indicates older workers want to set their own hours and to be able to take time off to care for relatives when needed. In addition, older workers nearing retirement may prefer a part-time schedule as a means to retire gradually. Figure 7 below outlines some of the factors that influence case study agencies' use of selected human capital flexibilities. Overall, we found that these work-life flexibilities were popular options for many employees, including older workers, at HUD, SSA, and USAID. For example, USAID officials told us that almost all of their employees have flexible schedules. While they are popular, these flexibilities do not fit well with every individual or every job.
HUD officials told us that while most of the agency's employees have the opportunity to work a compressed schedule so that they have a day off during a pay period, supervisors and managers are not allowed to use this flexibility because the agency values having management in the office 5 days a week to supervise program functions. Similarly, SSA officials reported that many frontline employees at SSA do not telework because SSA's primary service delivery structure requires staff to be physically present at the field offices, working directly with its customers. To help ensure that these flexibilities are appropriately used, agencies typically require supervisory or managerial approval. Other flexibilities, in contrast, can have substantial consequences on a portion of an agency's budget or workforce. For this reason, certain flexibilities have safeguards in place to help regulate their use. For example, in order to hire federal retirees without reducing their salaries by the amount of their annuities, most agencies must submit a request to OPM. In 2007, OPM approved waivers to rehire only 22 annuitants across HUD, SSA, and USAID. Agencies have the option of rehiring federal retirees without using the dual compensation waiver, but the retirees' salaries are reduced by the amount of their annuities. Perhaps as a result, a relatively small number of retirees—210—across these three agencies elected to return to federal service when their salaries were to be reduced by their annuities. In addition to the governmentwide flexibilities, HUD, SSA, and USAID employ other strategies to involve older workers to help meet their workforce needs. While all three agencies rely on older workers to pass down institutional knowledge and critical skills to less experienced staff, HUD officials told us this is the primary way they actively involve older workers. One way HUD does this is through its formalized mentoring program, which allows senior staff to share their experiences, insights, and professional wisdom with junior staff. The agency has also developed a 2-year training program in which newly hired employees rotate through various positions throughout the agency and work with a variety of experienced employees to learn critical skills and knowledge. Officials told us that they use this program to help train new employees to fill positions that become available and that they do not use recruiting or retention activities directed at older workers with particular skills or experience. According to officials, these mentoring relationships not only help transfer knowledge to less experienced workers, they also help retain older workers through the strong professional relationships the senior staff build with junior employees. In meeting workforce needs, SSA depends, in part, on its historically high retention rate to ensure the right skill levels. Over the past several years, however, SSA has increasingly used information technology in strategic workforce planning and has taken certain actions to recruit and retain older workers. For example, to better understand where to place its human capital efforts, SSA has developed a complex statistical model that uses historical data to project future retirements. Specifically, this model projects who is likely to retire, and SSA uses these projections to estimate gaps in mission-critical positions and to identify what components of the agency could be most affected by the upcoming retirements.
With these estimates, the agency develops action plans focused on hiring, retention, and staff development. As a result of using these models, SSA has developed targeted recruitment efforts that reach out to a broad pool of candidates, some of whom are older workers with valuable leadership experience and skills. SSA is also beginning to reach out to older workers in order to achieve one of its diversity goals—attracting a multigenerational workforce. These steps have included developing recruiting material featuring images of older and younger workers. In addition, SSA has two other efforts specifically designed to retain older workers. One is a phased retirement program, which allows employees to work on a part-time basis rather than fully retiring. The other is a trial retirement program, which allows workers to return to work within a year of retiring if they repay the annuities they have received. However, SSA officials told us that these programs have rarely been used because of the financial penalty workers would face. SSA has also developed programs, including elder and kinship care referral services and financial literacy services, designed to help retain workers. Agency officials told us that USAID tends to bring back its retirees as contractors to fill short-term job assignments and to help train and develop the agency's growing number of newly hired staff. The agency uses various staffing mechanisms, including personal services contracts, to bring back retired foreign service officers to meet short-term workforce needs and to mentor newly hired foreign service officers. However, the agency does not specifically focus on older workers in its recruiting or retention activities. USAID officials told us that their retirees play a key role in helping new staff learn institutional knowledge and new skills. All newly hired foreign service officers have a mentor, who is typically a retired foreign service officer. These mentors work closely with the junior officers during their new staff orientation and initial training. Once a junior officer receives his or her overseas assignment, the mentor maintains contact with the newly hired employee through telephone calls and occasional visits. USAID officials told us that their mentor program greatly helps junior staff become acclimated to the foreign service and that it is an effective means of engaging retirees who have essential skills and knowledge to pass down to new staff. In addition, retired foreign service officers help the agency meet short-term workforce needs. In one example, officials told us that a foreign service officer had to leave an assignment in Haiti several months before a replacement could arrive. USAID brought back a retiree who had experience with and knowledge about Haitian culture to fill the job assignment temporarily. According to officials, these retirees help the agency quickly acquire critical skills and pass down institutional knowledge. Because the federal retirees are contractors, the agency is able to begin and end their service relatively easily. We interviewed officials in several agencies that have developed other approaches to hiring and engaging older workers. Identifying and recruiting retirees with critical skills by using technology. State has developed two databases to match interested foreign service and civil service retirees with short-term or intermittent job assignments that require their skill sets or experiences.
One database—the Retirement Network, or RNet—contains a variety of information, including individuals’ job experiences, foreign language abilities, special skills, preferred work schedules, and favored job locations. To identify individuals with specific needed skill sets, officials match information from RNet with another database that organizes and reports all available and upcoming short-term job assignments. For instance, in 2004, the agency identified current and retired employees familiar with Sumatra’s culture and language and sent many of them to Indonesia to help with the tsunami relief efforts. According to officials, this technology has allowed State to identify individuals with specialized skills and specific job experiences within hours. Before these systems were in place, the search for individuals with specific skills and experiences would have taken days or weeks, and even then, the list of individuals would have been incomplete. Because different personnel rules apply to foreign service and civil service positions, the agency typically brings civil service retirees on as contractors—nonfederal employees without any reduction to earnings or annuities—and, in certain circumstances, may hire foreign service retirees as federal employees who may earn their full salaries while receiving their annuities. Hiring older workers through nonfederal approaches. EPA has designed a program that places workers aged 55 and over in administrative and technical support positions within EPA and other federal and state environmental agencies nationwide. Instead of hiring older workers directly into the government as federal employees, EPA has cooperative agreements with nonprofit organizations to recruit, hire, and pay older workers. Under these agreements, workers are considered program enrollees, not federal employees. EPA’s Senior Environmental Employment (SEE) program started as a pilot project in the late 1970s and was authorized by the Environmental Programs Assistance Act in 1984. According to EPA, there are approximately 1,525 SEE enrollees—many of whom come from long careers in business and government service—who offer valuable knowledge and often serve as mentors to younger coworkers. Depending on their skills and experience, program enrollees’ wages vary, starting at $7.09 per hour and peaking at $17.72 per hour. Using the SEE program as a model, the Department of Agriculture’s (Agriculture) Natural Resources Conservation Service recently developed a pilot project called the Agriculture Conservation Enrollees/Seniors (ACES) program. Officials from both EPA and the Natural Resources Conservation Service (NRCS) told us that their programs are crucial in helping agencies meet workload demands and providing older workers with valuable job opportunities. Partnering with private firms to hire retired workers. In partnership with IBM and the Partnership for Public Service, Treasury is participating in a pilot project that aims to match the talent and interest of IBM retirees and employees nearing retirement with Treasury’s mission-critical staffing needs. Working together, the three organizations are designing a program that intends to send specific Treasury job opportunities to IBM employees with matching skill sets and experience; help create streamlined hiring processes; provide career transition support, such as employee benefits counseling and networking events; and encourage flexible work arrangements. 
Officials are developing the pilot project within existing governmentwide flexibilities that do not require special authority from OPM. As a senior official suggested, designing such a project may reveal the extent to which existing federal flexibilities allow new ways of hiring older workers. Agency officials at Agriculture, EPA, State, and Treasury told us that in developing their promising practices, they faced significant challenges, including negotiating new relationships with private entities and obtaining legislative authority for the program. In overcoming these obstacles, agency officials told us that they learned valuable lessons that could be shared with other agencies to help these agencies adopt similar strategies with less time and effort. To help federal agencies hire and retain skilled workers, OPM provides guidance, planning tools, and training, and often advocates changes in human capital programs by developing legislative proposals for Congress to consider. As components of these efforts, OPM has taken actions that address three areas of concern for applicants, particularly older workers. First, it has begun to streamline the complex federal job application process. In addition, it has developed two legislative proposals—one would eliminate barriers to rehiring federal annuitants, and another would make it easier for certain federal workers to work part-time at the end of their careers. While these two proposals were incorporated into legislation in 2007, efforts stalled before passage, and it is unclear whether they will be reintroduced. Despite OPM's efforts, the agency could do more to facilitate information sharing among federal agencies. OPM provides guidance, planning tools, and training to help federal agencies hire and retain the best qualified workers to fill positions. OPM's efforts are focused on positions and merit, not people, as it helps agencies attract workers who possess the right skills and experience to meet agencies' workforce needs without regard to age or other demographic variables such as sex or ethnicity. OPM encourages federal agencies to market career opportunities available in the federal government to talented individuals from all segments of society, including older workers, as part of their overall recruitment efforts. In its role as human capital leader, OPM provides agencies with guidance and technical support on how to use available hiring programs and flexibilities, many of which are attractive to older workers, to ensure the federal government has an effective civilian workforce. For example, OPM has developed a handbook—Human Resources Flexibilities and Authorities in the Federal Government—that identifies the many human capital flexibilities and authorities currently available and how agencies can address workforce challenges. In addition, OPM has developed a guide called Career Patterns that is intended to help agencies recruit a diverse, multigenerational workforce and has posted the guide on its Web page. This guide presents career pattern scenarios that characterize 10 segments of the general labor market according to career-related factors, such as commitment to a mission and experience. The guide lists characteristics of the work environment that some cohorts may find particularly attractive and related human capital policies that agencies could use to recruit and retain potential employees.
For example, according to the guide’s “Retiree Scenario,” this cohort finds flexible work schedules, camaraderie, and work aligned with their interests very attractive. Consequently, to recruit and retain this segment, the guide advises agencies to offer part-time work, flexible work scheduling, and telework, and to provide opportunities for mentoring and meaningful assignments. Officials from two of our three case study agencies stated they found information in Career Patterns useful and inserted language from it in their job announcements, but officials in the other agency said they did not find it especially helpful. A senior human capital official in that agency reported to us that Career Patterns did not provide the type of information that was needed to develop new strategies in hiring a multigenerational workforce. In developing and disseminating guidance, OPM officials work with the Chief Human Capital Officers Council (the Council), a group of chief policy advisors on human capital issues representing each of the 24 CFO agencies. OPM officials told us the need for guidance often evolves from requests for information made by the Council and OPM’s agency liaisons. For example, inquiries from the Council about how to request a waiver to rehire annuitants without reducing their salaries led OPM officials to develop a template for agencies to use in submitting these requests. OPM relies upon the Council to communicate OPM policy and other human capital information throughout their agencies. OPM officials see their relationship with the Council and the agencies they represent as a partnership and believe that they have a shared responsibility to ensure that the latest guidance and promising practices are disseminated throughout each agency. To help agencies implement its guidance, OPM has developed several support tools and has instituted training programs. For example, in fall 2005, OPM made a decision-support tool available online to assist agencies in assessing which hiring flexibilities would best meet their needs. Known as the Hiring Flexibilities Resource Center, this tool provides in-depth information on a variety of flexibilities. With respect to training, OPM, in coordination with the Council, conducts a Council Academy—a forum for council members to discuss federal human capital issues. This academy meets several times each year to address topics generated by the Chief Human Capital Officers (CHCO) and their assistants. OPM also provides briefings and policy forums as well as information on its Web site about a range of human capital issues, including the use of flexibilities. By reducing the burden associated with the federal hiring process and by proposing legislation to make it easier to rehire annuitants and to allow certain employees to work part-time at the end of their careers, OPM has taken steps that would address problems in three areas that have caused difficulties for older workers. Frustration with the federal hiring practice has been well documented and spans all age groups, including older workers. The Partnership for Public Service reported that 57 percent of the older workers it surveyed reported that applying for a federal job is fairly or very difficult. The report noted that the federal job application process is bureaucratic and confusing, with federal job announcements that often run 10, 20, or more pages and require applicants to submit college transcripts in very short periods of time. 
Similarly, based on a survey of recently hired upper-level federal employees, the U.S. Merit Systems Protection Board found that 39 percent of these new hires said they did not apply for other federal jobs that they were interested in because of burdens and complexities associated with the hiring process. The issues cited included having to rewrite or reformat their resumes or the descriptions of their knowledge, skills, and abilities, and having to respond to lengthy questionnaires. Results of the survey also indicated that the process was very lengthy, with 75 percent of new hires reporting it took longer to be hired for their present civil service position than for their previous position. In 2008, OPM began to implement its End-to-End Hiring Roadmap Initiative, which will re-engineer the federal hiring process that has frustrated job applicants. As part of this initiative, OPM created a streamlined job announcement template for governmentwide, entry-level accounting and secretarial positions and is in the process of creating additional templates for other positions. The template will provide agencies with standardized language and formats to guide the development of announcements while allowing opportunities for customization. The new templates will reduce the complexity and length of traditional announcements for certain occupational communities by eliminating many requirements that called for information beyond that which is usually included in a resume. This initiative also includes developing a process that ensures job announcements and instructions are clear and understandable, notifies applicants that their application has been received, and updates applicants on the status of their application as significant decisions are reached. Other parts of the initiative address integrating human capital activities such as workforce planning, recruiting, hiring processes, security processing, and orienting new employees into federal organizations. OPM is also involved in other projects that address impediments in the federal hiring process. For example, an OPM team is working with the Partnership for Public Service on a project called the Extreme Hiring Makeover. This project has united experts from the private and public sectors to work with Education, the National Nuclear Security Administration, and the Centers for Medicare and Medicaid Services. These three agencies agreed to rely on private sector and public sector firms to diagnose problems in recruiting and hiring processes and to implement solutions. With regard to rehiring annuitants, OPM submitted a legislative proposal that would allow the heads of all federal agencies to rehire retired federal employees on a temporary basis without reducing their salaries or annuities and without obtaining prior OPM approval. To advance this purpose, bills were introduced in Congress in 2007, but stalled before final passage. Like the legislative proposal, the bills limit the amount of time that the waiver may cover to 520 hours of service performed during the period ending 6 months after the date on which the annuity begins; 1,040 hours of service performed during a 12-month period; or 6,420 hours of service performed during the lifetime of the annuitant. It is unclear whether this proposal will be reintroduced in the new Congress. While the potential cost of the proposal has been debated, neither OPM nor the Congressional Budget Office has estimated its cost. 
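To make the interaction of these three hour caps concrete, the sketch below checks a rehired annuitant's logged hours against the limits described in the bills. It is an illustration only, not a provision of the bills or an OPM tool: the function name, the data layout, and the simplified handling of the 6-month and rolling 12-month windows are assumptions introduced here.

```python
from datetime import date, timedelta

# Hour caps described in the 2007 bills (illustrative constants).
CAP_FIRST_6_MONTHS = 520   # hours during the period ending 6 months after the annuity begins
CAP_ANY_12_MONTHS = 1_040  # hours during a 12-month period
CAP_LIFETIME = 6_420       # hours over the lifetime of the annuitant

def within_waiver_limits(annuity_start: date, work_log: dict[date, float]) -> bool:
    """Return True if the logged hours stay within all three caps.

    work_log maps a work date to hours worked that day. The 6-month and
    rolling 12-month checks below are simplifications: the first uses a
    182-day window, and the second tests every 365-day window that starts
    on a logged work date.
    """
    if sum(work_log.values()) > CAP_LIFETIME:
        return False

    first_window_end = annuity_start + timedelta(days=182)
    if sum(h for d, h in work_log.items() if d <= first_window_end) > CAP_FIRST_6_MONTHS:
        return False

    for start in work_log:
        window_end = start + timedelta(days=365)
        if sum(h for d, h in work_log.items() if start <= d < window_end) > CAP_ANY_12_MONTHS:
            return False
    return True

# Example: 25 hours a week beginning shortly after the annuity starts
# exceeds the 520-hour cap within the first 6 months.
log = {date(2008, 1, 7) + timedelta(weeks=w): 25.0 for w in range(30)}
print(within_waiver_limits(date(2008, 1, 1), log))  # False
```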
Officials in several agencies have indicated that bringing retirees back on a part-time basis to fill certain positions is less costly than hiring new employees, largely because agencies do not need to cover retirees’ benefits costs. Also, these officials noted that rehired annuitants can “hit the ground running,” without orientation or training. Despite these potential savings, other experts believe that the additional costs associated with the higher salaries earned by retirees, compared to those typically earned by newer workers, might outweigh the benefits. These experts also see training and associated activities as investments that will help agencies address future, as well as present, workforce needs. OPM has also taken steps that could make it more attractive for certain federal workers to work part-time at the end of their careers—an option of particular importance for workers interested in a phased retirement. While all federal employees experience reduced annuities if they choose to work part-time—an equitable outcome because they work fewer hours over the course of their career—some workers are disproportionately penalized. For those individuals who have full-time federal service prior to April 6, 1986, and who work part-time at the end of their careers, the annuity calculation does not give full credit to the pre-1986 service. OPM’s proposal would address this inequity in the way federal annuities are calculated by fully crediting work performed on a full-time basis before 1986. For the past several sessions of Congress, bills have been introduced to enact this change, but none have passed. Although OPM has taken steps to address areas of concern, it could do more to disseminate information across the federal government on agency-developed promising practices to recruit and retain older workers to meet workforce needs. We have identified several agencies that have developed their own promising practices, and officials in these agencies believe others could effectively build upon these practices if knowledge of them were more widely available. According to OPM, this type of information sharing is a joint responsibility between the agencies and OPM, and officials see the CHCO Council as the primary means for such communication. However, to date, this information has not been made widely available through the CHCO Council. And, while OPM has other methods available—such as its human capital and electronic government practices Web sites—that could be used to efficiently package and broadly disseminate this information to a much larger and diversified audience, it currently has no plans to do so. Today’s workers are better educated, healthier, and living longer than workers of previous generations, and many look forward to working beyond their normal retirement age in positions that they find personally meaningful. Clearly, the federal government enjoys the benefits of a workforce dedicated to public service and provides workers with opportunities for meaningful work—the ability of the government to retain workers well past their retirement eligibility speaks to this fact. The current economic crisis may cause even more federal workers to stay in the workforce in the near term and forestall the looming retirement wave. But, at some point, these workers will retire, and focusing on the future, the federal government may need to do more to ensure that when the retirement wave does occur, it is prepared to tap the talents of the older workers who have the skills it needs. 
At least three federal agencies have developed their own practices that show promise in recruiting and retaining talented older workers who have needed and specialized skills. Although other agencies might benefit from this information, little attention has been paid to sharing it with them. While OPM officials see this kind of information exchange as a shared responsibility between OPM and the agencies, OPM, as the government’s central personnel agency, is both authorized and best positioned to take on this responsibility. To better assist agencies in attracting and retaining a highly skilled workforce, we recommend that the Director of OPM develop a systematic approach, which may include communicating through the CHCO Council, to share information broadly across the federal government about agency-developed promising practices in recruitment and retention of older, experienced workers to meet their workforce needs. We provided a draft of this report to HUD, OPM, SSA, and USAID for their review and comment. OPM provided written comments, which are reproduced in appendix III. In addition, OPM, SSA, and USAID provided technical comments, which we incorporated where appropriate. In its response, OPM wrote that the agency already has tools available on its Web site to assist federal agencies in attracting, recruiting, and retaining talented workers, including older workers. Our draft report cited these efforts, but we also noted that OPM’s Web site does not discuss the promising practices that have been developed by individual federal agencies and, as a consequence, this information is not readily available governmentwide. We continue to believe that the widespread dissemination of agency-developed promising practices will help federal agencies build upon the experiences of others in developing strategies to meet workforce challenges, and therefore have kept the recommendation. We received e-mails from HUD, SSA, and USAID. In responding to our report, both HUD and SSA agreed that disseminating this information would be helpful. SSA further suggested that such sharing of promising practices be incorporated throughout OPM’s workforce planning support rather than isolated as a special initiative. USAID noted that it supports OPM’s legislative proposal to make the process easier for rehiring Civil Service annuitants. USAID views rehiring annuitants as more cost-effective than using contract mechanisms to re-employ retirees on a part-time basis because agencies would avoid the additional overhead charges levied by contract organizations. In addition, USAID supports the proposal because it would better align the rules for civil service retirees with those of the foreign service. We are sending copies of this report to the Secretary of HUD, the Acting Director of OPM, the Commissioner of SSA, the Director of USAID, relevant congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. A list of related GAO products is included at the end of this report. If you or your staff have any questions about this report, please contact Barbara Bovbjerg at (202) 512-7215 or bovbjergb@gao.gov or Robert Goldenkoff at (202) 512-6806 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix IV. 
Our objectives were to describe the (1) age and retirement eligibility trends of the current federal workforce and the extent to which agencies hire and retain older workers; (2) workforce challenges that federal agencies face and the strategies they use to recruit and retain older workers to help meet these challenges; and (3) actions the Office of Personnel Management (OPM), as the federal government’s human capital manager, has taken to help agencies hire and retain an experienced, skilled workforce. To describe demographic trends relating to the retirement eligibility and aging of the federal workforce, we analyzed information on the 24 Chief Financial Officer (CFO) agencies from OPM’s human resource reporting system, the Central Personnel Data File (CPDF), for fiscal year 2007. We analyzed data on the age, retirement eligibility, occupations, projected retirement rates, and other characteristics of the career federal workforce. Our analyses included the following variables: agency, occupation, date of birth, service computation date, pay plan/grade, and supervisory status. Using the CPDF information, we analyzed the age distribution of career federal employees at CFO agencies by age groupings (under 40, 40 to 54, and 55 and over). We also analyzed the percentage of career federal employees hired as of the end of fiscal year 2007 who would be eligible to retire from fiscal years 2008 to 2012, and the percentage of workers eligible to retire in occupations in which the retirement rates exceeded the governmentwide average. As a proxy for those occupations that may be at risk due to high retirement eligibility rates, we selected occupations with 500 or more employees as of the end of fiscal year 2007 whose retirement eligibility rates exceeded the governmentwide rate of 33 percent by 50 percent or more. We also used CPDF data to determine the extent to which agencies are using specific strategies to hire and retain older workers. Based on previous work, we have determined that the CPDF is sufficiently reliable for the informational purposes of this report. For this report, we defined older workers as those who are aged 55 and older. To estimate the number of employees eligible to retire and the number who actually retired, we determined eligibility rates for fiscal years 1997 through 2007 by applying retirement plan eligibility rules to data in the CPDF using employees’ age at hire, birth date, and retirement plan. We determined past retirement rates by analyzing separation data from the CPDF for fiscal years 1998 through 2007. To report on how agencies make use of governmentwide flexibilities, we conducted in-depth reviews of three agencies—the Department of Housing and Urban Development (HUD), the Social Security Administration (SSA), and the United States Agency for International Development (USAID). We chose these agencies because they are among the 24 CFO agencies whose proportion of workers eligible to retire by 2012 exceeds the governmentwide average of 33 percent. These agencies also represent a range of agency sizes. In addition, we chose to review SSA not only because it will be facing a large number of possible retirements but also because, at the same time, it will be facing an increased demand for its services. We also reviewed studies and conducted interviews with experts in the area of retirement, including members of university-based retirement research centers, AARP, Partnership for Public Service, the U.S. 
Merit Systems Protection Board, IBM International, and various agency officials, to identify notable approaches other agencies have developed to hire and retain older workers. Through this work, we identified several agencies that have developed their own innovative approaches and met with officials from these agencies. To report on OPM’s activities and challenges, we augmented information obtained from our reviews of three agencies by interviewing various officials at OPM and reviewing relevant documents. To address this objective, we interviewed officials at OPM as well as officials at other selected federal agencies and private sector experts. Also, we reviewed previous GAO work relating to older workers and federal human capital strategies. Our work at OPM included interviews with key officials and reviews of OPM guidance, training materials, legislative proposals, and other documents relevant to hiring and retaining older workers, as well as documents on federal human capital flexibilities. We conducted our work from April 2008 to January 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence we obtained provides a reasonable basis for our findings and conclusions. In addition to the contacts listed above, Dianne M. Blank (Assistant Director) and Kathleen D. White (Analyst-in-Charge) supervised the development of this report. Cheri L. Harrington and Christopher T. Langford made significant contributions to all aspects of this report. In addition, Belva M. Martin, Clifton G. Douglas, Mary Y. Martin, Nicholas C. Alexander, and Isabella P. Johnson contributed to significant portions of the report. Jessica A. Botsford provided legal support; Gregory H. Wilmoth assisted with design, methodology, and data analysis; and Susannah L. Compton provided writing assistance. Karen A. Brown, Lise Levie, and Ronni Schwartz verified the information in this report. Social Security Administration: Service Delivery Plan Needed to Address Baby Boom Retirement Challenges. GAO-09-24. Washington, D.C.: January 9, 2009. Human Capital: Workforce Diversity Governmentwide and at the Department of Homeland Security. GAO-08-815T. Washington, D.C.: May 21, 2008. Older Workers: Federal Agencies Face Challenges, but Have Opportunities to Hire and Retain Experienced Employees. GAO-08-630T. Washington, D.C.: April 30, 2008. Office of Personnel Management: Opportunities Exist to Build on Recent Progress in Internal Human Capital Capacity. GAO-08-11. Washington, D.C.: October 31, 2007. Older Workers: Some Best Practices and Strategies for Engaging and Retaining Older Workers. GAO-07-433T. Washington, D.C.: February 28, 2007. Highlights of a GAO Forum: Engaging and Retaining Older Workers. GAO-07-438SP. Washington, D.C.: February 28, 2007. Office of Personnel Management: Key Lessons Learned to Date for Strengthening Capacity to Lead and Implement Human Capital Reforms. GAO-07-90. Washington, D.C.: January 19, 2007. Office of Personnel Management: OPM Is Taking Steps to Strengthen Its Internal Capacity for Leading Human Capital Reform. GAO-06-861T. Washington, D.C.: June 27, 2006. Redefining Retirement: Options for Older Americans. GAO-05-620T. Washington, D.C.: April 27, 2005. Human Capital: Opportunities to Improve Executive Agencies’ Hiring Processes. GAO-03-450. 
Washington, D.C.: May 30, 2003. Human Capital: OPM Can Better Assist Agencies in Using Personnel Flexibilities. GAO-03-428. Washington, D.C.: May 9, 2003. Federal Employee Retirements: Expected Increase Over the Next 5 Years Illustrates Need for Workforce Planning. GAO-01-509. Washington, D.C.: April 27, 2001. Retirement Benefits: Modification of Civil Service Retirement Benefits for Part-Time Work. GAO/PEMD-86-2. Washington, D.C.: January 9, 1986.
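As a rough illustration of the retirement eligibility analysis described in the scope and methodology above, the sketch below applies a simplified age-and-service rule to a small, made-up data set and flags occupations whose eligibility rates exceed the governmentwide rate by 50 percent or more. The column names, the records, and the eligibility thresholds are assumptions made for illustration; they are not the CPDF schema or the retirement plan rules GAO applied.

```python
import pandas as pd

# Hypothetical extract with CPDF-like fields (column names are assumptions).
workforce = pd.DataFrame({
    "agency":     ["HUD", "HUD", "SSA", "SSA", "USAID"],
    "occupation": ["0343", "1101", "0105", "0962", "0343"],
    "birth_year": [1948, 1975, 1950, 1969, 1946],
    "hire_year":  [1978, 2001, 1974, 1999, 1970],
})

AS_OF_YEAR = 2012  # eligibility projected through fiscal year 2012
age = AS_OF_YEAR - workforce["birth_year"]
service = AS_OF_YEAR - workforce["hire_year"]

# Simplified age-and-service thresholds; the actual CSRS and FERS rules
# GAO applied are more detailed.
workforce["eligible_by_2012"] = (
    ((age >= 55) & (service >= 30))
    | ((age >= 60) & (service >= 20))
    | ((age >= 62) & (service >= 5))
)

# Governmentwide eligibility rate and rates by occupation.
gov_rate = workforce["eligible_by_2012"].mean()
occ_rates = workforce.groupby("occupation")["eligible_by_2012"].agg(["mean", "size"])

# Flag occupations whose eligibility rate exceeds the governmentwide rate
# by 50 percent or more (the toy data set omits the 500-employee floor).
at_risk = occ_rates[occ_rates["mean"] >= 1.5 * gov_rate]
print(f"Governmentwide eligibility rate: {gov_rate:.0%}")
print(at_risk)
```

On an actual CPDF extract, the same grouping step would also apply the report's 500-employee floor before an occupation is flagged.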
The federal workforce, like the nation's workforce as a whole, is aging, and increasingly large percentages are becoming eligible to retire. Eventually baby boomers will leave the workforce, and when they do, they will leave behind gaps in leadership, skills, and knowledge due to the slower-growing pool of younger workers. GAO and others have emphasized the need for federal agencies to hire and retain older workers to help address these shortages. Building upon earlier testimony, GAO was asked to examine (1) age and retirement eligibility trends of the current federal workforce and the extent to which agencies hire and retain older workers; (2) workforce challenges selected agencies face and the strategies they use to hire and retain older workers; and (3) actions taken by the Office of Personnel Management (OPM) to help agencies hire and retain experienced workers. To address these questions, GAO analyzed data from OPM's Central Personnel Data File, interviewed officials at three agencies with high proportions of workers eligible to retire, and identified agencies' promising practices to hire and retain older workers. The proportion of federal employees eligible to retire is growing. While this proportion varies across agencies, in four agencies--the Agency for International Development (USAID), the Department of Housing and Urban Development (HUD), the Small Business Administration, and the Department of Transportation--46 percent of the workforce will be eligible to retire by 2012, well above the governmentwide average of 33 percent. While these eligibility rates suggest that many will retire, the federal government has historically enjoyed relatively high retention rates, with 40 percent or more of federal employees remaining in the workforce for at least 5 years after becoming eligible. Beyond retaining older workers, in fiscal year 2007, federal agencies hired almost 14,000 new workers who were 55 years of age or older and brought back about 5,400 federal retirees to address workforce needs. The increasing numbers of retirement-eligible federal workers present challenges and opportunities. The three agencies we reviewed (HUD, SSA, and USAID) share common challenges. All have large proportions of employees nearing retirement, and according to officials, due to past hiring freezes all have relatively few midlevel staff to help pass down knowledge and skills to less experienced employees. Officials from all three agencies also told us that they have difficulty attracting qualified staff with specialized skills. To address these challenges, the three agencies rely on older workers in different ways. USAID brings back its knowledgeable and skilled retirees as contractors to fill short-term job assignments and to help train and develop the agency's growing number of newly hired staff. SSA uses complex statistical models to project potential retirements in mission-critical occupations and uses these data to develop recruitment efforts targeted at a broad pool of candidates, including older workers. While all three agencies rely on older workers to pass down knowledge and skills to junior staff, HUD officials told us this is the primary way they involve older workers, due in part to the agency's focus on recruiting entry-level staff. In addition, some federal agencies have developed practices that other agencies might find useful in tapping older workers to meet short-term needs. 
For example, the Department of State has developed databases to match interested retirees with short-term assignments requiring particular skills. To help agencies hire and retain an experienced workforce, OPM provides guidance, including support tools and training, and has taken steps to address areas of concern to older workers. For example, OPM has initiated actions to streamline the federal application process and to eliminate barriers that deter some federal retirees from returning to federal service or from working part-time at the end of their careers. However, although some federal agencies have developed strategies that could be used effectively by other agencies to hire and retain experienced workers to meet workforce needs, this information is not widely available. And, while OPM has other methods available--such as its human capital and electronic government practices Web sites--that could be used to efficiently package and broadly disseminate this information to a much larger audience, it currently has no plans to do so.
Charter schools are a new and increasingly popular entrant in the debate on restructuring and improving U.S. public education. The model offered by charter schools differs substantially from the traditional model for governing and funding public schools. Charter schools operate under a charter or contract that specifies the terms under which the schools may operate and the student outcomes they are expected to achieve. Charter schools may be exempt from most local and state rules, hire their own staff, determine their own curriculum, receive funding directly from the state, and control their own budgets. In contrast, traditional public schools are subject to substantial external controls, such as local, state, and federal requirements, which limit their authority over curriculum and personnel decisions. Federal, state, and local funding for traditional public schools usually flows through the district, and individual schools often have little control over their budgets. Between 1991 and 1994, 11 states enacted legislation authorizing charter schools to achieve a variety of purposes, including encouraging innovative teaching, promoting performance-based accountability, expanding choices in the types of public schools available, creating new professional opportunities for teachers, improving student learning, and promoting community involvement. The federal government has also acted on behalf of charter schools. Two major pieces of federal education legislation passed in 1994 include provisions on charter schools. The Improving America’s Schools Act, which reauthorized and amended the ESEA of 1965, includes a new federal grant program to support the design and implementation of charter schools (see app. II for a description of this program). The Improving America’s Schools Act also specifies the conversion of a school to charter school status as a possible corrective action that a school district can require of a school that has been identified for school improvement. The Goals 2000: Educate America Act allows states to use federal funds provided under the act to promote charter schools. As of January 1995, nine states had approved 134 charter schools with diverse instructional and operating characteristics. Another two states—Georgia and Kansas—had adopted laws authorizing charter schools but had not yet approved any. (See table 1.) As many as 14 more states may consider legislation in 1995. Approved charter schools include 85 new schools and 49 conversions of existing schools (see fig. 1), with some states only allowing such conversions (see table 2). Charter schools’ diverse instructional programs include approaches such as instructing children of multiple ages in the same classroom, known as multiage grouping; teaching subjects in the context of a certain theme, known as thematic instruction; and using the Internet as an instructional tool. Some charter schools specialize in certain subject areas, such as the arts, sciences, or technology; others emphasize work experience through internships or apprenticeships. Some charter schools target specific student populations, including students at risk of school failure, dropouts, limited English proficient students, noncollege-bound students, or home-schooled students. Under the state laws in California, Colorado, Kansas, and Wisconsin, charter schools that target students at risk of school failure receive preference for approval. As some advocates envision them, charter schools would operate with far greater autonomy than traditional schools. 
They would operate independently from the school districts where they are located and unconstrained by government regulations; they would control their own budgets, personnel, curriculum, and instructional approaches. While this is the case for charter schools in some states, other states have laws that authorize charter schools with more limited autonomy. State laws influence charter schools’ autonomy by how they provide for their (1) legal status, (2) approval, (3) funding, and (4) exemption from rules. Charter schools under four states’ laws are legally independent from the school districts where they are located; that is, the charter schools are legally responsible for their operations (see table 3). Charter schools in Minnesota, for example, operate as nonprofit corporations or cooperatives. In five states, charter schools must be part of a school district that is legally responsible for the school’s operations (see table 3). In one state, California, a charter school’s legal status is determined through negotiation with the local school board that approves its charter. Some charter schools in California have organized as legally independent nonprofit corporations; others are legally part of a district; and some schools’ legal status remains to be determined. In one state, Hawaii, the legal status of charter schools remains uncertain and awaits a decision by the State Attorney General. The legal status of a charter school may influence its authority over budgeting and personnel decisions. Legally independent charter schools generally control their own budgets and make their own hiring and firing decisions. Charter schools that remain legally part of a school district may have little control over budgeting or personnel, although this varies. All charter schools must be approved by some public institution. Most have been approved by a school district or state board of education, although some states involve neither. State laws vary considerably in the options they give to charter schools seeking approval. State laws also vary in allowing applicants to appeal a decision to reject a charter school application. (See table 4.) Required school district approval could result in less autonomous charter schools if districts use their leverage with the schools to maintain more traditional relationships with them. The availability of multiple approval options could result in more autonomous charter schools because applicants could seek the least restrictive situation. As a condition for approving a charter, for example, one district required charter schools’ terms of employment—for teacher tenure, salary, and schedule for advancement—to be the same as those for other schools in the district. Evidence from California indicates that districts were least supportive of charter schools seeking the most independence. Charter schools’ funding arrangements vary in (1) the extent to which the funding amounts are negotiable and (2) how funds flow to the schools. Charter schools’ autonomy could be limited when funding amounts are subject to negotiation with the school district that approves the charter. Districts may seek to retain control over some funds as a condition for approval. In six states, the amount of state or local funding for charter schools is subject to negotiation with the school districts that approve the charters. In four states, funding for charter schools is set by the state, and the amount is not subject to negotiation with school districts. 
In one state, Arizona, funding is subject to negotiation when charter schools are approved by school districts, but not when they are approved by the state. In states in which funding is not subject to negotiation, funds flow from the state directly to the charter school, with the exception of Massachusetts and Michigan. In states in which funding is subject to negotiation, funds flow from the state to the district to the charter school. (See table 5.) Charter schools’ autonomy from state and district rules varies considerably across states. Some state laws exempt charter schools from most state education rules; that is, charter schools receive a blanket exemption. Other states require charter schools to request exemption from specific rules (rule-by-rule exemption), requests that are subject to district or state approval or both. (See table 6.) Legally independent charter schools are not subject to district rules unless agreed to as part of negotiations leading to charter approval. In contrast, charter schools that are legally part of a district are subject to district rules unless waivers are negotiated. Some districts have denied waivers from local rules requested by charter schools. The extent to which charter schools can be held accountable depends on how the schools assess student performance and report results to the public institutions responsible for their oversight and contract renewal. The schools’ charters indicate plans to use a wide variety of assessment methods to measure a wide variety of student outcomes. Some of these assessments and outcomes were subject to negotiation with the charter-granting institution; others are mandated under law in some states (see table 7). Some charter schools state their plans for assessment in great detail, have their assessment systems in place, and have begun collecting data. Others—including some schools already open—state their plans in more general terms and are still developing their assessment systems. Student assessments used by charter schools include portfolios, exhibitions, demonstrations of students’ work, and often standardized achievement tests. Student outcomes include objective outcomes—such as specific achievement levels or gains on standardized tests, attendance and graduation rates—and subjective outcomes, such as becoming an independent learner, understanding how science is applied to the real world, participating in community service, and understanding the responsibilities of citizenship. Because charter schools’ efforts to assess and report student performance are fairly recent, several important questions about accountability are unanswered. First, are charter schools collecting adequate baseline data to judge changes in student performance? Accurate judgments may be difficult in schools that opened before their assessment methods were developed. Second, will charter schools report data by race, sex, or socioeconomic status so that the performance of specific student groups can be assessed? No state laws require charter schools to do so; some include no reporting requirements; and most leave the type of reporting to local discretion (see table 7). Third, what are the implications of requiring charter schools to meet state performance standards and to use standardized, norm-referenced tests? Will it discourage charter schools with specialized purposes or that target low-achieving student populations? Will it encourage charter schools to have more traditional instructional programs? 
Charter schools pose new challenges for federal programs in allocating funds, providing services, and assigning legal responsibility. These challenges stem from the lack of connection of some charter schools to school districts—the usual local point of federal program administration. School districts are considered LEAs for the purposes of federal program administration; they receive allocations of federal funds from their states and are held legally responsible for meeting program requirements. However, an important issue is whether some charter schools—those with legal independence—can be considered LEAs. While legally independent charter schools appear to meet the definition of an LEA, states are uncertain about this and have approached the issue differently. Title I and special education programs illustrate challenges posed by charter schools to federal education program administration. As an LEA, a charter school would be eligible to receive Title I funds directly from its state education agency (SEA) and held legally responsible for its Title I program. As a school considered part of a traditional school district, a charter school would be eligible for Title I funds just as any other school in a district and would not be eligible to receive funds directly from its SEA. Current law provides SEAs flexibility in allocating grants to LEAs that could apply to charter schools considered LEAs. However, SEAs using census data to calculate LEA allocations face a complication because census data do not exist for charter schools, and SEAs must use the same measure of low income throughout the state. It is uncertain, for example, whether an SEA could use other data adjusted to be equivalent to census data for this purpose. An SEA might be able to apply for a waiver under the new charter schools grant program to permit use of such adjusted data; however, language in different waiver provisions makes this unclear. In commenting on a draft of this report, the Department of Education stated that it intends to use the broader authority to grant waivers under the charter schools provision to promote flexibility in charter schools (see app. III). Of those states that authorized legally independent charter schools, Arizona and Massachusetts have not yet decided on how to treat them concerning Title I. California, Minnesota, and Michigan have decided on contrasting approaches. The California Department of Education has not decided whether its legally independent charter schools are LEAs for Title I purposes. To avoid creating a new funding structure, it treats all charter schools as regular schools within a district for Title I funding. If a charter school is eligible for Title I funding, then the district must determine the charter school’s share the same way it does for other eligible schools. While state officials in Minnesota consider charter schools LEAs, the state Title I office has delegated responsibility for Title I to districts and given them two options for serving charter schools. Under the first option, the district employs the Title I staff and provides services at the charter school. Under the second option, the district allocates part of its Title I funds to the charter school, and the charter school employs the Title I staff. Under either option, the state considers the district legally responsible for the charter school’s Title I program. 
The state adopted this arrangement because it lacked census data on charter schools but was required to use census data as part of its statewide distribution approach to allocating Title I funds to LEAs. The state Title I office in Michigan considers charter schools LEAs and plans to allocate Title I funds directly to them; it considers the schools legally responsible for administering their own Title I programs. To ensure that charter schools get a fair share of Title I funding, the state Title I office devised a way to divide a traditional LEA’s Title I allocation with a charter school within its boundaries. As of September 1994, Michigan had used this method in one charter school, the charter school at Wayne State University in Detroit. The state Title I office, with the consent of the district and the charter school, allocated part of Detroit’s Title I allocation to the charter school on the basis of the number of students eligible for free or reduced-price lunch at the school. The state expects to use the same method for other charter schools, although this may be more difficult when students from more than one district attend a charter school, the state coordinator said. Whether charter schools are LEAs or part of a traditional school district has implications for (1) which institution—the school or the district—is legally responsible for meeting federal special education requirements and (2) how states and districts fund special education services. Under the IDEA, LEAs must provide a “free appropriate public education” to disabled children. Regulations implementing the act specify requirements that LEAs must follow in identifying children with disabilities and selecting their special education services. While the IDEA provides some federal funding for special education, most funding comes from state and local sources. Charter schools pose a particular challenge to funding special education when local revenues are used for this purpose. Since charter schools do not levy taxes, another institution must provide the revenue. Minnesota, which treats its charter schools as individual LEAs, resolved issues of legal responsibility and funding after some uncertainty and may serve as a useful example for other states. The SEA in Minnesota decided that legal responsibility for meeting federal special education requirements for children in charter schools depends on whether the district or the parent places the child in the charter school. If the district where the student lives places the child in a charter school, then the district remains legally responsible. If the parent places the student in a charter school, then this is “akin to the child moving to another district,” and the charter school becomes legally responsible. These decisions were established in rulings on complaint investigations. In one case, the complainant alleged that the district where the student lived failed to implement the student’s individualized education plan (IEP) at the Metro School for the Deaf. The Minnesota Department of Education ruled that the district was in violation and was responsible for ensuring service provision because it had placed the student in the charter school. In another case, the complainant also alleged that the district had failed to implement the student’s IEP at a charter school, specifically, that the student had received no speech services during the school year. 
The Minnesota Department of Education ruled that, because the student was placed at the Cedar Riverside Charter School by parental choice, the district of residence was not responsible for providing the student a free appropriate public education and that the charter school was now responsible for doing so. In Minnesota, the SEA allocates state funds directly to charter schools as a partial reimbursement for special education costs. Charter schools, in turn, bill unreimbursed costs to the districts where the students live. The districts are expected to use revenues from property taxes or federal special education funds to fund the unreimbursed amount. In the future, the SEA may allocate federal special education funds directly to charter schools. Officials in several districts said they were unhappy with the state’s expectation that they use local property taxes for unreimbursed costs for charter schools’ special education programs because the charter schools are legally independent. Charter schools offer a new model for autonomous public schools that provides opportunities for diverse and innovative approaches to education. A great deal, however, remains to be learned about these schools, for example, whether limits on their autonomy will stifle innovation. Furthermore, this autonomy poses challenges for holding charter schools accountable for student performance and administering federal programs. Accountability for student performance is a critical aspect of the charter schools model, given the schools’ autonomy from external controls that govern traditional public schools. Whether charter schools can be held accountable for student performance depends in part on how well student performance is assessed and reported. Important issues for future evaluations of these schools include whether charter schools (1) collect adequate baseline data to judge changes in student performance and (2) report data by race, sex, or socioeconomic status to assess the performance of specific student groups. The challenges charter schools pose for federal program administration concern their status as single schools operating as LEAs. Current law and regulations did not anticipate such an arrangement. Unless the Department of Education clarifies (1) whether charter schools may be considered LEAs and (2) how these schools can be treated for purposes of administering Title I and special education programs, uncertainty will persist that could impede charter schools’ implementation. We recommend that the Secretary of Education determine whether states may consider charter schools LEAs for federal program administration. In addition, if charter schools may be LEAs, the Secretary should provide guidance that specifies how states may allocate Title I funds to charter schools, particularly in states that use census data to count low-income children, and how states may determine charter schools’ legal responsibility for providing special education services. The Department of Education provided written comments on a draft of this report (see app. III). The Department said our report raised thoughtful issues about the challenges facing charter schools and presented an informative survey of their development nationally. The Department also commented on our recommendations to the Secretary and questions we raised about the applicability of different waiver provisions. 
In its comments on our recommendations, the Department stated that it (1) encourages states to develop legal arrangements that best support state and local strategies and (2) intends to work with states on a case-by-case basis to address issues raised in our report concerning federal program administration in charter schools. We support the Department’s intention to work with states to resolve these issues. However, the Department’s response does not fully clarify whether, and under what conditions, charter schools can be considered LEAs and we believe the Department should do so. In the draft reviewed by the Department, we also noted that the applicability of different waiver provisions in the Improving America’s Schools Act was uncertain in regard to charter schools. In its comments, the Department stated that it intends to use the broader authority to grant waivers under the charter schools provision of the act to promote flexibility in charter schools. We revised the report to incorporate the Department’s comments on this matter. We are sending copies of this report to congressional committees, the Secretary of Education, and other interested parties. Please call Richard Wenning, Evaluator-in-Charge, at (202) 512-7048, or Beatrice Birman, Assistant Director, at (202) 512-7008 if you or your staff have any questions about this report. Other staff who contributed to this report are named in appendix V. Vistas-Bear Valley Charter School P. O. Box 6057 Big Bear Lake, CA 92315 El Dorado Charter Community 6767 Green Valley Road Placerville, CA 95667 Early Intervention- Healthy Start Charter School Folsom Middle School 500 Blue Ravine Road Folsom, CA 95630 Grass Valley Alternative 10840 Gilmore Way Grass Valley, CA 95945 Accelerated School P. O. Box 341105 Los Angeles, CA 90034 Canyon School 421 Entrada Drive Santa Monica, CA 90402 Edutrain 1100 S. Grand Avenue Los Angeles, CA 90015 Fenton Avenue School 11828 Gain Street Lake View Terrace, CA 91342 Marquez School 16821 Marquez Avenue Pacific Palisades, CA 90272 The Open School 1034 Steams Drive Los Angeles, CA 90035 Palisades Elementary Charter School 800 Via De La Paz Pacific Palisades, CA 90272 Palisades High School 15777 Bowdoin Street Pacific Palisades, CA 90272 210 in charter school component 9-10 in charter school component (continued) Vaughn Next Century Learning Center 13330 Vaughn Street San Fernando, CA 91340 Westwood School Los Angeles Unified School District, CA 2050 Selby Avenue Los Angeles, CA 90025 Natomas Charter School 3700 Del Paso Road Sacramento, CA 95834 Jingletown Middle School 2506 Truman Avenue Oakland, CA 94605 Linscott Charter School 220 Elm Street Watsonville, CA 95076 Sonoma County Charter 1825 Willowside Road Santa Rosa, CA 95401 Pioneer Primary/Pioneer Middle 8810 14th Avenue Stanford, CA 93230 Schnell 2871 Schnell School Road Placerville, CA 95667 Ready Springs Home Study Ready Springs Union School District, CA 10862 Spenceville Road Penn Valley, CA 95946 The Eel River School P. O. Box 218 Covelo, CA 95428 (continued) July 1993 (Two charter schools housed together but working independently) 150 (90 in Homeschool and 60 in White Oak) Peabody Charter School 3018 Calle Noguera Santa Barbara, CA 93105 Santa Barbara Charter School 6100 Stow Canyon Road Goleta, CA 93117 (continued) Altimira P. O. Box 1546 Sonoma, CA 95476 Twin Ridges Alternative Charter School P. O. 
Box 529 North San Juan, CA 95960 Options for Youth 29 Foothill La Placenta, CA 91214 176 students in two centers (Victor Valley - 103 and Hesperia Unified District - 73) New school serving K-12 and adults. No adults presently enrolled. Lincoln High 1081 7th Street Lincoln, CA 95648 Sheridan Elementary 4730 H Street Sheridan, CA 95681 Mailing address: P.O. Box 268 Sheridan, CA 95681 Yucca Mesa P. O. Box 910 Yucca Valley, CA 92286 GAO was unable to get this information before publication. Planning to open in fall 1995 120 (expected) (Table notes on next page) GAO was unable to get this information before publication. Benjamin Franklin Classical 390 Oakland Parkway Franklin, MA 02038 270 (expected) Boston Renaissance 529 5th Avenue New York, NY 10017 700 (expected) Boston University 775 Commonwealth Avenue Boston, MA 02115 150 (expected) Cape Cod Lighthouse P. O. Box 968 South Orleans, MA 02662 100 (expected) City on a Hill Charter School 39 Jordan Road Brookline, MA 02146 60 (expected) Community Day 190 Hampshire Street Lawrence, MA 01840 140 (expected) Fenway II 250 Rutherford Avenue Charlestown, MA 02129 Francis W. Parker 234 Massachusetts Avenue Harvard, MA 01451 Lowell Charter School 529 5th Avenue New York, NY 10017 400 (expected) Lowell Middlesex Academy 33 Kearney Square Lowell, MA 01852 100 (expected) Neighborhood House 232 Centre Street Dorchester, MA 02124 45 (expected) South Shore 936 Nantasket Avenue Hull, MA 02045 60 (expected) (continued) Western Massachusetts Hilltown 3 Edward Street Haydenville, MA 01039 35 (expected) Worcester 529 5th Avenue New York, NY 10017 500 (expected) YouthBuild 173A Norfolk Avenue Roxbury, MA 02119 50 (expected) GAO was unable to get this information before publication. Toivola-Meadowlands 7705 Western Avenue P.O. Box 215 Meadowlands, MN 55765 City Academy St. Paul, MN School District 1109 Margaret Street St. Paul, MN 55106 New Heights Schools, Inc. 614 W. Mulberry Stillwater, MN 55082 (continued) Minnesota New Country School P. O. Box 423 Henderson, MN 56044 Parents Allied With Children and Teachers (PACT) 600 East Main Street Anoka, MN 55303 School site: 440 Pierce Street Anoka, MN GAO was unable to get this information before publication. The Improving America’s Schools Act, which reauthorized the Elementary and Secondary Education Act of 1965, includes a provision establishing a new federal grant program to support the design and implementation of charter schools. The text of this provision appears here. SEC. 10301. FINDINGS AND PURPOSE. 
(a) FINDINGS.--The Congress finds that (1) enhancement of parent and student choices among public schools can assist in promoting comprehensive educational reform and give more students the opportunity to learn to challenging State content standards and challenging State student performance standards, if sufficiently diverse and high-quality choices, and genuine opportunities to take advantage of such choices, are available to all students; (2) useful examples of such choices can come from States and communities that experiment with methods of offering teachers and other educators, parents, and other members of the public the opportunity to design and implement new public schools and to transform existing public schools; (3) charter schools are a mechanism for testing a variety of educational approaches and should, therefore, be exempted from restrictive rules and regulations if the leadership of such schools commits to attaining specific and ambitious educational results for educationally disadvantaged students consistent with challenging State content standards and challenging State student performance standards for all students; (4) charter schools, as such schools have been implemented in a few States, can embody the necessary mixture of enhanced choice, exemption from restrictive regulations, and a focus on learning gains; (5) charter schools, including charter schools that are schools-within-schools, can help reduce school size, which reduction can have a significant effect on student achievement; (6) the Federal Government should test, evaluate, and disseminate information on a variety of charter schools models in order to help demonstrate the benefits of this promising education reform; and (7) there is a strong documented need for cash flow assistance to charter schools that are starting up, because State and local operating revenue streams are not immediately available. (b) PURPOSE.--It is the purpose of this part to increase national understanding of the charter schools model by-- (1) providing financial assistance for the design and initial implementation of charter schools; and (2) evaluating the effects of such schools, including the effects on students, student achievement, staff, and parents. SEC. 10302. PROGRAM AUTHORIZED. (a) IN GENERAL.--The Secretary may award grants to State educational agencies having applications approved pursuant to section 10303 to enable such agencies to conduct a charter school grant program in accordance with this part. (b) SPECIAL RULE.--If a State educational agency elects not to participate in the program authorized by this part or does not have an application approved under section 10303, the Secretary may award a grant to an eligible applicant that serve such State and has an application approved pursuant to section 10303(c). (c) PROGRAM PERIODS.-- (1) GRANTS TO STATES.--Grants awarded to State educational agencies under this part shall be awarded for a period of not more than 3 years. (2) GRANTS TO ELIGIBLE APPLICANTS.--Grants awarded by the Secretary to eligible applicants or subgrants awarded by State educational agencies to eligible applicants under this part shall be awarded for a period of not more than 3 years, of which the eligible applicant may use-- (A) not more than 18 months for planning and (B) not more than 2 years for the initial implementation of a charter school. 
(d) LIMITATION.--The Secretary shall not award more than one grant and State educational agencies shall not award more than one subgrant under this part to support a particular charter school. SEC. 10304. ADMINISTRATION. (a) SELECTION CRITERIA FOR STATE EDUCATIONAL AGENCIES.--The Secretary shall award grants to State educational agencies under this part on the basis of the quality of the applications submitted under section 10303(b), after taking into consideration such factors as (1) the contribution that the charter schools grant program will make to assisting educationally disadvantaged and other students to achieving State content standards and State student performance standards and, in general, a State’s education improvement plan; (2) the degree of flexibility afforded by the State educational agency to charter schools under the State’s charter schools law; (3) the ambitiousness of the objectives for the State charter school grant program; (4) the quality of the strategy for assessing achievement of those objectives; and (5) the likelihood that the charter school grant program will meet those objectives and improve educational results for students. (b) SELECTION CRITERIA FOR ELIGIBLE APPLICANTS.--The Secretary shall award grants to eligible applicants under this part on the basis of the quality of the applications submitted under section 10303(c), after taking into consideration such factors as-- (1) the quality of the proposed curriculum and (2) the degree of flexibility afforded by the State educational agency and, if applicable, the local educational agency to the charter school; (3) the extent of community support for the application; (4) the ambitiousness of the objectives for the charter school; (5) the quality of the strategy for assessing achievement of those objectives; and (6) the likelihood that the charter school will meet those objectives and improve educational results for students. (c) PEER REVIEW.--The Secretary, and each State educational agency receiving a grant under this part, shall use a peer review process to review applications for assistance under this part. (d) DIVERSITY OF PROJECTS.--The Secretary and each State educational agency receiving a grant under this part, shall award subgrants under this part in a manner that, to the extent possible, ensures that such grants and subgrants-- (1) are distributed throughout different areas of the Nation and each State, including urban and rural areas; and (2) will assist charter schools representing a variety of educational approaches, such as approaches designed to reduce school size. (e) WAIVERS.--The Secretary may waive any statutory or regulatory requirement over which the Secretary exercises administrative authority except any such requirement relating to the elements of a charter school described in section 10306(1), if-- (1) the waiver is requested in an approved application under this part; and (2) the Secretary determines that granting such a waiver will promote the purpose of this part. (f) USE OF FUNDS.-- (1) STATE EDUCATIONAL AGENCIES.--Each State educational agency receiving a grant under this part shall use such grant funds to award subgrants to one or more eligible applicants in the State to enable such applicant to plan and implement a charter school in accordance with this part. (2) ELIGIBLE APPLICANTS.--Each eligible applicant receiving funds from the Secretary or a State educational agency shall use such funds to plan and implement a charter school in accordance with this part. 
(3) ALLOWABLE ACTIVITIES.--An eligible applicant receiving a grant or subgrant under this part may use the grant or subgrant funds only for-- (A) post-award planning and design of the educational program, which may include-- (i) refinement of the desired educational results and of the methods for measuring progress toward achieving those results; and (ii) professional development of teachers and other staff who will work in the charter school; and (B) initial implementation of the charter school, (i) informing the community about the school; (ii) acquiring necessary equipment and educational materials and supplies; (iii) acquiring or developing curriculum (iv) other initial operational costs that cannot be met from State or local sources. (4) ADMINISTRATIVE EXPENSES.--Each State educational agency receiving a grant pursuant to this part may reserve not more than 5 percent of such grant funds for administrative expenses associated with the charter school grant program assisted under this part. (5) REVOLVING LOAN FUNDS.--Each State educational agency receiving a grant pursuant to this part may reserve not more than 20 percent of the grant amount for the establishment of a revolving loan fund. Such fund may be used to make loans to eligible applicants that have received a subgrant under this part, under such terms as may be determined by the State educational agency, for the initial operation of the charter school grant program of such recipient until such time as the recipient begins receiving ongoing operational support from State or local financing sources. SEC. 10305. NATIONAL ACTIVITIES. The Secretary may reserve not more than ten percent of the funds available to carry out this part for any fiscal year for-- (1) peer review of applications under section 10304(c); (2) an evaluation of the impact of charter schools on student achievement, including those assisted under this part; and (3) other activities designed to enhance the success of the activities assisted under this part, such as-- (A) development and dissemination of model State charter school laws and model contracts or other means of authorizing and monitoring the performance of charter schools; and (B) collection and dissemination of information on successful charter schools. SEC. 10306. 
DEFINITIONS.

As used in this part:
(1) The term 'charter school' means a public school that--
(A) in accordance with an enabling State statute, is exempted from significant State or local rules that inhibit the flexible operation and management of public schools, but not from any rules relating to the other requirements of this paragraph;
(B) is created by a developer as a public school, or is adapted by a developer from an existing public school, and is operated under public supervision and direction;
(C) operates in pursuit of a specific set of educational objectives determined by the school's developer and agreed to by the authorized public chartering agency;
(D) provides a program of elementary or secondary education, or both;
(E) is nonsectarian in its programs, admissions policies, employment practices, and all other operations, and is not affiliated with a sectarian school or religious institution;
(F) does not charge tuition;
(G) complies with the Age Discrimination Act of 1975, title VI of the Civil Rights Act of 1964, title IX of the Education Amendments of 1972, section 504 of the Rehabilitation Act of 1973, and part B of the Individuals with Disabilities Education Act;
(H) admits students on the basis of a lottery, if more students apply for admission than can be accommodated;
(I) agrees to comply with the same Federal and State audit requirements as do other elementary and secondary schools in the State, unless such requirements are specifically waived for the purpose of this program;
(J) meets all applicable Federal, State, and local health and safety requirements; and
(K) operates in accordance with State law.
(2) The term 'developer' means an individual or group of individuals (including a public or private nonprofit organization), which may include teachers, administrators and other school staff, parents, or other members of the local community in which a charter school project will be carried out.
(3) The term 'eligible applicant' means an authorized public chartering agency participating in a partnership with a developer to establish a charter school in accordance with this part.
(4) The term 'authorized public chartering agency' means a State educational agency, local educational agency, or other public entity that has the authority pursuant to State law and approved by the Secretary to authorize or approve a charter school.

SEC. 10307. AUTHORIZATION OF APPROPRIATIONS.

For the purpose of carrying out this part, there are authorized to be appropriated $15,000,000 for fiscal year 1995 and such sums as may be necessary for each of the four succeeding fiscal years.

GAO would like to acknowledge the assistance of the following experts. These individuals provided valuable insights on the issues discussed in this report; however, they do not necessarily endorse the positions taken in the report. In addition to those named above, the following individuals made important contributions to this report: Patricia M. Bundy, Evaluator; Sarah Keith, Intern; Julian P. Klazkin, Senior Attorney; Sheila Nicholson, Evaluator; Diane E. Schilder, Senior Social Science Analyst.
Pursuant to a congressional request, GAO provided information on the growth of charter schools, focusing on: (1) the number of charter schools that have been approved under state laws; (2) the characteristics of charter schools' instructional programs; (3) whether charter schools operate autonomously and are held accountable for student performance; and (4) the challenges charter schools pose for federal education programs. GAO found that: (1) 9 states have approved 134 charter schools developed by teachers, school administrators, parents, and private corporations; (2) as charter schools increase in number, so do their diversity and innovation; (3) charter school instructional programs focus on multiage classes and often teach subjects within a common theme; (4) some charter schools specialize in certain subjects, while other charter schools target specific student populations; (5) charter schools' autonomy varies among the states based on their legal status, approval, funding, and exemption from rules; (6) charter schools vary in how they measure student performance and it is too soon to determine whether these schools will meet their student performance objectives; (7) the major challenge for federal program administration is determining whether those charter schools that are legally independent of their school districts can be considered local education agencies (LEA) for program administration purposes; and (8) although states have taken different approaches to address charter schools' status as LEA, further clarification is needed on how charter schools can be treated for federal program administration and whether these schools are eligible for educational funds.
DOD and NASA build costly, complex systems that serve a variety of national security and science, technology, and space exploration missions. Within DOD, the Air Force's Space and Missile Systems Center is responsible for acquiring most of DOD's space systems; however, the Navy is also acquiring a replacement satellite communication system. MDA, also within DOD, is responsible for developing, testing, and fielding an integrated, layered ballistic missile defense system (BMDS) to defend against all ranges of enemy ballistic missiles in all phases of flight. The major projects that NASA undertakes range from highly complex and sophisticated space transportation vehicles, to robotic probes, to satellites equipped with advanced sensors to study the Earth.

Requirements for government space systems can be more demanding than those of the commercial satellite and consumer electronics industry. For instance, DOD typically has more demanding standards for radiation-hardened parts, such as microelectronics, which are designed and fabricated with the specific goal of enduring the harshest space radiation environments, including nuclear events. Companies typically need to create separate production lines and in some cases special facilities. In the overall electronics market, military and NASA business is considered a niche market. Moreover, over time, government space and missile systems have increased in complexity, partly as a result of advances in commercially driven electronics technology and subsequent obsolescence of mature high-reliability parts. Systems are using more and increasingly complex parts, requiring more stringent design verification and qualification practices. In addition, acquiring qualified parts from a limited supplier base has become more difficult as suppliers focus on commercial markets at the expense of the government space market—which requires stricter controls and proven reliability. Further, because DOD and NASA's space systems cannot usually be repaired once they are deployed, an exacting attention to parts quality is required to ensure that they can operate continuously and reliably for years at a time through the harsh environmental conditions of space. Similarly, ballistic missiles that travel through space after their boost phase to reach their intended targets are important for national security and also require reliable and dependable parts. These requirements drive designs that depend on reliable parts, materials and processes that have passed CDRs, been fully tested, and demonstrated long life and tolerance to the harsh environmental conditions of space.

There have been dramatic shifts in how parts for space and missile defense systems have been acquired and overseen. For about three decades, until the 1990s, government space and missile development based its quality requirements on a military standard known as MIL-Q-9858A. This standard required contractors to establish a quality program with documented procedures and processes that are subject to approval by government representatives throughout all areas of contract performance. Quality is theoretically ensured by requiring both the contractor and the government to monitor and inspect products. MIL-Q-9858A and other standards—collectively known as military specifications—were used by DOD and NASA to specify the manufacturing processes, materials, and testing needed to ensure that parts would meet quality and reliability standards needed to perform in and through space.
In the 1990s, concerns about cost and the need to introduce more innovation brought about acquisition reform efforts that loosened a complex and often rigid acquisition process and shifted key decision-making responsibility—including management and oversight for parts, materials, and processes—to contractors. This period, however, was marked by continued problematic acquisitions that ultimately resulted in sharp increases in cost, schedule, and quality problems. For DOD, acquisition reform for space systems was referred to as Total System Performance Responsibility (TSPR). Under TSPR, program managers’ oversight was reduced and key decision-making responsibilities were shifted onto the contractor. In May 2003, a report of the Defense Science Board/Air Force Scientific Advisory Board Joint Task Force stated that the TSPR policy marginalized the government program management role and replaced traditional government “oversight” with “insight.” In 2006, a retired senior official responsible for testing in DOD stated that “TSPR relieved development contractors of many reporting requirements, including cost and technical progress, and built a firewall around the contractor, preventing government sponsors from properly overseeing expenditure of taxpayer dollars.” We found that TSPR reduced government oversight and led to major reductions in various government capabilities, including cost-estimating and systems-engineering staff. MDA chose to pursue the Lead Systems Integrator (LSI) approach as part of its acquisition reform effort. The LSI approach used a single contractor responsible for developing and integrating a system of systems within a given budget and schedule. We found in 2007 that a proposal to use an LSI approach on any new program should be seen as a risk at the outset, not because it is conceptually flawed, but because it indicates that the government may be pursuing a solution that it does not have the capacity to manage. Within NASA, a similar approach called “faster, better, cheaper” was intended to help reduce mission costs, improve efficiency, and increase scientific results by conducting more and smaller missions in less time. The approach was intended to stimulate innovative development and application of technology, streamline policies and practices, and energize and challenge a workforce to successfully undertake new missions in an era of diminishing resources. We found that while NASA had many successes, failures of two Mars probes revealed limits to this approach, particularly in terms of NASA’s ability to learn from past mistakes. As DOD and NASA moved from military specifications and standards, so did suppliers. According to an Aerospace Corporation study, both prime contractors and the government space market lost insight and traceability into parts as suppliers moved from having to meet military specifications and standards to an environment where the prime contractor would ensure that the process used by the supplier would yield a quality part. During this time, downsizing and tight budgets also eroded core skills, giving the government less insight, with fewer people to track problems and less oversight into manufacturing details. As DOD and NASA experienced considerable cost, schedule, and performance problems with major systems in the late 1990s and early 2000s, independent government-sponsored reviews concluded that the government ceded too much control to contractors during acquisition reform. 
As a result, in the mid- to late 2000s, DOD and NASA reached broad consensus that the government needed to return to a lifecycle mission assurance approach aimed at ensuring mission success. For example, MDA issued its Mission Assurance Provisions (MAP) for acquisition of mission and safety critical hardware and software in October 2006. The MAP is to assist in improving MDA's acquisition activities through the effective application of critical best practices for quality, safety, and mission assurance. In December 2008, DOD updated its acquisition process, which includes government involvement in the full range of requirements, design, manufacture, test, operations, and readiness reviews. Also in the last decade, DOD and NASA have developed policies and procedures aimed at preventing parts quality problems. For example, policies at each agency set standards to require the contractor to establish control plans related to parts, materials, and processes. Policies at the Air Force, MDA, and the NASA component we reviewed also establish minimum quality and reliability requirements for electronic parts—such as capacitors, resistors, connectors, fuses, and filters—and set standards to require the contractor to select materials and processes to ensure that the parts will perform as intended in the environment where they will function, considering the effects of, for example, static electricity, extreme temperature fluctuations, solar radiation, and corrosion. In addition, DOD and NASA have developed plans and policies related to counterfeit parts control that set standards to require contractors to take certain steps to prevent and detect counterfeit parts and materials. Table 1 identifies the major policies related to parts quality at DOD and NASA.

Government policies generally require various activities related to the selection and testing of parts, materials, and processes. It is the prime contractor's responsibility to determine how the requirements will be managed and implemented, including the selection and management of subcontractors and suppliers. In addition, it is the government's responsibility to provide sufficient oversight to ensure that parts quality controls and procedures are in place and rigorously followed. Finally, DOD and NASA have quality and mission assurance personnel on their programs to conduct on-site audits at contractor facilities. Table 2 illustrates the typical roles of the government and the prime contractor in ensuring parts quality.

DOD and NASA also have their own oversight activities that contribute to system quality. DOD has on-site quality specialists within the Defense Contract Management Agency and the military services, MDA has its Mission Assurance program, and NASA has its Quality Assurance program. Each activity aims to identify quality problems and ensure the on-time, on-cost delivery of quality products to the government through oversight of manufacturing and through supplier management activities, selected manufacturing activities, and final product inspections prior to acceptance. Likewise, prime contractors employ quality assurance specialists and engineers to assess the quality and reliability of both the parts they receive from suppliers and the overall weapon system. In addition, DOD and NASA have access to one or more of the following databases used to report deficient parts: the Product Data Reporting and Evaluation Program (PDREP), the Joint Deficiency Reporting System (JDRS), and the Government Industry Data Exchange Program (GIDEP).
Through these systems, the government and industry participants share information on deficient parts. Parts quality problems affected all 21 programs we reviewed at DOD and NASA and in some cases contributed to significant cost overruns, schedule delays, and reduced system reliability and availability. In most cases, problems were associated with electronic parts, versus mechanical parts or materials. Moreover, in several cases, parts problems were discovered late in the development cycle and, as such, tended to have more significant cost and schedule consequences. Table 3 identifies the cost and schedule effects of parts quality problems for the 21 programs we reviewed. The costs in this table are the cumulative costs of all the parts quality problems that the programs identified as most significant as of August 2010 and do not necessarily reflect cost increases to the program's total costs. In some cases, program officials told us that they do not track the cost effects of parts quality problems or that it was too early to determine the effect. The schedule effect is the cumulative total of months it took to resolve a problem. Unless the problems affected a schedule milestone such as launch date, the total number of months may reflect problems that were concurrent and may not necessarily reflect delays to the program's schedule.

The programs we reviewed are primarily experiencing quality problems with electronic parts that are associated with electronic assemblies critical to system operations, such as computers, communication systems, and guidance systems. Based on our review of 21 programs, 64.7 percent of the parts quality problems were associated with electronic parts, 14.7 percent with mechanical parts, and 20.6 percent with materials used in manufacturing. Figure 3 identifies the distribution of quality problems across electronic parts, mechanical parts, and materials. In many cases, programs experienced problems with the same parts and materials. For electronic parts, seven programs reported problems with capacitors, a part that is widely used in electronic circuits. Multiple programs also reported problems with printed circuit boards, which are used to support and connect electronic components. While printed circuit boards range in complexity and capability, they are used in virtually all but the simplest electronic devices. As with problems with electronic parts, multiple programs also experienced problems with the same materials. For example, five programs reported problems with titanium that did not meet requirements. In addition, two programs reported problems with four different parts manufactured with pure tin, a material that is prohibited in space because it poses a reliability risk to electronics. Figure 4 identifies examples of quality problems with parts and materials that affected three or more programs.

While parts quality problems affected all of the programs we reviewed, problems found late in development—during final integration and testing at the instrument and system level—had the most significant effect on program cost and schedule. As shown in figure 5, part screening, qualification, and testing typically occur during the final design phase of spacecraft development.
When parts problems are discovered during this phase, they are sometimes more easily addressed without major consequences to a development effort since fabrication of the spacecraft has not yet begun or is just in the initial phases. In several of the cases we reviewed, however, parts problems were discovered during instrument and system-level testing, that is, after assembly or integration of the instrument or spacecraft. As such, they had more significant consequences as they required lengthy failure analysis, disassembly, rework, and reassembly, sometimes resulting in a launch delay. Our work identified a number of cases in which parts problems identified late in development caused significant cost and schedule issues. Parts quality problems found during system-level testing of the Air Force’s Advanced Extremely High Frequency satellite program contributed to a launch delay of almost 2 years and cost the program at least $250 million. A power-regulating unit failed during system-level thermal vacuum testing because of defective electronic parts that had to be removed and replaced. This and other problems resulted in extensive rework and required the satellite to undergo another round of thermal vacuum testing. According to the program office, the additional thermal vacuum testing alone cost about $250 million. At MDA, the Space Tracking and Surveillance System program discovered problems with defective electronic parts in the Space- Ground Link Subsystem during system-level testing and integration of the satellite. By the time the problem was discovered, the manufacturer no longer produced the part and an alternate contractor had to be found to manufacture and test replacement parts. According to officials, the problem cost about $7 million and was one of the factors that contributed to a 17-month launch delay of two demonstration satellites and delayed participation in the BMDS testing we reported on in March 2009. At NASA, parts quality problems found late in development resulted in a 20-month launch delay for the Glory program and cost $71.1 million. In August 2008, Glory’s spacecraft computer failed to power up during system-level testing. After a 6-month failure analysis, the problem was attributed to a crack in the computer’s printed circuit board, an electronic part in the computer used to connect electronic components. Because the printed circuit board could not be manufactured reliably, the program had to procure and test an alternate computer. The program minimized the long lead times expected with the alternate computer by obtaining one that had already been procured by NASA. However, according to contractor officials, design changes were also required to accommodate the alternate computer. In June 2010, after the computer problem had been resolved, the Glory program also discovered problems with parts for the solar array drive assembly that rendered one of the arrays unacceptable for flight and resulted in an additional 3-month launch delay. Also at NASA, the National Polar-orbiting Operational Environmental Satellite System Preparatory Project experienced $105 million in cost increases and 27 months of delay because of parts quality problems. In one case, a key instrument developed by a NASA partner failed during instrument-level testing because the instrument frame fractured at several locations. According to the failure review board, stresses exceeded the material capabilities of several brazed joints—a method of joining metal parts together. 
According to officials, the instrument's frame had to be reinforced, which delayed instrument delivery and ultimately delayed the satellite's launch date. In addition, officials stated that they lack confidence in how the partner-provided satellite instruments will function on orbit because of the systemic mission assurance and systems engineering issues that contributed to the parts quality problems.

For some of the programs we reviewed, the costs associated with parts quality problems were minimized because the problems were found early and were resolved within the existing margins built into the program schedule. For example, the Air Force's Global Positioning System (GPS) program discovered problems with electronic parts during part-level testing and inspection. An investigation into the problem cost about $50,000, but did not result in delivery delays. An independent review team ultimately concluded that the parts could be used without a performance or mission impact. At NASA, the Juno program discovered during part-level qualification testing that an electronic part did not meet performance requirements. The program obtained a suitable replacement from another manufacturer; it cost the program $10,000 to resolve the issue with no impact on program schedule.

In other cases, the costs of parts quality problems were amplified because they were a leading cause of a schedule delay to a major milestone, such as launch readiness. For example, of the $60.9 million cost associated with problems with the Glory spacecraft computer found during system-level testing, $11.6 million was spent to resolve the issue, including personnel costs for troubleshooting, testing, and oversight as well as design, fabrication, and testing of the new computer. The majority of the cost—$49.3 million—was associated with maintaining the contractor during the 15-month launch delay. Similarly, problems with parts for Glory's solar array assembly cost about $10.1 million: $2.7 million to resolve the problem and $7.4 million resulting from the additional 3-month schedule delay. Likewise, program officials for NASA's National Polar-orbiting Operational Environmental Satellite System Preparatory Project attributed the $105 million cost of its parts quality problems to the costs associated with launch and schedule delays, an estimated $5 million a month.

In several cases, the programs were encountering other challenges that obscured the problems caused by poor quality parts. For example, the Air Force's Space-Based Infrared System High program reported that a part with pure tin in the satellite telemetry unit was discovered after the satellite was integrated. After an 11-month failure review board investigation, the defective part was replaced. The program did not quantify the cost and schedule effect of the problem because the program was encountering software development issues that were already resulting in schedule delays. Similarly, NASA's Mars Science Laboratory program experienced a failure associated with joints in the rover propulsion system. According to officials, the welding process led to joint embrittlement and the possibility of early failure. The project had to test a new process, rebuild, and test the system, which cost about $4 million and resulted in a 1-year delay in completion. However, the program's launch date had already been delayed 25 months because of design issues with the rover actuator motors and avionics package—in effect, buying time to resolve the problem with the propulsion system.
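The Glory cost figures cited above follow a simple decomposition: the total cost of a late-discovered parts problem is the cost of analyzing and fixing the part plus the cost of carrying the program through the resulting delay. The short Python sketch below illustrates that arithmetic; the function and the per-month carrying rates are our own illustration derived from the reported figures, not program estimates.

    # Illustrative decomposition: total cost = resolution cost + delay carrying cost
    # (figures in millions of dollars, taken from the report text)

    def problem_cost(resolution_cost, delay_months, monthly_carrying_cost):
        """Return (delay_cost, total_cost) for a parts quality problem."""
        delay_cost = delay_months * monthly_carrying_cost
        return delay_cost, resolution_cost + delay_cost

    # Glory spacecraft computer: $11.6M to resolve; $49.3M to maintain the contractor
    # over the 15-month launch delay, or roughly $3.3M per month.
    delay, total = problem_cost(11.6, 15, 49.3 / 15)
    print(f"Glory computer: ${delay:.1f}M delay cost, ${total:.1f}M total")    # ~49.3, ~60.9

    # Glory solar array drive assembly: $2.7M to resolve plus $7.4M for the 3-month delay.
    delay, total = problem_cost(2.7, 3, 7.4 / 3)
    print(f"Glory solar array: ${delay:.1f}M delay cost, ${total:.1f}M total")  # ~7.4, ~10.1

The same logic explains why officials framed the NPOESS Preparatory Project's $105 million figure in terms of a monthly rate: at an estimated $5 million a month, the cost of carrying a program through a delay quickly dominates the cost of the underlying repair.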
In addition to the launch delays discussed above, parts quality problems also resulted in reduced system reliability and availability for several other programs we reviewed. For example, the Air Force’s GPS program found that an electronic part lacked qualification data to prove the part’s quality and reliability. As a result, the overall reliability prediction for the space vehicle was decreased. At MDA, the Ground-Based Midcourse Defense program discovered problems with an electronic part in the telemetry unit needed to transmit flight test data. The problem was found during final assembly and test operations of the Exoatmospheric Kill Vehicle resulting in the cancellation of a major flight test. This increased risk to the program and the overall BMDS capability, since the lack of adequate intercept data reduced confidence that the system could perform as intended in a real- world situation. Also, MDA’s Aegis Ballistic Missile Defense program recalled 16 missiles from the warfighter, including 7 from a foreign partner, after the prime contractor discovered that the brackets used to accommodate communications and power cabling were improperly adhered to the Standard Missile 3 rocket motor. If not corrected, the problem could have resulted in catastrophic mission failure. Regardless of the cause of the parts quality problem, the government typically bears the costs associated with resolving the issues and associated schedule impact. In part, this is due to the use of cost- reimbursement contracts. Because space and missile defense acquisitions are complex and technically challenging, DOD and NASA typically use cost-reimbursement contracts, whereby the government pays the prime contractor’s allowable costs to the extent prescribed in the contract for the contractor’s best efforts. Under cost-reimbursement contracts, the government generally assumes the financial risks associated with development, which may include the costs associated with parts quality problems. Of the 21 programs we reviewed, 20 use cost-reimbursement contracts. In addition, 17 programs use award and incentive fees to reduce the government’s risk and provide an incentive for excellence in such areas as quality, timeliness, technical ingenuity, and cost-effective management. Award and incentive fees enable the reduction of fee in the event that the contractor’s performance does not meet or exceed the requirements of the contract. Aside from the use of award fees, senior quality and acquisition oversight officials told us that incentives for prime contractors to ensure quality are limited. The parts quality problems we identified were directly attributed to poor control of manufacturing processes and materials, poor design, and lack of effective supplier management. Generally, prime contractor activities to capture manufacturing knowledge should include identifying critical characteristics of the product’s design and then the critical manufacturing processes and materials to achieve these characteristics. Manufacturing processes and materials should be documented, tested, and controlled prior to production. This includes establishing criteria for workmanship, making work instructions available, and preventing and removing foreign object debris in the production process. Poor workmanship was one of the causes of problems with electronic parts. At DOD, poor workmanship during hand-soldering operations caused a capacitor to fail during testing on the Navy’s Mobile User Objective System program. 
Poor soldering workmanship also caused a power distribution unit to experience problems during vehicle-level testing on MDA's Targets and Countermeasures program. According to MDA officials, all units of the same design by the same manufacturer had to be X-ray inspected and reworked, involving extensive hardware disassembly. As a corrective action, soldering technicians were provided with training to improve their soldering operations and ability to perform better visual inspections after soldering. Soldering workmanship problems also contributed to a capacitor failure on NASA's Glory program. Analysis determined that the manufacturer's soldering guidelines were not followed.

Programs also reported quality problems because of the use of undocumented and untested manufacturing processes. For example, MDA's Aegis Ballistic Missile Defense program reported that the brackets used to accommodate communications and power cabling were improperly bonded to Standard Missile 3 rocket motors, potentially leading to mission failure. A failure review board determined that the subcontractor had changed the bonding process to reduce high scrap rates and that the new process was not tested and verified before it was implemented. Similarly, NASA's Landsat Data Continuity Mission program experienced problems with the spacecraft solar array because of an undocumented manufacturing process. According to program officials, the subcontractor did not have a documented process to control the amount of adhesive used in manufacturing, and as a result, too much adhesive was applied. If not corrected, the problem could have resulted in solar array failure on orbit.

Poor control of manufacturing materials and the failure to prevent contamination also caused quality problems. At MDA, the Ground-Based Midcourse Defense program reported a problem with defective titanium tubing. The defective tubing was rejected in 2004 and was to be returned to the supplier; however, because of poor control of manufacturing materials, a portion of the material was not returned and was inadvertently used to fabricate manifolds for two complete Ground-Based Interceptor Exoatmospheric Kill Vehicles. The vehicles had already been processed and delivered to the prime contractor for integration when the problem was discovered. Lack of adherence to manufacturing controls to prevent contamination and foreign object debris also caused parts quality problems. For example, at NASA, a titanium propulsion tank for the Tracking and Data Relay Satellite program failed acceptance testing because a steel chip was inadvertently welded onto the tank. Following a 3-month investigation into the root cause, the tank was scrapped and a replacement tank was built.

In addition to problems stemming from poor control of manufacturing processes and materials, many problems resulted from poor part design, design complexity, and inattention to manufacturing risks. For example, attenuators for the Navy's Mobile User Objective System exhibited inconsistent performance because of their sensitivity to temperature changes. Officials attributed the problem to poor design, and the attenuators were subsequently redesigned. At NASA, design problems also affected parts for the Mars Science Laboratory program. According to program officials, several resistors failed after assembly into printed circuit boards. A failure review board determined that the tight design limits contributed to the problem. Consequently, the parts had to be redesigned and replaced.
Programs also underestimated the complexity of parts design, which created risks of latent design and workmanship defects. For example, NASA’s Glory project experienced problems with the state-of-the-art printed circuit board for the spacecraft computer. According to project officials, the board design was almost impossible to manufacture with over 100 serial steps involved in the manufacturing process. Furthermore, failure analysis found that the 27,000 connection points in the printed circuit board were vulnerable to thermal stresses over time leading to intermittent failures. However, the quality of those interconnections was difficult to detect through standard testing protocols. This is inconsistent with commercial best practices, which focus on simplified design characteristics as well as use of mature and validated technology and manufacturing processes. Program officials at each agency also attributed parts quality problems to the prime contractor’s failure to ensure that its subcontractors and suppliers met program requirements. According to officials, in several cases, prime contractors were responsible for flowing down all applicable program requirements to their subcontractors and suppliers. Requirements flow-down from the prime contractor to subcontractors and suppliers is particularly important and challenging given the structure of the space and defense industries, wherein prime contractors are subcontracting more work to subcontractors. At MDA, the Ground-Based Midcourse Defense program experienced a failure with an electronics part purchased from an unauthorized supplier. According to program officials, the prime contractor flowed down the requirement that parts only be purchased from authorized suppliers; however, the subcontractor failed to execute the requirement and the prime contractor did not verify compliance. Program officials for NASA’s Juno program attributed problems with a capacitor to the supplier’s failure to review the specification prohibiting the use of pure tin. DOD’s Space-Based Infrared System High program reported problems with three different parts containing pure tin and attributed the problems to poor requirements flow-down and poor supplier management. Figure 6 shows an example of tin whiskers on a capacitor, which can cause catastrophic problems to space systems. DOD and NASA have instituted new policies to prevent and detect parts quality problems, but most of the programs we reviewed were initiated before these policies took effect. Moreover, newer programs that do come under the policies have not reached the phases of development where parts problems are typically discovered. In addition, agencies and industry have been collaborating to share information about potential problems, collecting data, and developing guidance and criteria for activities such as testing parts, managing subcontractors, and mitigating specific types of problems. We could not determine the extent to which collaborative actions have resulted in reduced instances of parts quality problems or ensured that they are caught earlier in the development cycle. This is primarily because data on the condition of parts quality in the space and missile community governmentwide historically have not been collected. And while there are new efforts to collect data on anomalies, there is no mechanism to use these data to help assess the effectiveness of improvement actions. Lastly, there are significant potential barriers to success of efforts to address parts quality problems. 
They include broader acquisition management problems, workforce gaps, diffuse leadership in the national security space community, the government’s decreasing influence on the overall electronic parts market, and an increase in counterfeiting of electronic parts. In the face of such challenges, it is likely that ongoing improvements will have limited success without continued assessments to determine what is working well and what more needs to be done. As noted earlier in this report, the Air Force, MDA, and NASA have all recently instituted or updated existing policies to prevent and detect parts quality problems. At the Air Force and MDA, all of the programs we reviewed were initiated before these recent policies aimed at preventing and detecting parts quality problems took full effect. In addition, it is too early to tell whether newer programs—such as a new Air Force GPS development effort and the MDA’s Precision Tracking Space System—are benefiting from the newer policies because these programs have not reached the design and fabrication phases where parts problems are typically discovered. However, we have reported that the Air Force is taking measures to prevent the problems experienced on the GPS IIF program from recurring on the new GPS III program. The Air Force has increased government oversight of its GPS III development and Air Force officials are spending more time at the contractor’s site to ensure quality. The Air Force is also following military standards for satellite quality for GPS III development. At the time of our review, the program had not reported a significant parts quality problem. Table 4 highlights the major differences in the framework between the GPS IIF and GPS III programs. In addition to new policies focused on quality, agencies are also becoming more focused on industrial base issues and supply chain risks. For example, MDA has developed the supplier road map database in an effort to gain greater visibility into the supply chain in order to more effectively manage supply chain risks. In addition, according to MDA officials, MDA has recently been auditing parts distributors in order to rank them for risk in terms of counterfeit parts. NASA has begun to assess industrial base risks and challenges during acquisition strategy meetings and has established an agency Supply Chain Management Team to focus attention on supply chain management issues and to coordinate with other government agencies. Agencies and industry also participate in a variety of collaborative initiatives to address quality, in particular, parts quality. These range from informal groups focused on identifying and sharing news about emerging problems as quickly as possible, to partnerships that conduct supplier assessments, to formal groups focused on identifying ways industry and the government can work together to prevent and mitigate problems. As shown in table 5, these groups have worked to establish guidance, criteria, and standards that focus on parts quality issues, and they have enhanced existing data collection tools and created new databases focused on assessing anomalies. One example of the collaborative efforts is the Space Industrial Base Council (SIBC)—a government-led initiative—which brings together officials from agencies involved in space and missile defense to focus on a range of issues affecting the space industrial base and has sparked numerous working groups focused specifically on parts quality and critical suppliers. 
These groups in turn have worked to develop information- sharing mechanisms, share lessons learned and conduct supplier assessments, soliciting industry’s input as appropriate. For instance, the SIBC established a critical technology working group to explore supply chains and examine critical technologies to put in place a process for strategic management of critical space systems’ technologies and capabilities under the Secretary of the Air Force and the Director of the National Reconnaissance Office. The working group has developed and initiated a mitigation plan for batteries, solar cells and arrays, and traveling wave tube amplifiers. In addition, the Space Supplier Council was established under the SIBC to focus on the concerns of second-tier and lower-tier suppliers, which typically have to go through the prime contractors, and to promote more dialogue between DOD, MDA, NASA, other space entities, and these suppliers. Another council initiative was the creation of the National Security Space Advisory Forum, a Web-based alert system developed for sharing critical space system anomaly data and problem alerts, which became operational in 2005. Agency officials also cited other informal channels used to share information regarding parts issues. For example, NASA officials stated that after verifying a parts issue, they will share their internal advisory notice with any other government space program that could potentially be affected by the issue. According to several government and contractor officials, the main reasons for delays in information sharing were either the time it took to confirm a problem or concerns with proprietary and liability issues. NASA officials stated that they received advisories from MDA and had an informal network with MDA and the Army Space and Missile Defense Command to share information about parts problems. Officials at the Space and Missile Systems Center also mentioned that they have informal channels for sharing part issues. For example, an official in the systems engineering division at the Space and Missile Systems Center stated that he has weekly meetings with a NASA official to discuss parts issues. In addition to the formal and informal collaborative efforts, the Air Force’s Space and Missile Systems Center, MDA, NASA, and the National Reconnaissance Office signed a memorandum of understanding (MOU) in February 2011 to encourage additional interagency cooperation in order to strengthen mission assurance practices. The MOU calls on the agencies to develop and share lessons learned and best practices to ensure mission success through a framework of collaborative mission assurance. Broad objectives of the framework are to develop core mission assurance practices and tools; to foster a mission assurance culture and world-class workforce; to develop clear and executable mission assurance plans; to manage effective program execution; and to ensure program health through independent, objective assessments. Specific objectives include developing a robust mission assurance infrastructure and guidelines for tailoring specifications and standards for parts, materials, and processes and establishing standard contractual language to ensure consistent specification of core standards and deliverables. In addition, each agency is asked to consider the health of the industrial base in space systems acquisitions and participate in mission assurance activities, such as the Space Supplier Council and mission assurance summits. 
In signing the MOU, DOD, MDA, NASA, and the National Reconnaissance Office acknowledged the complexity of such an undertaking as it typically takes years to deliver a capability and involves hundreds of industry partners building, integrating, and testing hundreds of thousands of parts, all which have to work the first time on orbit—a single mishap, undetected, can and has had catastrophic results. Although collaborative efforts are under way, we could not determine the extent to which collaborative actions have resulted in reduced instances of parts quality problems to date or ensured that they are caught earlier in the development cycle. This is primarily because data on the condition of parts quality in the space and missile community governmentwide historically have not been collected. The Aerospace Corporation has begun to collect data on on-orbit and preflight anomalies in addition to the Web alert system established by the Space Quality Improvement Council. In addition, there is no mechanism in place to assess the progress of improvement actions using these data or to track the condition of parts quality problems across the space and missile defense sector to determine if improvements are working or what additional actions need to be taken. Such a mechanism is needed given the varied challenges facing improvement efforts. There are significant potential barriers to the success of improvement efforts, including broader acquisition management problems, diffuse leadership in the national security space community, workforce gaps, the government’s decreasing influence on the overall electronic parts market, and an increase in counterfeiting of electronic parts. Actions are being taken to address some of these barriers, such as acquisition management and diffuse leadership, but others reflect trends affecting the aerospace industry that are unlikely to change in the near future and may limit the extent to which parts problems can be prevented. Broader acquisition management problems: Both space and missile defense programs have experienced acquisition problems—well beyond parts quality management difficulties—during the past two decades that have driven up costs by billions of dollars, stretched schedules by years, and increased technical risks. These problems have resulted in potential capability gaps in areas such as missile warning, military communications, and weather monitoring, and have required all the agencies in our review to cancel or pare back major programs. Our reports have generally found that these problems include starting efforts before requirements and technologies have been fully understood and moving them forward into more complex phases of development without sufficient knowledge about technology, design, and other issues. Reduced oversight resulting from earlier acquisition reform efforts and funding instability have also contributed to cost growth and schedule delays. Agencies are attempting to address these broader challenges as they are concurrently addressing parts quality problems. For space in particular, DOD is working to ensure that critical technologies are matured before large-scale acquisition programs begin, requirements are defined early in the process and are stable throughout, and system designs remain stable. 
In response to our designation of NASA acquisition management as a high-risk area, NASA developed a corrective action plan to improve the effectiveness of its program/project management, and it is in the process of implementing earned value management within certain programs to help projects monitor the scheduled work done by NASA contractors and employees. These and other actions have the potential to strengthen the foundation for program and quality management but they are relatively new and implementation is uneven among the agencies involved with space and missile defense. For instance, we have found that both NASA and MDA lack adequate visibility into costs of programs. Our reports also continue to find that cost and schedule estimates across all three agencies tend to be optimistic. Diffuse leadership within the national security space community: We have previously testified and reported that diffuse leadership within the national security space community has a direct impact on the space acquisition process, primarily because it makes it difficult to hold any one person or organization accountable for balancing needs against wants, for resolving conflicts among the many organizations involved with space, and for ensuring that resources are dedicated where they need to be dedicated. In 2008, a congressionally chartered commission (known as the Allard Commission) reported that responsibilities for military space and intelligence programs were scattered across the staffs of DOD organizations and the intelligence community and that it appeared that “no one is in charge” of national security space. The same year, the House Permanent Select Committee on Intelligence reported similar concerns, focusing specifically on difficulties in bringing together decisions that would involve both the Director of National Intelligence and the Secretary of Defense. Prior studies, including those conducted by the Defense Science Board and the Commission to Assess United States National Security Space Management and Organization (Space Commission), have identified similar problems, both for space as a whole and for specific programs. Changes have been made this past year to national space policies as well as organizational and reporting structures within the Office of the Secretary of Defense and the Air Force to address these concerns and clarify responsibilities, but it remains to be seen whether these changes will resolve problems associated with diffuse leadership. Workforce gaps: Another potential barrier to success is a decline in the number of quality assurance officials, which officials we spoke with pointed to as a significant detriment. A senior quality official at MDA stated that the quality assurance workforce was significantly reduced as a result of acquisition reform. A senior DOD official responsible for space acquisition oversight agreed, adding that the government does not have the in-house knowledge or resources to adequately conduct many quality control and quality assurance tasks. NASA officials also noted the loss of parts specialists who provide technical expertise to improve specifications and review change requests. According to NASA officials, there is now a shortage of qualified personnel with the requisite cross-disciplinary knowledge to assess parts quality and reliability. Our prior work has also shown that DOD’s Defense Contract Management Agency (DCMA), which provides quality assurance oversight for many space acquisitions, was downsized considerably during the 1990s. 
While capacity shortfalls still exist, DCMA has implemented a strategic plan to address workforce issues and improve quality assurance oversight. The shortage in the government quality assurance workforce reflects a broader decline in the numbers of scientists and engineers in the space sector. The 2008 House Permanent Select Committee on Intelligence report mentioned above found that the space workforce is facing a significant loss of talent and expertise because of pending retirements, which is causing problems in smoothly transitioning to a new space workforce. Similarly, in 2010 we reported that 30 percent of the civilian manufacturing workforce was eligible for retirement, and approximately 26 percent will become eligible for retirement over the next 4 years. Similar findings were reported by the DOD Cost Analysis Improvement Group in 2009.

Industrial base consolidation: A series of mergers and consolidations that took place primarily in the 1990s added risks to parts quality—first, by shrinking the pool of suppliers available to produce specialty parts; second, by reducing specialized expertise within prime contractors; and third, by introducing cost-cutting measures that de-emphasize quality assurance. We reported in 2007 that the GPS IIF program, the Space-Based Infrared System High program, and the Wideband Global SATCOM system all encountered quality problems that could be partially attributed to industry consolidations. Specialized parts for the Wideband Global SATCOM system, for example, became difficult to obtain after smaller contractors that made these parts started to consolidate. For GPS, consolidations led to a series of moves in facilities that resulted in a loss of GPS technical expertise. In addition, during this period, the contractor took additional cost-cutting measures that reduced quality. Senior officials responsible for DOD space acquisition oversight with whom we spoke for this review stated that prime space contractors have divested their traditional lines of expertise in favor of acting in a broader "system integrator" role. Meanwhile, smaller suppliers that attempted to fill gaps in expertise and products created by consolidations have not had the experience and knowledge needed to produce to the standards needed for government space systems. For instance, officials from one program told us that their suppliers were often unaware that their parts would be used in space applications and did not understand or follow certain requirements. Officials also mentioned that smaller suppliers attempting to enter the government space market do not have access to testing and other facilities needed to help build quality into their parts. We recently reported that small businesses typically do not own the appropriate testing facilities, such as thermal vacuum chambers, that are used for testing spacecraft or parts under a simulated space environment and instead must rely on government, university, or large contractor testing facilities, which can be costly.

Government's declining share of the overall electronic parts market: DOD and NASA officials also stated that the government's declining share of the overall electronic parts market has made it more difficult to acquire qualified electronic parts. According to officials, the government used to be the primary consumer of microelectronics, but it now constitutes only a small percentage of the market. As such, the government cannot easily demand unique exceptions to commercial standards.
An example of an exception is DOD’s standards for radiation-hardened parts, such as microelectronics, which are designed and fabricated with the specific goal of enduring the harshest space radiation environments, including nuclear events. We reported in 2010 that to produce such parts, companies would typically need to create separate production lines and in some cases special facilities. Another example is that government space programs often demand the use of a tin alloy (tin mixed with lead) for parts rather than pure tin because of the risk for growth of tin whiskers. According to officials, as a result of European environmental regulations, commercial manufacturers have largely moved away from the use of lead making it more difficult and costly to procure tin alloy parts, and increasing the risk of parts being made with pure tin. Similarly, officials noted concerns with the increased use of lead-free solders used in electronic parts. Moreover, officials told us that when programs do rely on commercial parts, there tends to be a higher risk of lot-to-lot variation, obsolescence, and a lack of part traceability. An increase in counterfeit electronic parts: Officials we spoke with agreed that an increase in counterfeit electronics parts has made efforts to address parts quality more difficult. “Counterfeit” generally refers to instances in which the identity or pedigree of a product is knowingly misrepresented by individuals or companies. A 2010 Department of Commerce study identified a growth in incidents of counterfeit parts across the electronics industry from about 3,300 in 2005 to over 8,000 incidents in 2008. We reported in 2010 that DOD is limited in its ability to determine the extent to which counterfeit parts exist in its supply chain because it does not have a departmentwide definition of “counterfeit” and a consistent means to identify instances of suspected counterfeit parts. Moreover, DOD relies on existing procurement and quality control practices to ensure the quality of the parts in its supply chain. However, these practices are not designed to specifically address counterfeit parts. Limitations in the areas of obtaining supplier visibility, investigating part deficiencies, and reporting and disposal may reduce DOD’s ability to mitigate risks posed by counterfeit parts. At the time of our review, DOD was only in the early stages of addressing counterfeiting. We recommended and DOD concurred that DOD leverage existing initiatives to establish anticounterfeiting guidance and disseminate this guidance to all DOD components and defense contractors. Space and missile systems must meet high standards for quality. The 2003 Defense Science Board put it best by noting that the “primary reason is that the space environment is unforgiving. Thousands of good engineering decisions can be undone by a single engineering flaw or workmanship error, resulting in the catastrophe of major mission failure. Options for correction are scant.” The number of parts problems identified in our review is relatively small when compared to the overall number of parts used. But these problems have been shown to have wide-ranging and significant consequences. Moreover, while the government’s reliance on space and missile systems has increased dramatically, attention and oversight of parts quality declined because of a variety of factors, including the implementation of TSPR and similar policies, workforce gaps, and industry consolidations. 
This condition has been recognized, and numerous efforts have been undertaken to strengthen the government's ability to detect and prevent parts problems. But there is no mechanism in place to periodically assess the condition of parts quality problems in major space and missile defense programs and the impact and effectiveness of corrective measures. Such a mechanism could help ensure that attention and resources are focused in the right places and provide assurance that progress is being made.

We are making two recommendations to the Secretary of Defense and the NASA Administrator. We recommend that the Secretary of Defense and the Administrator of NASA direct appropriate agency executives to include in efforts to implement the new MOU for increased mission assurance a mechanism for a periodic, governmentwide assessment and reporting of the condition of parts quality problems in major space and missile defense programs. This should include the frequency with which such problems are appearing in major programs, changes in frequency from previous years, and the effectiveness of corrective measures. We further recommend that reports of the periodic assessments be made available to Congress.

We provided draft copies of this report to DOD and NASA for review and comment. DOD and NASA provided written comments on a draft of this report. These comments are reprinted in appendixes III and IV, respectively. DOD and NASA also provided technical comments, which were incorporated as appropriate. DOD partially concurred with our recommendation to include in its efforts to implement the new MOU for increased mission assurance a mechanism for a periodic, governmentwide assessment and reporting of the condition of parts quality problems in major space and missile defense programs, to include the frequency with which problems are appearing, changes in frequency from previous years, and the effectiveness of corrective measures. DOD responded that it would work with NASA to determine the optimal governmentwide assessment and reporting implementation to include all quality issues, of which parts, materials, and processes would be one of the major focus areas. In addition, DOD proposed an annual reporting period to ensure planned, deliberate, and consistent assessments. We support DOD's willingness to address all quality issues and to include parts, materials, and processes as an important focus area in an annual report. Recent cases of higher-level quality problems that did not fall within the scope of our review include MDA's Terminal High Altitude Area Defense missile system and the Air Force's Advanced Extremely High Frequency communications satellite, which were mentioned earlier in our report. It is our opinion that these cases occurred for reasons similar to those we identified for parts, materials, and processes. We recognize that quality issues can include a vast and complex universe of problems. Therefore, the scope of our review and the focus of our recommendation were on parts, materials, and processes to enable consistent reporting and analysis and to help direct corrective actions. Should a broader quality focus be pursued, as DOD indicated, it is important that DOD identify ways in which this consistency can be facilitated among the agencies. In response to our second recommendation, DOD stated that it had no objection to providing a report to Congress, if Congress desired one.
We believe that DOD should proactively provide its proposed annual reports to Congress on a routine basis, rather than waiting for requests from Congress, which could be inconsistent from year to year. NASA also concurred with our recommendations. NASA stated that enhanced cross-agency communication, coordination, and sharing of parts quality information will help mitigate threats posed by defective and nonconforming parts. Furthermore, NASA plans to engage other U.S. space agencies to further develop and integrate agency mechanisms for reporting, assessing, tracking, and trending common parts quality problems, including validation of effective cross-agency solutions.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Defense, the Administrator of the National Aeronautics and Space Administration, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

Our specific objectives were to assess (1) the extent to which parts quality problems are affecting Department of Defense (DOD) and National Aeronautics and Space Administration (NASA) space and missile defense programs; (2) the causes of these problems; and (3) initiatives to prevent, detect, and mitigate parts quality problems. To examine the extent to which parts quality problems are affecting DOD (the Air Force, the Navy, and the Missile Defense Agency (MDA)) and NASA cost, schedule, and performance of space and missile defense programs, we reviewed all 21 space and missile programs—9 at DOD, including 4 Air Force, 1 Navy, and 4 MDA systems, and 12 at NASA—that were, as of October 2009, in development and projected to be high cost, and had demonstrated through a critical design review (CDR) that the maturity of the design was appropriate to support proceeding with full-scale fabrication, assembly, integration, and test. DOD space systems selected were major defense acquisition programs—defined as those requiring an eventual total expenditure for research, development, test, and evaluation of more than $365 million or for procurement of more than $2.190 billion in fiscal year 2000 constant dollars. All four MDA systems met these same dollar thresholds. NASA programs selected had a life cycle cost exceeding $250 million. We chose these programs based on their cost, stage in the acquisition process—in development and post-CDR—and congressional interest. A quality problem was defined as the degree to which the product attributes, such as capability, performance, or reliability, did not meet the needs of the customer or mission, as specified through the requirements definition and allocation process. For each of the 21 systems we examined program documentation, such as parts quality briefings, failure review board reports, advisory notices, and cost and schedule analysis reports, and held discussions with quality officials from the program offices, including contractor officials and Defense Contract Management Agency officials, where appropriate.
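As an illustration only, the program-selection criteria described above can be expressed as a simple filter. The following Python sketch is not part of the review methodology; the data structure, field names, and function are hypothetical, and only the dollar thresholds and the post-CDR condition are drawn from the report.

from dataclasses import dataclass

# Dollar thresholds are in fiscal year 2000 constant dollars, per the report.
RDTE_THRESHOLD = 365_000_000            # DOD research, development, test, and evaluation
PROCUREMENT_THRESHOLD = 2_190_000_000   # DOD procurement
NASA_LIFE_CYCLE_THRESHOLD = 250_000_000 # NASA life cycle cost

@dataclass
class Program:
    name: str
    agency: str          # "DOD" or "NASA" (hypothetical field)
    in_development: bool
    passed_cdr: bool     # has completed a critical design review
    rdte_cost: float = 0.0
    procurement_cost: float = 0.0
    life_cycle_cost: float = 0.0

def meets_review_criteria(p: Program) -> bool:
    """Return True if a program would fall within the review's stated scope."""
    if not (p.in_development and p.passed_cdr):
        return False
    if p.agency == "DOD":
        return (p.rdte_cost > RDTE_THRESHOLD
                or p.procurement_cost > PROCUREMENT_THRESHOLD)
    if p.agency == "NASA":
        return p.life_cycle_cost > NASA_LIFE_CYCLE_THRESHOLD
    return False

# Example: a hypothetical post-CDR NASA project with a $300 million life cycle cost
print(meets_review_criteria(Program("Example", "NASA", True, True, life_cycle_cost=3.0e8)))  # True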
We specifically asked each program, at the time we initiated our review, to provide us with the most recent list of the top 5 to 10 parts, materials, or processes problems, as defined by that program, affecting its cost, schedule, or performance. Based on additional information gathered through documentation provided by the programs and discussions with program officials, we reviewed each part problem reported by each program to determine whether it was a part problem, rather than a material, process, component, or assembly-level problem. In addition, when possible we identified the impact that a part, material, or process quality problem might have had on system cost, schedule, and performance. We selected one system with known quality problems, as previously reported in GAO reports, within the Air Force (Space-Based Space Surveillance System), MDA (Ground-Based Midcourse Defense), and NASA (Glory) for further review to gain greater insight into the reporting and root causes of the parts quality problems. Our findings are limited by the approach and data collected. Therefore, we were unable to make generalizable or projectable statements about space and missile programs beyond our scope. We also have ongoing work through our annual DOD assessments of selected weapon programs and NASA assessments of selected larger-scale projects for many of these programs, which allowed us to build upon our prior work efforts and existing DOD and NASA contacts.

The programs selected are described in appendix II and are listed below:
Advanced Extremely High Frequency Satellites
Global Positioning System Block IIF
Mobile User Objective System
Space-Based Infrared System High Program
Space-Based Space Surveillance Block 10
Aegis Ballistic Missile Defense
Ground-Based Midcourse Defense
Space Tracking and Surveillance System
Targets and Countermeasures
Aquarius
Global Precipitation Measurement Mission
Glory
Gravity Recovery and Interior Laboratory
James Webb Space Telescope
Juno
Landsat Data Continuity Mission
Magnetospheric Multiscale
Mars Science Laboratory
National Polar-orbiting Operational Environmental Satellite System Preparatory Project
Radiation Belt Storm Probes
Tracking and Data Relay Satellite Replenishment

DOD and NASA have access to one or more of the following databases used to report deficient parts: the Product Data Reporting and Evaluation Program, the Joint Deficiency Reporting System, and the Government Industry Data Exchange Program. We did not use these systems in our review because of the delay associated with obtaining current information and because it was beyond the scope of the review to assess the utility or effectiveness of these systems. To determine the causes behind the parts quality problems, we asked each program to provide an explanation of the root causes and contributing factors that may have led to each part problem reported. Based on the information we gathered, we grouped the root causes and contributing factors for each part problem. We reviewed program documentation, regulations, directives, instructions, and policies to determine how the Air Force, MDA, and NASA define and address parts quality. We interviewed senior DOD, MDA, and NASA headquarters officials, as well as system program and contractor officials from the Air Force, MDA, and NASA, about their knowledge of parts problems on their programs.
We reviewed several studies on quality and causes from the Subcommittee on Technical and Tactical Intelligence, House Permanent Select Committee on Intelligence; the Department of Commerce; and the Aerospace Corporation to gain a better understanding of quality and challenges facing the development, acquisition, and execution of space systems. We met with Aerospace Corporation officials to discuss some of their reports and findings and the status of their ongoing efforts to address parts quality. We relied on previous GAO reports for the implementation status of planned program management improvements. To identify initiatives to prevent, detect, and mitigate parts quality problems, we asked each program what actions were being taken to remedy the parts problems. Through these discussions and others held with agency officials, we were able to obtain information on working groups. We reviewed relevant materials provided to us by officials from DOD, the Air Force, MDA, NASA, and the Aerospace Corporation. We interviewed program officials at the Air Force, MDA, NASA, and the Aerospace Corporation responsible for quality initiatives to discuss those initiatives that would pertain to parts quality and discuss the implementation status of any efforts.

We conducted this performance audit from October 2009 to May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Air Force's AEHF satellite system will replenish the existing Milstar system with higher-capacity, survivable, jam-resistant, worldwide, secure communication capabilities for strategic and tactical warfighters. The program includes satellites and a mission control segment. Terminals used to transmit and receive communications are acquired separately by each service. AEHF is an international program that includes Canada, the United Kingdom, and the Netherlands.

The Air Force's GPS includes satellites, a ground control system, and user equipment. It provides positioning, navigation, and timing information to users worldwide. In 2000, Congress began funding the modernization of Block IIR and Block IIF satellites. GPS IIF is a new generation of GPS satellites that is intended to deliver all legacy signals plus new capabilities, such as a new civil signal and better accuracy.

The Navy's MUOS, a satellite communication system, is expected to provide a worldwide, multiservice population of mobile and fixed-site terminal users with an increase in narrowband communications capacity and improved availability for small terminals. MUOS will replace the Ultra High Frequency Follow-On satellite system currently in operation and provide interoperability with legacy terminals. MUOS consists of a network of satellites and an integrated ground network.

The Air Force's SBIRS High satellite system is being developed to replace the Defense Support Program and perform a range of missile warning, missile defense, technical intelligence, and battlespace awareness missions. SBIRS High consists of four satellites in geosynchronous earth orbit plus two replenishment satellites, two sensors on host satellites in highly elliptical orbit plus two replenishment sensors, and fixed and mobile ground stations.
The Air Force's SBSS Block 10 satellite is intended to provide a follow-on capability to the Midcourse Space Experiment/Space Based Visible sensor satellite, which ended its mission in July 2008. SBSS will consist of a single satellite and associated command, control, communications, and ground processing equipment. The SBSS satellite is expected to operate 24 hours a day, 7 days a week, to collect positional and characterization data on earth-orbiting objects of potential interest to national security.

MDA's Aegis BMD is a sea-based missile defense system being developed in incremental, capability-based blocks to defend against ballistic missiles of all ranges. Key components include the shipboard SPY-1 radar, Standard Missile 3 (SM-3) missiles, and command and control systems. It will also be used as a forward-deployed sensor for surveillance and tracking of ballistic missiles. The SM-3 missile has multiple versions in development or production: Blocks IA, IB, and IIA.

MDA's GMD is being fielded to defend against limited long-range ballistic missile attacks during their midcourse phase. GMD consists of an interceptor with a three-stage booster and exoatmospheric kill vehicle, and a fire control system that formulates battle plans and directs components integrated with Ballistic Missile Defense System (BMDS) radars. We assessed the maturity of all GMD critical technologies, as well as the design of the Capability Enhanced II (CE-II) configuration of the Exoatmospheric Kill Vehicle (EKV), which began emplacements in fiscal year 2009.

MDA's STSS is designed to acquire and track threat ballistic missiles in all stages of flight. The agency obtained the two demonstrator satellites in 2002 from the Air Force SBIRS Low program that was halted in 1999. MDA refurbished and launched the two STSS demonstration satellites on September 25, 2009. Over the next 2 years, the two satellites will take part in a series of tests to demonstrate their functionality and interoperability with the BMDS.

The Targets and Countermeasures program provides ballistic missiles to serve as targets in the MDA flight test program. The targets program involves multiple acquisitions—including a variety of existing and new missiles and countermeasures.

Aquarius is a satellite mission developed by NASA and the Space Agency of Argentina (Comisión Nacional de Actividades Espaciales) to investigate the links between the global water cycle, ocean circulation, and the climate. It will measure global sea surface salinity. The Aquarius science goals are to observe and model the processes that relate salinity variations to climatic changes in the global cycling of water and to understand how these variations influence the general ocean circulation. By measuring salinity globally for 3 years, Aquarius will provide a new view of the ocean's role in climate.

The GPM mission, a joint NASA and Japan Aerospace Exploration Agency project, seeks to improve the scientific understanding of the global water cycle and the accuracy of precipitation forecasts. GPM is composed of a core spacecraft carrying two main instruments: a dual-frequency precipitation radar and a GPM microwave imager. GPM builds on the work of the Tropical Rainfall Measuring Mission and will provide an opportunity to calibrate measurements of global precipitation.

The Glory project is a low-Earth orbit satellite that will contribute to the U.S. Climate Change Science Program.
The satellite has two principal science objectives: (1) collect data on the properties of aerosols and black carbon in the Earth's atmosphere and climate systems and (2) collect data on solar irradiance. The satellite has two main instruments—the Aerosol Polarimetry Sensor (APS) and the Total Irradiance Monitor (TIM)—as well as two cloud cameras. The TIM will allow NASA to have uninterrupted solar irradiance data by bridging the gap between NASA's Solar Radiation and Climate Experiment and the National Polar-orbiting Operational Environmental Satellite System. The Glory satellite failed to reach orbit when it was launched on March 4, 2011.

The GRAIL mission will seek to determine the structure of the lunar interior from crust to core, advance our understanding of the thermal evolution of the moon, and extend our knowledge gained from the moon to other terrestrial-type planets. GRAIL will achieve its science objectives by placing twin spacecraft in a low-altitude and nearly circular polar orbit. The two spacecraft will perform high-precision measurements between them. Analysis of changes in the spacecraft-to-spacecraft data caused by gravitational differences will provide direct and precise measurements of lunar gravity. GRAIL will ultimately provide a global, high-accuracy, high-resolution gravity map of the moon.

The JWST is a large, infrared-optimized space telescope that is designed to find the first galaxies that formed in the early universe. Its focus will include searching for first light, assembly of galaxies, origins of stars and planetary systems, and origins of the elements necessary for life. JWST's instruments will be designed to work primarily in the infrared range of the electromagnetic spectrum, with some capability in the visible range. JWST will have a large mirror, 6.5 meters (21.3 feet) in diameter, and a sunshield the size of a tennis court. Neither the mirror nor the sunshield will fit onto the rocket fully open, so both will fold up and open once JWST is in outer space. JWST will reside in an orbit about 1.5 million kilometers (1 million miles) from the Earth.

The Juno mission seeks to improve our understanding of the origin and evolution of Jupiter. Juno plans to achieve its scientific objectives by using a simple, solar-powered spacecraft to make global maps of the gravity, magnetic fields, and atmospheric conditions of Jupiter from a unique elliptical orbit. The spacecraft carries precise, highly sensitive radiometers, magnetometers, and gravity science systems. Juno is slated to make 32 orbits to sample Jupiter's full range of latitudes and longitudes. From its polar perspective, Juno is designed to combine local and remote sensing observations to explore the polar magnetosphere and determine what drives Jupiter's auroras.

The LDCM, a partnership between NASA and the U.S. Geological Survey, seeks to extend the ability to detect and quantitatively characterize changes on the global land surface at a scale where natural and man-made causes of change can be detected and differentiated. It is the successor mission to Landsat 7. The Landsat data series, begun in 1972, is the longest continuous record of changes in the Earth's surface as seen from space. Landsat data are a resource for people who work in agriculture, geology, forestry, regional planning, education, mapping, and global change research.

The MMS is made up of four identically instrumented spacecraft.
The mission will use the Earth's magnetosphere as a laboratory to study the microphysics of magnetic reconnection, energetic particle acceleration, and turbulence. Magnetic reconnection is the primary process by which energy is transferred from the solar wind to Earth's magnetosphere and is the physical process determining the size of a space weather storm. The spacecraft will fly in a pyramid formation, adjustable over a range of 10 to 400 kilometers, enabling them to capture the three-dimensional structure of the reconnection sites they encounter. The data from MMS will be used as a basis for predictive models of space weather in support of exploration.

The MSL is part of the Mars Exploration Program (MEP). The MEP seeks to understand whether Mars was, is, or can be a habitable world. To answer this question, the MSL project will investigate how geologic, climatic, and other processes have worked to shape Mars and its environment over time, as well as how they interact today. The MSL will continue this systematic exploration by placing a mobile science laboratory on the Mars surface to assess a local site as a potential habitat for life, past or present. The MSL is considered one of NASA's flagship projects and will be the most advanced rover yet sent to explore the surface of Mars.

The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) is a joint mission with the National Oceanic and Atmospheric Administration and the U.S. Air Force. The satellite will measure ozone, atmospheric and sea surface temperatures, land and ocean biological productivity, Earth radiation, and cloud and aerosol properties. The NPP mission has two objectives. First, NPP will provide a continuation of global weather observations following the Earth Observing System missions Terra and Aqua. Second, NPP will function as an operational satellite and will provide data until the first NPOESS satellite launches.

The RBSP mission will explore the sun's influence on the Earth and near-Earth space by studying the planet's radiation belts at various scales of space and time. This insight into the physical dynamics of the Earth's radiation belts will provide scientists with data with which to predict changes in this little understood region of space. Understanding the radiation belt environment has practical applications in the areas of spacecraft system design, mission planning, spacecraft operations, and astronaut safety. The two spacecraft will measure the particles, magnetic and electric fields, and waves that fill geospace and provide new knowledge on the dynamics and extremes of the radiation belts.

The TDRS replenishment system consists of in-orbit communication satellites stationed at geosynchronous altitude coupled with two ground stations located in New Mexico and Guam. The satellite network and ground stations provide mission services for near-Earth user satellites and orbiting vehicles. TDRS K and L are the 11th and 12th satellites, respectively, to be built for the TDRS replenishment system and will contribute to the existing network by providing high-bandwidth digital voice, video, and mission payload data, as well as health and safety data relay services to Earth-orbiting spacecraft, such as the International Space Station.

In addition to the contact named above, David B. Best, Assistant Director; Maricela Cherveny; Heather L. Jensen; Angie Nichols-Friedman; William K. Roberts; Roxanna T. Sun; Robert S. Swierczek; and Alyssa B. Weir made key contributions to this report.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010.
Best Practices: Increased Focus on Requirements and Oversight Needed to Improve DOD's Acquisition Environment and Weapon System Quality. GAO-08-294. Washington, D.C.: February 1, 2008.
Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD's Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007.
Best Practices: Stronger Practices Needed to Improve DOD Technology Transition Processes. GAO-06-883. Washington, D.C.: September 14, 2006.
Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 30, 2005.
Best Practices: Setting Requirements Differently Could Reduce Weapon Systems' Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003.
Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.
Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002.
Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.
Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000.
Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000.
Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999.
Defense Acquisition: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999.
Best Practices: Successful Application to Weapon Acquisitions Requires Changes in DOD's Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998.
Global Positioning System: Challenges in Sustaining and Upgrading Capabilities Persist. GAO-10-636. Washington, D.C.: September 15, 2010.
Polar-Orbiting Environmental Satellites: Agencies Must Act Quickly to Address Risks That Jeopardize the Continuity of Weather and Climate Data. GAO-10-558. Washington, D.C.: May 27, 2010.
Space Acquisitions: DOD Poised to Enhance Space Capabilities, but Persistent Challenges Remain in Developing Space Systems. GAO-10-447T. Washington, D.C.: March 10, 2010.
Space Acquisitions: Government and Industry Partners Face Substantial Challenges in Developing New DOD Space Systems. GAO-09-648T. Washington, D.C.: April 30, 2009.
Space Acquisitions: Uncertainties in the Evolved Expendable Launch Vehicle Program Pose Management and Oversight Challenges. GAO-08-1039. Washington, D.C.: September 26, 2008.
Defense Space Activities: National Security Space Strategy Needed to Guide Future DOD Space Efforts. GAO-08-431R. Washington, D.C.: March 27, 2008.
Space Acquisitions: Actions Needed to Expand and Sustain Use of Best Practices. GAO-07-730T. Washington, D.C.: April 19, 2007.
Defense Acquisitions: Assessment of Selected Major Weapon Programs. GAO-06-391. Washington, D.C.: March 31, 2006.
Space Acquisitions: DOD Needs to Take More Action to Address Unrealistic Initial Cost Estimates of Space Systems. GAO-07-96. Washington, D.C.: November 17, 2006.
Defense Space Activities: Management Actions Are Needed to Better Identify, Track, and Train Air Force Space Personnel. GAO-06-908. Washington, D.C.: September 21, 2006.
Space Acquisitions: Improvements Needed in Space Systems Acquisitions and Keys to Achieving Them. GAO-06-626T. Washington, D.C.: April 6, 2006.
Space Acquisitions: Stronger Development Practices and Investment Planning Needed to Address Continuing Problems. GAO-05-891T. Washington, D.C.: July 12, 2005.
Defense Acquisitions: Risks Posed by DOD's New Space Systems Acquisition Policy. GAO-04-379R. Washington, D.C.: January 29, 2004.
Defense Acquisitions: Improvements Needed in Space Systems Acquisition Management Policy. GAO-03-1073. Washington, D.C.: September 15, 2003.
Military Space Operations: Common Problems and Their Effects on Satellite and Related Acquisitions. GAO-03-825R. Washington, D.C.: June 2, 2003.
Defense Space Activities: Organizational Changes Initiated, but Further Management Actions Needed. GAO-03-379. Washington, D.C.: April 18, 2003.
Missile Defense: European Phased Adaptive Approach Acquisitions Face Synchronization, Transparency, and Accountability Challenges. GAO-11-179R. Washington, D.C.: December 21, 2010.
Defense Acquisitions: Missile Defense Program Instability Affects Reliability of Earned Value Management Data. GAO-10-676. Washington, D.C.: July 14, 2010.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-10-388SP. Washington, D.C.: March 30, 2010.
Missile Defense: DOD Needs to More Fully Assess Requirements and Establish Operational Units before Fielding New Capabilities. GAO-09-856. Washington, D.C.: September 16, 2009.
Ballistic Missile Defense: Actions Needed to Improve Planning and Information on Construction and Support Costs for Proposed European Sites. GAO-09-771. Washington, D.C.: August 6, 2009.
Defense Management: Key Challenges Should be Addressed When Considering Changes to Missile Defense Agency's Roles and Missions. GAO-09-466T. Washington, D.C.: March 26, 2009.
Defense Acquisitions: Production and Fielding of Missile Defense Components Continue with Less Testing and Validation Than Planned. GAO-09-338. Washington, D.C.: March 13, 2009.
Missile Defense: Actions Needed to Improve Planning and Cost Estimates for Long-Term Support of Ballistic Missile Defense. GAO-08-1068. Washington, D.C.: September 25, 2008.
Ballistic Missile Defense: Actions Needed to Improve Process for Identifying and Addressing Combatant Command Priorities. GAO-08-740. Washington, D.C.: July 31, 2008.
Defense Acquisitions: Progress Made in Fielding Missile Defense, but Program Is Short of Meeting Goals. GAO-08-448. Washington, D.C.: March 14, 2008.
Defense Acquisitions: Missile Defense Agency's Flexibility Reduces Transparency of Program Cost. GAO-07-799T. Washington, D.C.: April 30, 2007.
Quality is key to success in U.S. space and missile defense programs, but quality problems exist that have endangered entire missions, along with less visible problems that have led to unnecessary repair, scrap, rework, and work stoppages; long delays; and millions of dollars in cost growth. For space and missile defense acquisitions, GAO was asked to examine quality problems related to parts and manufacturing processes and materials across DOD and NASA. GAO assessed (1) the extent to which parts quality problems affect those agencies' space and missile defense programs; (2) causes of any problems; and (3) initiatives to prevent, detect, and mitigate parts quality problems. To accomplish this, GAO reviewed all 21 systems with mature designs and projected high costs: 5 DOD satellite systems, 4 DOD missile defense systems, and 12 NASA systems. GAO reviewed existing and planned efforts for preventing, detecting, and mitigating parts quality problems. Further, GAO reviewed regulations, directives, instructions, policies, and several studies, and interviewed senior headquarters and contractor officials.

Parts quality problems affected all 21 programs GAO reviewed at the Department of Defense (DOD) and National Aeronautics and Space Administration (NASA). In some cases they contributed to significant cost overruns and schedule delays. In most cases, problems were associated with electronic parts rather than mechanical parts or materials. In several cases, parts problems discovered late in the development cycle had more significant cost and schedule consequences. For example, one problem cost a program at least $250 million and caused about a 2-year launch delay. The causes of the parts quality problems GAO identified were poor workmanship, undocumented and untested manufacturing processes, poor control of those processes and materials, failure to prevent contamination, poor part design, design complexity, and inattention to manufacturing risks. Ineffective supplier management also resulted in concerns about whether subcontractors and contractors met program requirements. Most programs GAO reviewed began before the agencies adopted new policies related to parts quality problems, and newer post-policy programs were not mature enough for parts problems to be apparent. Agencies and industry are now collecting and sharing information about potential problems and developing guidance and criteria for testing parts, managing subcontractors, and mitigating problems, but it is too early to determine how much such collaborations have reduced parts quality problems, since such data have not historically been collected. New efforts are collecting data on anomalies, but no mechanism exists to use those data to assess improvements. Significant barriers hinder efforts to address parts quality problems, such as broader acquisition management problems, workforce gaps, diffuse leadership in the national security space community, the government's decreasing influence on the electronic parts market, and an increase in counterfeiting of electronic parts. Given this, success will likely be limited without continued assessments of what works well and what more must be done.

DOD and NASA should implement a mechanism for periodic assessment of the condition of parts quality problems in major space and missile defense programs, with periodic reporting to Congress. DOD partially agreed with the recommendation and NASA agreed. DOD agreed to annually address all quality issues, to include parts quality.
DOT’s proposal to reauthorize surface transportation included a 6-year, $600 million Access to Jobs program to support new transportation services for low-income people seeking jobs. The funding levels and other program details of such an initiative may change as the Congress completes final action in 1998 to reauthorize surface transportation programs. The House and Senate reauthorization proposals would authorize appropriations of $900 million over 6 years for similar programs to be administered by DOT. The Senate proposal would also authorize appropriations of an additional $600 million (bringing the total to $1.5 billion) over the same period for a reverse commute program that the Department could use to support its welfare-to-work initiatives. While these programs have not been established, several federal departments currently provide states and localities with federal funds to support transportation welfare reform initiatives. The Department of Health and Human Services (HHS) administers the Temporary Assistance for Needy Families (TANF) program—a $16.5 billion program of annual block grants to the states that replaced Aid to Families With Dependent Children (AFDC). The states may use TANF funds to provide transportation assistance to people on or moving off of public assistance. However, the states generally may not use TANF funds to provide assistance to a family for more than 60 months and must require parents to work within 24 months of receiving assistance. The Balanced Budget Act of 1997 established a 2-year, $3 billion Welfare-to-Work program administered by the Department of Labor (DOL). Among other things, this grant program provides funding for job placement, on-the-job training, and support services (including transportation) for those who are the most difficult to move from welfare to work. The states receive about 75 percent of the funds on the basis of a formula, while local governments, private industry councils, and private, community-based organizations receive most of the remaining 25 percent on a competitive basis. Although not specifically designed to address welfare-to-work issues, HUD’s $17 million Bridges to Work program provides funds to support transportation, job placement, and counseling services for a small number of low-income people living in the central cities of Baltimore, Chicago, Denver, Milwaukee, and St. Louis. HUD provided an $8 million grant for the program in fiscal year 1996, while the Ford, Rockefeller, and MacArthur Foundations provided $6 million and local public and private organizations contributed the remaining $3 million. The demonstration program began in late 1996 and will be completed in 2000. Access to transportation is generally recognized by social service and transportation professionals as a prerequisite for work and for welfare reform. According to the Census Bureau, in 1992, welfare recipients were disproportionately concentrated in inner cities—almost half of all people who received AFDC or state assistance lived in central cities, compared with 30 percent of the U.S. population. However, as cited in the 1998 report entitled Welfare Reform and Access to Jobs in Boston (the 1998 Boston study), national trends since 1970 show that most new jobs have been created in the suburbs rather than in the inner cities. In addition, this study indicated that about 70 percent of the jobs in manufacturing, retailing, and wholesaling—sectors employing large numbers of entry-level workers—were located in the suburbs. 
Many of these newly created entry-level suburban jobs should be well suited to people moving from welfare to work, since many welfare recipients lack both higher education and training. However, most welfare recipients seeking employment live in central cities that are located away from these suburban jobs. Thus, the less-educated, urban poor need either a car or public transportation to reach new suburban employment centers. However, both modes of transportation have posed challenges to welfare recipients. The 1998 Boston study and a 1995 GAO study found that the lack of transportation is one of the major barriers that prevent welfare recipients from obtaining employment. A significant factor limiting welfare recipients' job prospects has been their lack of an automobile. According to a 1997 HHS study, less than 6 percent of welfare families reported having a car in 1995, and the average reported value of the car was $620. According to DOT's Bureau of Transportation Statistics (BTS), these figures are probably low because previous welfare eligibility rules limiting the value of assets may have led some recipients to conceal car ownership. Under AFDC, families that received assistance were not allowed to accumulate more than $1,000 in resources such as bank accounts and real estate. This limit excluded the value of certain assets, including vehicles up to $1,500 in value. However, a 1997 study of welfare mothers found that car ownership ranged from 20 to 40 percent.

Without a car, welfare recipients must rely on existing public transportation systems to move them from their inner-city homes to suburban jobs. However, recent studies show important gaps between existing transit system routes and the location of entry-level jobs. For example, the 1998 study of Boston's welfare recipients found that while 98 percent of them lived within one-quarter mile of a bus route or transit station, just 32 percent of potential employers (those companies located in high-growth areas for entry-level employment) were within one-quarter mile of public transit. The study noted that it had been presumed that welfare recipients living in or near a central city with a well-developed transit system could rely on public transit to get to jobs. However, the study found that Boston's transit system was inadequate because (1) many high-growth areas for entry-level employment were in the outer suburbs, beyond existing transit service; (2) some areas were served by commuter rail, which was expensive and in most cases did not provide direct access to employment sites; and (3) when transit was available, the trips took too long or required several transfers, or transit schedules and hours did not match work schedules, such as those for weekend or evening work.

Similar findings were reported in a July 1997 study of the Cleveland-Akron metropolitan area. The study found that since inner-city welfare recipients did not own cars, they had to rely on public transit systems to get to suburban jobs. The study found that welfare recipients traveled by bus at times outside the normal rush-hour schedule and often had significant walks from bus stops to their final employment destinations. The study concluded that these transportation barriers would be difficult to overcome using traditional mass transit since the locations of over one-half of the job openings were served by transit authorities other than the one serving inner-city Cleveland residents.
The study further indicated that even within areas where employers were concentrated, such as in industrial parks, employers' locations were still too dispersed to be well served by mass transit systems. According to BTS, transportation for welfare mothers is particularly challenging because they do not own cars and must make more trips each day to accommodate their child care and domestic responsibilities. According to 1997 Census and Urban Institute information, most adult welfare recipients were single mothers, about half of these mothers had children under school age, and more than three-fourths had a high school diploma or less education. To reach the entry-level jobs located in the suburbs without access to a car, they would have to make a series of public transit trips to drop children off at child care or schools, go to work, pick their children up, and shop for groceries. According to BTS, traditional transit service is unlikely to meet the needs of many welfare mothers, given their need to take complex trips.

For those who do not live in a city, transportation to jobs is also important. In 1995, the National Transit Resource Center, a federally funded technical assistance resource, found that about 60 million rural Americans were underserved or unserved by public transportation. Forty-one percent of rural Americans lived in counties that lacked any public transportation services, and an additional 25 percent of rural residents lived in areas with below-average public transit service. According to the Community Transportation Association of America—a nationwide network of public and private transportation providers, local human services agencies, state and federal officials, transit associations, and individuals—the rural poor have less access to public transportation than their urban counterparts and must travel greater distances to commute to work, obtain essential services, and make needed purchases. In addition, members of low-income rural groups generally own cars that are not maintained well enough for long-distance commutes.

Both DOT and HUD have implemented initiatives to support transportation strategies for moving welfare recipients off federal assistance and into full-time employment. Primarily through FTA's demonstration programs and seminars and HUD's Bridges to Work program, these agencies have provided limited funding for research and demonstration efforts aimed at helping the poor move from welfare to work. While the number of welfare recipients moved into jobs has been low, the programs have identified programmatic and demographic factors that local transportation and welfare officials should consider to ensure that the most effective transportation strategies are employed to support welfare reform. According to an FTA official, the agency is supporting welfare-to-work initiatives by funding demonstration projects, working with state and local partners to encourage the development of collaborative transportation plans, providing states and localities with technical assistance, and developing a program that would increase the financial resources available for welfare initiatives. Of the estimated $5 million that FTA provided for welfare initiatives from 1993 through 1998, the largest share went to its JOBLINKS demonstration program.
JOBLINKS, a $3.5 million demonstration program administered by the Community Transportation Association of America, began in 1995 to fund projects designed to help people obtain jobs or attend employment training and to evaluate which types of transportation services are the most effective in helping welfare recipients get to jobs. As of March 1998, JOBLINKS had funded 16 projects located in urban and rural areas of 12 states. Ten projects have been completed and six are ongoing. While the projects' objectives are to help people obtain jobs or attend employment training, the projects' results have differed. For example, a JOBLINKS project in Louisville, Kentucky, was designed to increase by 25 percent the number of inner-city residents hired at an industrial park. The JOBLINKS project established an express bus from the inner city to the industrial park, thereby reducing a 2-hour commute for inner-city residents to 45 minutes. Although an April 1997 evaluation of the project did not indicate whether the project had met the 25-percent new-hire goal, it stated that 10 percent of the businesses in the industrial park were able to hire inner-city employees as a result of the express service. Another JOBLINKS project—in Fresno, California—was established to provide transportation services to employment training centers and thereby reduce dropout rates and increase the number of individuals who found jobs. The April 1997 evaluation of the project found that of the 269 participants in a job training program, 20 had completed the program and 3 had found jobs.

FTA has also helped state and local transportation agencies develop plans for addressing the transportation needs of their welfare recipients. In 1997, FTA and the Federal Highway Administration provided the National Governors' Association (NGA) with $330,000 to develop plans that identify the issues, costs, and benefits associated with bringing together the transportation components of various social service programs. In January 1997, NGA solicited grant applications, and 24 states and one territory applied for grants. All 25 applicants received grants and are participating in the demonstration project; final plans are expected by September 1998. FTA has also sponsored regional seminars that focus on the transportation issues involved in welfare reform and the actions that states and local agencies need to take to address these issues. The seminars are intended to encourage the states to develop transportation strategies to support their welfare reform programs and to help transportation and human services agencies work together to develop plans that link transportation, jobs, and support services. In addition, FTA helps fund the National Transit Resource Center, which provides technical assistance to communities. For example, the Resource Center developed an Internet site that provides up-to-date information on federal programs, transportation projects, and best practices.

HUD's Bridges to Work program is a 4-year research demonstration program that began in late 1996 with $17 million in public and private funding. This program is intended to link low-income, job-ready, inner-city residents with suburban jobs by providing them with job placement, transportation, and support services (such as counseling). The program was conceived by Public/Private Ventures, a nonprofit research and program development organization located in Philadelphia.
Under the program, a total of about 3,000 participants in five cities—Baltimore, Chicago, Denver, Milwaukee, and St. Louis—will receive employment, transportation, and support services. According to HUD, it became involved in welfare reform because a large portion of its clients are low-income or disadvantaged persons who rely upon welfare benefits. Several HUD programs, according to Bridges to Work program documents, are intended to address the geographic mismatch between where the jobless live and where employment centers operate. Bridges to Work researchers identified three solutions to this mismatch: (1) disperse urban residents by moving them closer to suburban jobs, (2) develop more jobs in the urban community, or (3) bridge the geographic gap by providing urban residents with the mobility to reach suburban jobs. HUD's Bridges to Work program is intended to address the third solution. It was designed to determine whether the geographic separation of jobs and low-income persons could be overcome by the coordinated provision of job, transportation, and support services. The program's goal is to place 3,000 low-income people in jobs during the 4 years of the program. Through March 1998, the Bridges to Work program had placed 429 low-income, urban residents in suburban jobs. According to the project's sponsors, the number of placements has been low in part because the program accepts only job-ready applicants—a criterion that limits the number of eligible participants when unemployment rates are low and job-ready people are already employed.

A Bridges to Work participant must meet the following criteria: He or she must be at least 18 years old, have a family income of 80 percent or less of the median family income for the metropolitan area (e.g., $29,350 for a family of one in Milwaukee), live in the designated urban area, and be able to work in the designated suburban area. In addition, no more than one-third of the participants can be former AFDC recipients. In the pilot phase of the program, over 70 percent of the first 239 placements were in jobs paying between $6.00 and $7.99 per hour, and over 76 percent of these placements involved one-way commutes of between 31 and 60 minutes.

Bridges to Work officials have found that the five demonstration sites have encountered two key challenges. First, each site needed to establish a collaborative network consisting of transportation, employment, and social services agencies working together with employers to ensure the successful placement of applicants. Baltimore's network, for example, includes the state transportation agency, the area's Metropolitan Planning Organization, employment service providers, the city's employment office, a community-based organization, the Private Industry Council, and the Baltimore-Washington International Business Partnership. Second, recruiting job-ready participants has been difficult. In the current healthy economy, many potential job-ready individuals can find their own jobs closer to home because jobs are plentiful and unemployment is low. The Bridges to Work project's co-director noted that, in some instances, the sites did not identify an adequate pool of job-ready individuals and therefore needed to change their recruiting and marketing strategies to better locate potential participants for the program.
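As an illustration only, the individual eligibility criteria described above can be expressed as a simple check. The following Python sketch is hypothetical and not drawn from program materials; only the age, income, and residency criteria and the $29,350 Milwaukee figure come from the report, and the function name and inputs are assumptions.

def is_eligible(age, family_income, area_median_income,
                lives_in_designated_urban_area, can_work_in_designated_suburban_area):
    """Return True if an applicant meets the individual Bridges to Work criteria.
    Note: the report's separate rule that no more than one-third of participants
    may be former AFDC recipients applies at the program level, not per applicant."""
    return (age >= 18
            and family_income <= 0.80 * area_median_income
            and lives_in_designated_urban_area
            and can_work_in_designated_suburban_area)

# Example: a Milwaukee applicant. The report's $29,350 figure is 80 percent of the
# area median for a family of one, implying a median of roughly $36,688.
print(is_eligible(25, 20_000, 36_688, True, True))  # True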
FTA's JOBLINKS program, HUD's Bridges to Work program, individual cities' projects, and past research have reported common strategies for designing and implementing a transportation program that supports welfare to work. Preliminary results show that the following factors appear to support a program's success: (1) collaboration among transportation, employment, and other human services organizations; (2) an understanding of local job markets; and (3) flexible transportation systems.

According to the 1997 JOBLINKS evaluation report and Bridges to Work project managers, welfare-to-work programs must establish a collaborative network among transportation, employment, and other human services organizations to ensure a successful program. Officials noted that for welfare recipients and the poor to move from welfare to work, they need employers' support, transportation services, and human services organizations' support to find child care and resolve workplace conflicts. A Bridges to Work director in St. Louis noted that the area's metropolitan planning organization was motivated to participate in the program because prior welfare-to-work attempts focused on transportation alone, rather than providing participants with the job placement and counseling services needed to find and retain jobs. In addition, the JOBLINKS program concluded in a 1997 evaluation of its 10 projects that coordination among transportation providers, human services agencies, and employers was an important element of successful welfare-to-work programs. Studies conducted in the late 1960s to early 1970s support this experience. For example, in the late 1960s, the Los Angeles Transportation-Employment Project found that improved public transportation alone was not sufficient to increase employment opportunities; other factors, such as the shortage of suitable jobs, obsolete skills, or inadequate education, also had to be addressed.

According to the 1997 JOBLINKS evaluation report and Bridges to Work officials, analyses of the local labor and job markets are essential before local welfare-to-work sponsors select transportation strategies to serve their projects' participants. According to officials, these market analyses should first identify which employers are willing to participate in the program and whether their locations provide program participants with reasonable commutes. Next, each employer's needs, such as shift times and the willingness to offer "living wages," must be evaluated. For example, a Chicago official said that requiring participants to commute 2 hours each way is not reasonable, particularly for a low-wage job. Milwaukee's Bridges to Work officials developed a bus schedule to meet the 12-hour shift times of a large employer participating in the program.

JOBLINKS' and Bridges to Work's preliminary experiences also show that flexible transportation systems are needed to address employers' locations and shift times. As explained earlier, many studies, including BTS' study of Boston, showed that lower-income residents could not rely on mass transit to go from the inner city to suburban employment in a timely manner. Mass transit systems ran infrequently to the suburbs and at night, and often did not stop close to employers. The Denver Bridges to Work site illustrates the importance of a flexible transportation strategy. Denver originally extended the hours of service and added stops to its existing bus system to address a variety of shift times.
However, Denver officials soon found that the bus system could not address all the employers' and employees' needs and added vanpools and shuttles.

Under DOT's Access to Jobs proposal, as well as the proposals passed by the House of Representatives and the United States Senate, DOT's financial support of welfare-to-work initiatives would increase substantially. The attention given to the transportation component of welfare reform would increase dramatically as well. However, the Access to Jobs program, as currently defined by DOT, does not contain key information about the program's objectives and expected outcomes or explain how the results from JOBLINKS and other federal welfare-to-work programs will be reflected in the program's operation. Accordingly, it is difficult to evaluate how funds provided for an Access to Jobs program would effectively support national welfare reform goals. Details may not be available until after a program is authorized and DOT begins implementation. DOT's proposal and related documents generally indicate what the Access to Jobs program is to accomplish. The program would provide grants to the states, local governments, and private, nonprofit organizations to help finance transportation services for low-income people seeking jobs and job-related services. The program would provide localities with flexibility in determining the transportation services and providers most appropriate for their areas. Among other things, grant recipients could use the funds to pay for the capital and operating costs of transportation services for the poor, promote employer-provided transportation, or integrate transportation and welfare planning activities. However, the lack of specific information on the program's purpose, objectives, performance criteria, and evaluation approach makes it difficult to assess how the program would improve mobility for low-income workers and contribute to overall welfare reform objectives.

The Government Performance and Results Act of 1993 (Results Act), enacted to improve the effectiveness of and accountability for federal programs, requires agencies to identify annual performance goals and measures for their program activities. DOT's fiscal year 1999 performance plan under the Results Act showcases the Access to Jobs program under DOT's goals to improve mobility, but the plan does not define performance goals for measuring the program's success. In contrast, the plan establishes benchmarks for other mobility goals, such as the average age of bus and rail vehicles or the percentage of facilities and vehicles that meet the requirements of the Americans With Disabilities Act. Since an Access to Jobs program is intended to move people to jobs, rather than build and sustain public transportation systems, evaluation criteria that correspond to this goal would be needed.

In addition, DOT's Access to Jobs program, as currently defined, does not fully describe how lessons learned through the JOBLINKS and Bridges to Work programs would be incorporated into an Access to Jobs program. For example, although the proposal would require DOT to consider grant applicants' coordination of transportation and human resource services planning, the proposal would not specifically require grant recipients to carry out such coordination. However, the proposal would allow other federal transportation-eligible funds to be used to meet the program's matching requirement.
According to DOT officials, this provision will help promote coordination between transportation and social service funding. In addition, the proposed program does not specify that grant recipients evaluate the local job and labor markets before selecting the optimal transportation services to provide welfare recipients. Bridges to Work officials expressed concern that FTA would provide Access to Jobs grants primarily to local transportation agencies that may be unwilling to support nontraditional transportation services. For example, in Denver, traditional mass transit systems did not provide sufficient flexibility to transport Bridges to Work participants to their jobs. Accordingly, program officials had to add private van pools and shuttle services to take participants from public transit stops to their new jobs. FTA’s challenge in efficiently managing the Access to Jobs program would be to go beyond its customary mass transit community and work with different local groups (employment, community services) to support non-mass-transit solutions to welfare-to-work mobility problems. Finally, under its proposal, DOT would be required to coordinate its Access to Jobs program with other federal agencies’ efforts. This requirement is particularly important to ensure that FTA’s welfare reform funds are working with, rather than duplicating, those of other federal agencies. HHS and DOL have significant levels of funding that the states and localities can use for transportation services in their welfare-to-work programs. In addition, smaller programs, such as HUD’s Bridges to Work program, have been used to transport welfare recipients to jobs. For example, in Chicago, a local organization has received $1.6 million through the Bridges to Work program; another local organization has applied for a $5.4 million DOL grant to assist welfare recipients in paying for their transportation to work; and these and other local organizations would probably be eligible for grants under the proposed Access to Jobs program. It is therefore important that DOT’s new program ensure that grant recipients are effectively applying and coordinating their federal welfare-to-work grants to successfully move people from welfare to work. Welfare and transportation experts agree that current welfare recipients need many supporting services, such as transportation, job counseling, and child care, to successfully make the transition from welfare to work. An Access to Jobs program would authorize significant funding ($900 million) to support the transportation element of welfare reform. However, the program’s success will depend in part on how FTA defines the program’s specific objectives, performance criteria, and measurable goals and the extent to which the program balances two national needs: the need to provide a supportive framework for helping welfare recipients and the need to oversee federal dollars so that the program does not duplicate other federal and state welfare programs. In addition, a successful Access to Jobs program should build on lessons learned from existing welfare-to-work programs. These lessons learned focus on the need to coordinate transportation strategies with other local job placement and social services, the importance of assessing the local labor and employer markets, and the inclusion of many transportation strategies (not just existing mass transit systems) in implementing welfare reform. 
If the Congress authorizes an Access to Jobs program, we recommend that the Secretary of Transportation (1) establish specific objectives, performance criteria, and measurable goals for the program when the Department prepares its Fiscal Year 2000 Performance Plan; (2) require that grant recipients coordinate transportation strategies with local job placement and other social service agencies; and (3) work with other federal agencies, such as the departments of Health and Human Services, Labor, and Housing and Urban Development, to coordinate welfare-to-work activities and to ensure that program funds complement and do not duplicate other welfare-to-work funds available for transportation services. To obtain information about the need for transportation in welfare reform, we interviewed FTA, HUD, Community Transportation Association of America, Public/Private Ventures, and National Governors’ Association officials. These officials also provided insights into identifying transportation strategies that programs like FTA’s JOBLINKS, HUD’s Bridges to Work demonstration project, and the NGA’s Transportation Coordination Demonstration project have used to help low-income people secure jobs. In addition, we interviewed program staff at each of the five Bridges to Work demonstration sites and visited one of the sites—the suburban office of Chicago’s Bridges to Work program. We examined the Bridges to Work program’s documentation, preliminary reports, brochures on individual programs, and other descriptive materials. We also reviewed the results of two studies that FTA’s Coordinator for Welfare-to-Work activities identified as significant studies on transportation and welfare reform—BTS’ January 1998 report entitled Welfare Reform and Access to Jobs in Boston and the July 1997 report entitled Housing, Transportation, and Access to Suburban Jobs by Welfare Recipients in the Cleveland Area. To obtain information on the DOL’s grant applications, we spoke with transportation officials in Chicago and Los Angeles. Finally, we reviewed legislative proposals and spoke to transportation and federal officials to obtain information about FTA’s proposed Access to Jobs program. We performed our review from December 1997 through May 1998 in accordance with generally accepted government auditing standards. We provided a draft of this report to DOT and HUD for review and comment. We met with DOT officials from the Office of the Secretary and the Federal Transit Administration’s Coordinator for Welfare-to-Work activities to discuss the Department’s comments on the draft report. DOT agreed with our recommendations and stated that it has begun to take actions to implement our recommendations related to coordinating with local and federal agencies providing welfare-to-work services. First, DOT provided a May 4, 1998, memorandum signed by the Secretaries of Transportation, Health and Human Services, and Labor that encourages coordination among transportation, workforce development, and social service providers. Second, DOT provided examples of how it has begun to encourage collaboration among state and local transit and social service providers and how provisions in the Access to Jobs proposal would foster collaboration further. We have included information in the report on DOT’s collaboration efforts and the provisions of the Access to Jobs proposal that will foster collaboration. 
Finally, DOT disagreed with our assessment that an Access to Jobs program will require the Federal Transit Administration to undergo a cultural change—a change whereby the agency may have to accept nontraditional transportation solutions to address barriers to welfare-to-work programs. DOT noted that innovative or nontraditional transportation strategies are not the only effective strategies for helping welfare recipients; traditional mass transit systems may also provide welfare recipients with the means to reach employment centers. In addition, DOT stated that as a result of its collaborative efforts on welfare reform with local and other federal agencies, it believes that it has been a cultural change leader.

First, we agree that states and localities should not routinely exclude traditional bus and rail transit systems as one approach to helping welfare recipients get to jobs. Nonetheless, the DOT and HUD studies cited in this report consistently emphasized the limitations of existing mass transit systems as the transportation solution to welfare-to-work barriers. These systems do not adequately serve job-rich suburban markets that inner-city welfare recipients must reach to find employment.

Second, we acknowledge the initial work that the Federal Transit Administration has undertaken to prepare state and local transportation officials for their new welfare-to-work responsibilities and have included examples of this effort in this report. However, the Access to Jobs program would represent a significant federal commitment. Accordingly, a change in the traditional mass transit culture at the Federal Transit Administration will still be needed to ensure that Access to Jobs funds address innovative and nontraditional transportation solutions to welfare-to-work problems. DOT had additional technical comments that we incorporated throughout the report, where appropriate.

In its comments, HUD stated that we should expand our recommendations to the Secretary of Transportation to include HUD's suggested changes to the Access to Jobs program. (See app. I.) These suggested changes would allow Access to Jobs grant recipients to (1) use program funds for planning and coordination purposes and (2) apply "soft expenditures" (such as the value of staff reassigned to the program) to fund their required local match. In addition, HUD suggested that it be included among the federal agencies with which DOT must coordinate program implementation. HUD's first two suggestions may be important for the Congress to consider as it completes programmatic and funding decisions for the Access to Jobs program through its reauthorization of surface transportation programs. However, we have not included these as recommendations in our report because they address policy issues that were not part of our review's scope. We agree with HUD's last suggested change and have modified our recommendations to include HUD as one of the federal agencies that DOT should work with when it begins implementing the Access to Jobs program. HUD also had minor technical comments that we incorporated throughout the report, where appropriate.

We will send copies of this report to interested congressional committees, the Secretary of Transportation, the Secretary of Housing and Urban Development, and the Administrator of the Federal Transit Administration. We will also make copies available to others on request. If you have any questions about this report, please call me at (202) 512-2834.
Major contributors to this report were Ruthann Balciunas, Joseph Christoff, Catherine Colwell, Gail Marnik, and Phyllis F. Scheinberg.
Pursuant to a congressional request, GAO reviewed: (1) whether current studies and research demonstrate the importance of transportation services in implementing welfare reform; (2) the preliminary results of the Federal Transit Administration's (FTA) current welfare-to-work programs and the Department of Housing and Urban Development's (HUD) Bridges to Work program; and (3) how an Access to Jobs program would support welfare reform. GAO noted that: (1) transportation and welfare studies show that without adequate transportation, welfare recipients face significant barriers in trying to move from welfare to work; (2) existing public transportation systems cannot always bridge the gap between where the poor live and where jobs are located; (3) the majority of entry-level jobs that the welfare recipients and the poor would be likely to fill are located in suburbs that have limited or no accessibility through existing public transportation systems; (4) FTA has funded welfare-to-work demonstration projects, planning grants, and regional seminars, while HUD's Bridges to Work research program is in the early stages of placing inner-city participants in suburban jobs; (5) although these programs began recently and have limited funding, they have identified programmatic and demographic factors that state and local officials should consider when they select the best transportation strategies for their welfare-to-work programs; (6) these factors include: (a) collaboration among transportation providers and employment and human services organizations; (b) analyses of local labor markets to help design transportation strategies that link employees to specific jobs; and (c) flexible transportation strategies that may not always rely on existing mass transit systems; (7) if authorized, an Access to Jobs program would bring additional resources and attention to the transportation element of welfare reform; (8) however, limited information about the program's objectives or expected outcomes makes it difficult to evaluate how the program would improve mobility for low-income workers or support national welfare-to-work goals; (9) the new program may require FTA and local transit agencies to undergo a cultural change whereby they are willing to accept nontraditional approaches for addressing welfare-to-work barriers; (10) the agency must ensure that the millions of dollars it contributes to welfare reform support rather than duplicate the transportation funds provided through other federal and state agencies; and (11) while FTA has begun to consider some of these important issues, addressing all of them before the program is established would help ensure that the transportation funds provided for an Access to Jobs program would be used efficiently and effectively in support of national welfare goals.
As I previously stated, and as we have reported for several years, DOD faces a range of financial management and related business process challenges that are complex, long-standing, pervasive, and deeply rooted in virtually all business operations throughout the department. As I recently testified and as discussed in our latest financial audit report, DOD's financial management deficiencies, taken together, continue to represent the single largest obstacle to achieving an unqualified opinion on the U.S. government's consolidated financial statements. To date, none of the military services has passed the test of an independent financial audit because of pervasive weaknesses in internal control and processes and fundamentally flawed business systems.

In identifying improved financial performance as one of its five governmentwide initiatives, the President's Management Agenda recognized that obtaining a clean (unqualified) financial audit opinion is a basic prescription for any well-managed organization. At the same time, it recognized that without sound internal control and accurate and timely financial and performance information, it is not possible to accomplish the President's agenda and secure the best performance and highest measure of accountability for the American people. The Joint Financial Management Improvement Program (JFMIP) principals have defined certain measures, in addition to receiving an unqualified financial statement audit opinion, for achieving financial management success. These additional measures include (1) being able to routinely provide timely, accurate, and useful financial and performance information, (2) having no material internal control weaknesses or material noncompliance with laws and regulations, and (3) meeting the requirements of the Federal Financial Management Improvement Act of 1996 (FFMIA). Unfortunately, DOD does not meet any of these conditions. For example, for fiscal year 2003, the DOD Inspector General issued a disclaimer of opinion on DOD's financial statements, citing 11 material weaknesses in internal control and noncompliance with FFMIA requirements.

Recent audits and investigations by GAO and DOD auditors continue to confirm the existence of pervasive weaknesses in DOD's financial management and related business processes and systems. These problems have (1) resulted in a lack of reliable information needed to make sound decisions and report on the status of DOD activities, including accountability of assets, through financial and other reports to Congress and DOD decision makers, (2) hindered its operational efficiency, (3) adversely affected mission performance, and (4) left the department vulnerable to fraud, waste, and abuse. For example, 450 of the 481 mobilized Army National Guard soldiers from six GAO case study Special Forces and Military Police units had at least one pay problem associated with their mobilization. DOD's inability to provide timely and accurate payments to these soldiers, many of whom risked their lives in recent Iraq or Afghanistan missions, distracted them from their missions, imposed financial hardships on the soldiers and their families, and has had a negative impact on retention. (GAO-04-89, Nov. 13, 2003)

DOD incurred substantial logistical support problems as a result of weak distribution and accountability processes and controls over supplies and equipment shipments in support of Operation Iraqi Freedom activities, similar to those encountered during the prior Gulf War.
These weaknesses resulted in (1) supply shortages, (2) backlogs of materials delivered in theater but not delivered to the requesting activity, (3) a discrepancy of $1.2 billion between the amount of materiel shipped and that acknowledged by the activity as received, (4) cannibalization of vehicles, and (5) duplicate supply requisitions. (GAO-04-305R, Dec. 18, 2003)

Inadequate asset visibility and accountability resulted in DOD selling new Joint Service Lightweight Integrated Suit Technology (JSLIST)—the current chemical and biological protective garment used by our military forces—on the internet for $3 each (coat and trousers) while at the same time buying them for over $200 each. DOD has acknowledged that these garments should have been restricted to DOD use only and therefore should not have been available to the public. (GAO-02-873T, June 25, 2002)

Inadequate asset accountability also resulted in DOD's inability to locate and remove over 250,000 defective Battle Dress Overgarments (BDOs)—the predecessor of JSLIST—from its inventory. Subsequently, we found that DOD had sold many of these defective suits to the public, including 379 that we purchased in an undercover operation. In addition, DOD may have issued over 4,700 of the defective BDO suits to local law enforcement agencies. Although local law enforcement agencies are most likely to be the first responders to a terrorist attack, DOD failed to inform these agencies that using these BDO suits could result in death or serious injury. (GAO-04-15NI, Nov. 19, 2003)

Tens of millions of dollars are not being collected each year by military treatment facilities from third-party insurers because key information required to effectively bill and collect from third-party insurers is often not properly collected, recorded, or used by the military treatment facilities. (GAO-04-322R, Feb. 20, 2004)

Our analysis of data on more than 50,000 maintenance work orders opened during the deployments of six battle groups indicated that about 29,000 orders (58 percent) could not be completed because the needed repair parts were not available on board ship. This condition was a result of inaccurate ship configuration records and incomplete, outdated, or erroneous historical parts demand data. Such problems not only have a detrimental impact on mission readiness but may also increase operational costs due to delays in repairing equipment and holding unneeded spare parts inventory. (GAO-03-887, Aug. 29, 2003)

DOD sold excess biological laboratory equipment, including a biological safety cabinet, a bacteriological incubator, a centrifuge, and other items that could be used to produce biological warfare agents. Using a fictitious company and fictitious individual identities, we were able to purchase a large number of new and usable equipment items over the Internet from DOD. Although the production of biological warfare agents requires a high degree of expertise, the ease with which these items were obtained through public sales increases the risk that terrorists could obtain and use them to produce biological agents that could be used against the United States. (GAO-04-81TNI, Oct. 7, 2003)

Based on statistical sampling, we estimated that 72 percent of the over 68,000 premium class airline tickets DOD purchased for fiscal years 2001 and 2002 were not properly authorized and that 73 percent were not properly justified.
During fiscal years 2001 and 2002, DOD spent almost $124 million on premium class tickets that included at least one leg in premium class—usually business class. Because each premium class ticket cost the government up to thousands of dollars more than a coach class ticket, unauthorized premium class travel resulted in millions of dollars of unnecessary costs being incurred annually. (GAO-04-229T, Nov. 6, 2003)

Some DOD contractors have been abusing the federal tax system with little or no consequence, and DOD is not collecting as much in unpaid taxes as it could. Under the Debt Collection Improvement Act of 1996, DOD is responsible—working with the Treasury Department—for offsetting payments made to contractors to collect funds owed, such as unpaid federal taxes. However, we found that DOD had collected only $687,000 of unpaid taxes over the last 6 years. We estimated that at least $100 million could be collected annually from DOD contractors through effective implementation of levy and debt collection programs. (GAO-04-95, Feb. 12, 2004)

DOD continues to lack a complete inventory of contaminated real property sites, which affects not only DOD's ability to assess the potential environmental impact and to plan, estimate costs, and fund cleanup activities, as appropriate, but also its ability to minimize the risk of civilian exposure to unexploded ordnance. The risk of such exposure is expected to grow with the increase in development and recreational activities on land once used by the military for munitions-related activities (e.g., live fire testing and training). (GAO-04-147, Dec. 19, 2003)

DOD's Space and Naval Warfare Systems Command working capital fund activities used accounting entries to manipulate the amount of customer orders for the sole purpose of reducing the actual dollar amounts reported to Congress for work that had been ordered and funded (obligated) by customers but not yet completed by fiscal year end. As a result, congressional and DOD decision makers did not have the reliable information they needed to make decisions regarding the level of funding to be provided to working capital fund customers. (GAO-03-668, July 1, 2003)

Our review of fiscal year 2002 data revealed that about $1 of every $4 in contract payment transactions in DOD's Mechanization of Contract Administration Services (MOCAS) system was for adjustments to previously recorded payments—$49 billion of adjustments out of $198 billion in disbursement, collection, and adjustment transactions. According to DOD, the cost of researching and making adjustments to accounting records was about $34 million in fiscal year 2002, primarily to pay hundreds of DOD and contractor staff. (GAO-03-727, Aug. 8, 2003)

DOD and congressional decision makers lack reliable data upon which to base sourcing decisions due to weaknesses in DOD's data-gathering, reporting, and financial systems. As in the past, we have identified significant errors and omissions in the data submitted to Congress regarding the amount of each military service's depot maintenance work outsourced or performed in-house. As a result, both DOD and Congress lack assurances that the dollar amounts of public-private sector workloads reported by military services are reliable. (GAO-03-1023, Sept. 15, 2003)

DOD's information technology (IT) budget submission to Congress for fiscal year 2004 contained material inconsistencies, inaccuracies, or omissions that limited its reliability.
For example, we identified discrepancies totaling about $1.6 billion between two primary parts of the submission—the IT budget summary report and the detailed Capital Investments Reports on each IT initiative. These problems were largely attributable to insufficient management attention and limitations in departmental policies and procedures, such as guidance in DOD's Financial Management Regulations, and to shortcomings in systems that support budget-related activities. (GAO-04-115, Dec. 19, 2003)

Since the mid-1980s, we have reported that DOD uses overly optimistic planning assumptions to estimate its annual budget request. These same assumptions are reflected in its Future Years Defense Program, which reports projected spending for the current budget year and at least 4 succeeding years. In addition, in February 2004 the Congressional Budget Office projected that DOD's demand for resources would grow to about $473 billion a year by fiscal year 2009. DOD's own estimate for that same year was only $439 billion. As a result of DOD's continuing use of optimistic assumptions, DOD has too many programs for the available dollars, which often leads to program instability, costly program stretch-outs, and program termination. Over the past few years, the mismatch between programs and budgets has continued, particularly in the area of weapons systems acquisition. For example, in January 2003, we reported that the estimated costs of developing eight major weapons systems had increased from about $47 billion in fiscal year 1998 to about $72 billion by fiscal year 2003. (GAO-03-98, January 2003)

DOD did not know the size of its security clearance backlog at the end of September 2003 and had not estimated a backlog since January 2000. Using September 2003 data, we estimated that DOD had a backlog of roughly 360,000 investigative and adjudicative cases, but the actual backlog size is uncertain. DOD's failure to eliminate and accurately assess the size of its backlog may have adverse effects. For example, delays in updating overdue clearances for personnel doing classified work may increase national security risks, and slowness in issuing new clearances can increase the costs of doing classified government work. (GAO-04-344, Feb. 9, 2004)

These examples clearly demonstrate not only the severity of DOD's current problems, but also the importance of reforming financial management and related business operations to improve mission support and the economy and efficiency of DOD's operations, and to provide for transparency and accountability to Congress and American taxpayers. The underlying causes of DOD's financial management and related business process and system weaknesses are generally the same ones I outlined in my prior testimony before this Subcommittee 2 years ago. For each of the problems cited in the previous section, we found that one or more of these causes were contributing factors.
Over the years, the department has undertaken many initiatives intended to transform its business operations departmentwide and improve the reliability of information for decision making and reporting but has not had much success because it has not addressed the following four underlying causes: a lack of sustained top-level leadership and management accountability; deeply embedded cultural resistance to change, including military service parochialism and stovepiped operations; a lack of results-oriented goals and performance measures; and inadequate incentives and accountability mechanisms relating to business transformation efforts. If not properly addressed, these root causes will likely result in the failure of current DOD initiatives.

DOD has not routinely assigned accountability for performance to specific organizations or individuals who have sufficient authority to accomplish desired goals. For example, under the Chief Financial Officers Act of 1990, it is the responsibility of the agency Chief Financial Officer (CFO) to establish the mission and vision for the agency's future financial management and to direct, manage, and provide oversight of financial management operations. However, at DOD, the Comptroller—who is by statute the department's CFO—has direct responsibility for only an estimated 20 percent of the data relied on to carry out the department's financial management operations. The other 80 percent comes from DOD's other business operations and is under the control and authority of other DOD officials. In addition, DOD's past experience has suggested that top management has not had a proactive, consistent, and continuing role in integrating daily operations for achieving business transformation-related performance goals.

It is imperative that major improvement initiatives have the direct, active support and involvement of the Secretary and Deputy Secretary of Defense to ensure that daily activities throughout the department remain focused on achieving shared, agencywide outcomes and success. While the current DOD leadership, such as the Secretary, Deputy Secretary, and Comptroller, have certainly demonstrated their commitment to reforming the department, the magnitude and nature of day-to-day demands placed on these leaders following the events of September 11, 2001, clearly affect the level of oversight and involvement in business transformation efforts that these leaders can sustain. Given the importance of DOD's business transformation effort, it is imperative that it receive the sustained leadership needed to improve the economy, efficiency, and effectiveness of DOD's business operations. Based on our surveys of best practices of world-class organizations, strong executive CFO and Chief Information Officer leadership is essential to (1) making financial management an entitywide priority, (2) providing meaningful information to decision makers, (3) building a team of people that delivers results, and (4) effectively leveraging technology to achieve stated goals and objectives.

Cultural resistance to change, military service parochialism, and stovepiped operations have all contributed significantly to the failure of previous attempts to implement broad-based management reforms at DOD. The department has acknowledged that it confronts decades-old problems deeply grounded in the bureaucratic history and operating practices of a complex, multifaceted organization. Recent audits reveal that DOD has made only small inroads in addressing these challenges.
For example, the Bob Stump National Defense Authorization Act for Fiscal Year 2003 requires the DOD Comptroller to determine that each financial system improvement meets the specific conditions called for in the act before DOD obligates funds in amounts exceeding $1 million. However, we found that most system improvement efforts were not reviewed by the DOD Comptroller, as required, and that DOD continued to lack a mechanism for proactively identifying system improvement initiatives. We asked for, but DOD did not provide, comprehensive data for obligations in excess of $1 million for business system modernization. Based on the limited information provided, we found that as of December 2003, business system modernization efforts with reported obligations totaling over $479 million were not referred to the DOD Comptroller for review for fiscal years 2003 and 2004.

In addition, in September 2003, we reported that DOD continues to use a stovepiped approach to develop and fund its business system investments. Specifically, we found that DOD components receive and control funding for business systems investments without being subject to the scrutiny of the DOD Comptroller. DOD's ability to address its current "business-as-usual" approach to business system investments is further hampered by its lack of (1) a complete inventory of business systems (a condition we first highlighted in 1998), (2) a standard definition of what constitutes a business system, (3) a well-defined enterprise architecture, and (4) an effective approach for controlling financial system improvements before making obligations exceeding $1 million. Until DOD develops and implements an effective strategy for overcoming resistance, parochialism, and stovepiped operations, reform will fail and "business-as-usual" will continue at the department.

At a programmatic level, the lack of clear, linked goals and performance measures handicapped DOD's past reform efforts. As a result, DOD managers lacked straightforward roadmaps showing how their work contributed to attaining the department's strategic goals, and they risked operating autonomously rather than collectively. As of March 2004, DOD has formulated departmentwide performance goals and measures and continues to refine and align them with the outcomes described in its strategic plan—the September 2001 Quadrennial Defense Review (QDR). The QDR outlined a new risk management framework, consisting of four dimensions of risk—force management, operational, future challenges, and institutional—to use in considering trade-offs among defense objectives and resource constraints. According to DOD's Fiscal Year 2003 Annual Report to the President and the Congress, these risk areas are to form the basis for DOD's annual performance goals. They will be used to track performance results and will be linked to resources. As of March 2004, the department is still in the process of implementing this approach on a departmentwide basis.

DOD currently has plans to institutionalize performance management by aligning management activities with the President's Management Agenda. As part of this effort, DOD linked its fiscal year 2004 budget resources with metrics for broad program areas (e.g., air combat, airlift, and basic research) in the Office of Management and Budget's (OMB) Program Assessment Rating Tool.
We have not reviewed DOD’s efforts to link resources to metrics; however, some of our recent work notes the lack of clearly defined performance goals and measures in the management of such areas as defense inventory and military pay. The final underlying cause of the department’s long-standing inability to carry out needed fundamental reform has been the lack of incentives for making more than incremental change to existing “business-as-usual” operations, systems, and organizational structures. Traditionally, DOD has focused on justifying its need for more funding rather than on the outcomes its programs have produced. DOD has historically measured its performance by the amount of money spent, people employed, or number of tasks completed. Incentives for its decision makers to implement changed behavior have been minimal or nonexistent. The lack of incentive to change is evident in the business systems modernization area. Despite DOD’s acknowledgement that many of its systems are error prone, duplicative, and stovepiped, DOD continues to allow its component organizations to make their own investment decisions, following different approaches and criteria. These stovepiped decision-making processes have contributed to the department’s current complex, error-prone environment of approximately 2,300 systems. In March 2003, we reported that ineffective program management and oversight, as well as a lack of accountability, resulted in DOD continuing to invest hundreds of millions of dollars in system modernization efforts without any assurance that the projects will produce operational improvements commensurate with the amount invested. For example, the estimated cost of one of the business system investment projects that we reviewed increased by as much as $274 million, while its schedule slipped by almost 4 years. After spending $126 million, DOD terminated that project in December 2002, citing poor performance and increasing costs. GAO and the DOD Inspector General (DOD IG) have identified numerous business system modernization efforts that cost more than planned, take years longer than planned, and fall short of delivering planned or needed capabilities. Despite this track record, DOD continues to increase spending on business systems while at the same time it lacks the effective management and oversight needed to achieve real results. Without appropriate incentives to improve their project management, ongoing oversight, and adequate accountability mechanisms, DOD components will continue to develop duplicative and nonintegrated systems that are inconsistent with the Secretary’s vision for reform. To effect real change, actions are needed to (1) break down parochialism and reward behaviors that meet DOD-wide goals, (2) develop incentives that motivate decision makers to initiate and implement efforts that are consistent with better program outcomes, including saying “no” or pulling the plug on a system or program that is failing, and (3) facilitate a congressional focus on results-oriented management, particularly with respect to resource-allocation decisions. Over the years, we have given DOD credit for beginning numerous initiatives intended to improve its business operations. Unfortunately, most of these initiatives failed to achieve their intended objective in part, we believe, because they failed to incorporate key elements that in our experience shows are critical to successful reform. 
Today, I would like to discuss two very important broad-based initiatives DOD currently has underway that, if properly developed and implemented, will result in significant improvements in DOD's business operations. In addition to these broad-based initiatives, DOD has undertaken several interim initiatives in recent years that have resulted in tangible, although limited, improvements. We believe that these tangible improvements were possible because DOD incorporated many of the key elements critical for reform. Furthermore, I would like to offer two suggestions for legislative consideration that I believe could significantly increase the likelihood of a successful business transformation effort at DOD.

As I have previously testified, and as the success of the more narrowly defined DOD initiatives I will discuss later illustrates, the following key elements collectively will enable the department to effectively address the underlying causes of its inability to resolve its long-standing financial and business management problems. These elements are: addressing the department's financial management and related business operational challenges as part of a comprehensive, integrated, DOD-wide strategic plan for business reform; providing for sustained and committed leadership by top management, including but not limited to the Secretary of Defense; establishing resource control over business systems investments; establishing clear lines of responsibility, authority, and accountability; incorporating results-oriented performance measures and monitoring progress tied to key financial and business transformation objectives; providing appropriate incentives or consequences for action or inaction; establishing an enterprise architecture to guide and direct business systems modernization investments; and ensuring effective oversight and monitoring. For the most part, these elements, which should not be viewed as independent actions but rather as a set of interrelated and interdependent actions, are consistent with those discussed in the department's April 2001 financial management transformation report. The degree to which DOD incorporates them into its current reform efforts—both long and short term—will be a deciding factor in whether these efforts are successful.

Human capital challenges at DOD are crosscutting and impact the effectiveness of all of its business operations. Effective human capital strategies are necessary for any business transformation to succeed at DOD. For several years, we have reported that many of DOD's business process and control weaknesses were attributable in part to human capital issues. Recent audits of DOD's military payroll and the individually billed travel card program further highlight the adverse impact that outdated and inadequate human capital practices, such as insufficient staffing, training, and monitoring of performance, continue to have on DOD business operations. I strongly support the need for modernizing federal human capital policies both within DOD and for the federal government at large. We have found that a critical success factor for overall organizational transformation is the use of a modern, effective, credible, and integrated performance management system to define responsibility and assure accountability for achieving desired goals and objectives. Such a performance management system can help manage and direct the transformation process by linking performance expectations to an employee's role in the transformation process.
GAO has found that there are significant opportunities to use the performance management system to explicitly link senior executive expectations for performance to results-oriented goals. There is a need to hold senior executives accountable for demonstrating competencies in leading and facilitating change and fostering collaboration both within and across organizational boundaries to achieve results. Setting and meeting expectations such as these will be critical to achieving needed transformation changes. Simply put, DOD must convince people throughout the department that they must change business-as-usual practices or they are likely to face serious consequences, personally and organizationally. DOD has already applied this principle at the Defense Finance and Accounting Service (DFAS). For example, DFAS managers—and sometimes staff—are rated and rewarded based on their ability to reach specific annual performance goals. But linking employee pay to the achievement of measurable performance goals must be done within the context of a credible human capital system that includes adequate safeguards.

The National Defense Authorization Act for Fiscal Year 2004 authorized DOD to establish a National Security Personnel System for its civilian employees that is modern, flexible, and consistent with the merit principles outlined by the act. This legislation requires DOD to develop a human capital system that is consistent with many of the practices that we have laid out for an effective human capital system, including a modern and results-oriented performance management system. However, in our opinion, DOD does not yet have the necessary institutional infrastructure in place within its organization to support an effective human capital transformation effort. This institutional infrastructure must include, at a minimum, a human capital planning process that integrates the department's human capital policies, strategies, and programs for both civilian (including contractors) and military personnel, with its program goals, mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and a modern, effective, credible, and hopefully validated performance management system that includes a set of adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure the fair, effective, and credible implementation of the system. The results of our review of DOD's strategic human capital planning efforts, along with the use of human capital flexibilities and related human capital efforts across government, underscore the importance of such an institutional infrastructure in developing and effectively implementing new personnel authorities. In the absence of this critical element, the new human capital authorities will provide little advantage and could actually end up doing damage if not properly implemented.

As DOD develops regulations to implement its new civilian personnel system, the department needs to do the following. Ensure the active involvement of the Office of Personnel Management (OPM) in the development process, given the significant implications that changes in DOD regulations may have on governmentwide human capital policies. Ensure the involvement of civilian employees and unions in the development of a new personnel system. The law calls for DOD to involve employees, especially in the design of its new performance management system.
Involving employees in planning helps to develop agency goals and objectives that incorporate insights about operations from a front-line perspective. It can also serve to increase employees' understanding and acceptance of organizational goals and improve motivation and morale. Use a phased approach to implementing the system in recognition that different parts of the organization will have different levels of readiness and different capabilities to implement new authorities. Moreover, a phased approach allows for learning so that appropriate adjustments and midcourse corrections can be made before the regulations are fully implemented departmentwide. In this regard, DOD has indicated that it plans to implement its new human capital system for 300,000 civilian employees by October 1, 2004. It is highly unlikely that DOD will have employed an appropriate process and implemented an appropriate infrastructure to achieve this objective.

It is worth mentioning here that the Department of Homeland Security (DHS) is also currently developing a new human capital system. DHS is using a collaborative process that facilitates participation from all levels of DHS and directly involves OPM. We found that the DHS process to date has generally reflected the important elements of a successful transformation, including effective communication and employee involvement. In addition, DHS plans to implement the job evaluation, pay, and performance management system in phases to allow time for final design, training, and careful implementation. I believe that DOD could benefit from employing a more inclusive process and phased implementation approach similar to the process used by DHS.

Another broad-based initiative that is vital to the department's efforts to transform DOD business operations is the BMMP, which the department established in July 2001. The purpose of the BMMP is to oversee development and implementation of a departmentwide business enterprise architecture (BEA), transition plan, and related efforts to ensure that DOD business system investments are consistent with the architecture. A well-defined and properly implemented business enterprise architecture can provide assurance that the department invests in integrated enterprisewide business solutions and, conversely, can help move resources away from nonintegrated business system development efforts. As we reported in July 2003, within 1 year DOD developed an initial version of its departmentwide architecture for modernizing its current financial and business operations and systems. Thus far, DOD has expended tremendous effort and resources and has made important progress towards complying with legislative requirements. However, substantial work remains before the architecture will begin to have a tangible impact on improving DOD's overall business operations. I cannot overemphasize the degree of difficulty DOD faces in developing and implementing a well-defined architecture to provide the foundation that will guide its overall business transformation effort. On the positive side, during its initial efforts to develop the architecture, the department established some of the architecture management capabilities advocated by best practices and federal guidance, such as establishing a program office, designating a chief architect, and using an architecture development methodology and automated tool.
Further, DOD’s initial version of its BEA provides a foundation on which to build and ultimately produce a well-defined business enterprise architecture. For example, in September 2003, we reported that the “As Is” descriptions within the BEA include an inventory of about 2,300 systems in operation or under development and their characteristics. The “To Be” descriptions address, to at least some degree, how DOD intends to operate in the future, what information will be needed to support these future operations, and what technology standards should govern the design of future systems. While some progress has been made, DOD has not yet taken important steps that are critical to its ability to successfully use the enterprise architecture to drive reform throughout the department’s overall business operations. For example, DOD has not yet defined and implemented the following. Detailed plans to extend and evolve its initial architecture to include the missing scope and detail required by the Bob Stump National Defense Authorization Act for Fiscal Year 2003 and other relevant architectural requirements. Specifically, (1) the initial version of the BEA excluded some relevant external requirements, such as requirements for recording revenue, and lacked or provided little descriptive content pertaining to its “As Is” and “To Be” environments and (2) DOD had not yet developed the transition plan needed to provide a temporal road map for moving from the “As Is” to the “To Be” environment. An effective approach to select and control business system investments for obligations exceeding $1 million. As I previously stated, and it bears repeating here, DOD components currently receive direct funding for their business systems and continue to make their own parochial decisions regarding those investments without having received the scrutiny of the DOD Comptroller as required by the Bob Stump National Defense Authorization Act for Fiscal Year of 2003. Later, I will offer a suggestion for improving the management and oversight of the billions of dollars DOD invests annually in system modernization efforts. Until DOD completes its efforts to refine and implement its enterprise architecture and transition plan, and develop and implement an effective approach for selecting and controlling business system investments, DOD will continue to lack (1) a comprehensive and integrated strategy to guide its business process and system changes, and (2) results-oriented measures to monitor and measure progress, including whether system development and modernization investment projects adequately incorporate leading practices used by the private sector and federal requirements and achieve performance and efficiency commensurate with the cost. These elements are critical to the success of DOD’s BMMP. Developing and implementing a business enterprise architecture for an organization as large and complex as DOD is a formidable challenge but it is critical to effecting the change required to achieve the Secretary’s vision of relevant, reliable, and timely financial and other management information to support the department’s vast operations. As mandated, we plan to continue to report on DOD’s progress in developing the next version of its architecture, developing its transition plan, validating its “As Is” systems inventory, and controlling its system investments. 
Since DOD’s overall business process transformation is a long-term effort, in the interim it is important for the department to focus on improvements that can be made using, or requiring only minor changes to, existing automated systems and processes. As demonstrated by the examples I will highlight in this testimony, leadership, real incentives, accountability, and oversight and monitoring—key elements to successful reform—have brought about improvements in some DOD operations, such as more timely commercial payments, reduced payment recording errors, and significant reductions in individually billed travel card delinquency rates. To help achieve the department’s goal of improved financial information, the DOD Comptroller has developed a Financial Management Balanced Scorecard that is intended to align the financial community’s strategy, goals, objectives, and related performance measures with the departmentwide risk management framework established as part of DOD’s QDR, and with the President’s Management Agenda. To effectively implement the balanced scorecard, the Comptroller is planning to cascade the performance measures down to the military services and defense agency financial communities, along with certain specific reporting requirements. DOD has also developed a Web site where implementation information and monthly indicator updates will be made available for the financial communities’ review. At the departmentwide level, certain financial metrics will be selected, consolidated, and reported to the top levels of DOD management for evaluation and comparison. These “dashboard” metrics are intended to provide key decision makers, including Congress, with critical performance information at a glance, in a consistent and easily understandable format. DFAS has been reporting the metrics cited below for several years, which, under the leadership of DFAS’ Director and DOD’s Comptroller, have reported improvements, including From April 2001 to January 2004, DOD reduced its commercial pay backlogs (payment delinquencies) by 55 percent. From March 2001 to December 2003, DOD reduced its payment recording errors by 33 percent. The delinquency rate for individually billed travel cards dropped from 18.4 percent in January 2001 to 10.7 percent in January 2004. Using DFAS’ metrics, management can quickly see when and where problems are arising and can focus additional attention on those areas. While these metrics show significant improvements from 2001 to today, statistics for the last few months show that progress has slowed or even taken a few steps backward for payment recording errors and commercial pay backlogs. Our report last year on DOD’s metrics program included a caution that, without modern integrated systems and the streamlined processes they engender, reported progress may not be sustainable if workload is increased. Since we reported problems with DOD’s purchase card program, DOD and the military services have taken actions to address all of our 109 recommendations. In addition, we found that DOD and the military services took action to improve the purchase card program consistent with the requirements of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 and the DOD Appropriation Act for Fiscal Year 2003. Specifically, we found that DOD and the military services had done the following. Substantially reduced the number of purchase cards issued. 
According to GSA records, DOD had reduced the total number of purchase cards from about 239,000 in March 2001 to about 134,609 in January 2004. These reductions have the potential to significantly improve the management of this program.

Issued policy guidance to field activities to (1) perform periodic reviews of all purchase card accounts to reestablish a continuing bona fide need for each card account, (2) cancel accounts that were no longer needed, and (3) devise additional controls over infrequently used accounts to protect the government from potential cardholder or outside fraudulent use.

Issued disciplinary guidelines, separately, for civilian and military employees who engage in improper, fraudulent, abusive, or negligent use of a government charge card.

In addition, to monitor the purchase card program, the DOD IG and the Navy have prototyped and are now expanding a data-mining capability to screen for and identify high-risk transactions (such as potentially fraudulent, improper, and abusive use of purchase cards) for subsequent investigation. On June 27, 2003, the DOD IG issued a report summarizing the results of an in-depth review of purchase card transactions made by 1,357 purchase cardholders. The report identified 182 cardholders who potentially used their purchase cards inappropriately or fraudulently. We believe that consistent oversight played a major role in bringing about these improvements in DOD's purchase and travel card programs. During 2001, 2002, and 2003, seven separate congressional hearings were held on the Army and Navy purchase and individually billed travel card programs. Numerous legislative initiatives aimed at improving DOD's management and oversight of these programs also had a positive impact.

Another important initiative underway at the department pertains to financial reporting. Under the leadership of Comptroller Zakheim, DOD is working to instill discipline into its financial reporting processes to improve the reliability of the department's financial data. Resolution of serious financial management and related business management weaknesses is essential to achieving any opinion on the DOD consolidated financial statements. Pursuant to the requirements in section 1008 of the National Defense Authorization Act for Fiscal Year 2002, DOD has reported for the past 3 years on the reliability of the department's financial statements, concluding that the department is not able to provide adequate evidence supporting material amounts in its financial statements. Specifically, DOD stated that it was unable to comply with applicable financial reporting requirements for (1) property, plant, and equipment (PP&E), (2) inventory and operating materials and supplies, (3) environmental liabilities, (4) intragovernmental eliminations and related accounting entries, (5) disbursement activity, and (6) cost accounting by responsibility segment. Although DOD represented that the military retirement health care liability data had improved for fiscal year 2003, the cost of direct health care provided by DOD-managed military treatment facilities was a significant amount of DOD's total recorded health care liability and was based on estimates for which adequate support was not available. DOD has indicated that by acknowledging its inability to produce reliable financial statements, as required by the act, the department saves approximately $23 million a year through a reduction in the level of resources needed to prepare and audit financial statements.
However, DOD has set the goal of obtaining a favorable opinion on its fiscal year 2007 departmentwide financial statements. To this end, DOD components and agencies have been tasked with addressing material line item deficiencies, in conjunction with the BMMP. This is an ambitious goal, and we have been requested by Congress to review the feasibility and cost effectiveness of DOD’s plans for obtaining such an opinion within the stated time frame. To instill discipline in its financial reporting process, the DOD Comptroller requires DOD’s major components to prepare quarterly financial statements along with extensive footnotes that explain any improper balances or significant variances from previous year quarterly statements. All of the statements and footnotes are analyzed by Comptroller office staff and reviewed by the Comptroller. In addition, the midyear and end-of-year financial statements must be briefed to the DOD Comptroller by the military service Assistant Secretary for Financial Management or the head of the defense agency. We have observed several of these briefings and have noted that the practice of preparing and explaining interim financial statements has led to the discovery and correction of numerous recording and reporting errors. If DOD continues to provide for active leadership, along with appropriate incentives and accountability mechanisms, improvements will continue to occur in its programs and initiatives. I would like to offer two suggestions for legislative consideration that I believe could contribute significantly to the department’s ability not only to address the impediments to DOD success but also to incorporate needed key elements to successful reform. These suggestions are the creation of a chief management official and the centralization of responsibility and authority for business system investment decisions with the domain leaders responsible for the department’s various business process areas, such as logistics and human resource management. Previous failed attempts to improve DOD’s business operations illustrate the need for sustained involvement of DOD leadership in helping to assure that DOD’s financial and overall business process transformation efforts remain a priority. While the Secretary and other key DOD leaders have certainly demonstrated their commitment to the current business transformation efforts, the long-term nature of these efforts requires the development of an executive position capable of providing strong and sustained executive leadership over a number of years and across various administrations. The day-to-day demands placed on the Secretary, the Deputy Secretary, and others make it difficult for these leaders to maintain the oversight, focus, and momentum needed to resolve the weaknesses in DOD’s overall business operations. This is particularly evident given the demands that the Iraq and Afghanistan postwar reconstruction activities and the continuing war on terrorism have placed on current leaders. Likewise, the breadth and complexity of the problems preclude the Under Secretaries, such as the DOD Comptroller, from asserting the necessary authority over selected players and business areas. While sound strategic planning is the foundation upon which to build, sustained leadership is needed to maintain the continuity needed for success. 
One way to ensure sustained leadership over DOD’s business transformation efforts would be to create a full-time Executive Level II position for a chief management official who would serve as the Principal Under Secretary of Defense for Management. This position would provide the sustained attention essential for addressing key stewardship responsibilities such as strategic planning, performance and financial management, and business systems modernization in an integrated manner, while also facilitating the overall business transformation operations within DOD. This position could be filled by an individual, appointed by the President and confirmed by the Senate, for a set term of 7 years with the potential for reappointment. Such an individual should have a proven track record as a business process change agent in large, complex, and diverse organizations—experience necessary to spearhead business process transformation across the department and serve as an integrator for the needed business transformation efforts. In addition, this individual would enter into an annual performance agreement with the Secretary that sets forth measurable individual goals linked to overall organizational goals in connection with the department’s overall business transformation efforts. Measurable progress toward achieving agreed-upon goals would be a basis for determining the level of compensation earned, including any related bonus. In addition, this individual’s achievements and compensation would be reported to Congress each year. We have made numerous recommendations to DOD intended to improve the management oversight and control of its business systems modernization investments. However, as previously mentioned, progress in achieving this control has been slow and, as a result, DOD has little or no assurance that current business systems modernization investment money is being spent in an economically efficient and effective manner. DOD’s current systems investment process has contributed to the evolution of an overly complex and error-prone information technology environment containing duplicative, nonintegrated, and stovepiped systems. Given that DOD plans to spend $19 billion on business systems and related infrastructure for fiscal year 2004—including an estimated $5 billion in modernization money—it is critical that actions be taken to gain more effective control over such business systems investments. One suggestion we have for legislative action to address this issue, which is consistent with our open recommendations to DOD, is to establish specific management oversight, accountability, and control of funding with the “owners” of the various functional areas or domains. This legislation would define the scope of the various business areas (e.g., acquisition, logistics, finance and accounting) and establish functional responsibility for management of the portfolio of business systems in that area with the relevant Under Secretary of Defense for the six departmental domains and the Chief Information Officer for the Enterprise Information Environment Mission (information technology infrastructure). For example, planning, development, acquisition, and oversight of DOD’s portfolio of logistics business systems would be vested in the Under Secretary of Defense for Acquisition, Technology, and Logistics. 
We believe it is critical that funds for DOD business systems be appropriated to the domain owners in order to provide for accountability, transparency, and the ability to prevent the continued parochial approach to systems development that exists today. The domains would establish a hierarchy of investment review boards with DOD-wide representation, including the military services and Defense agencies. These boards would be responsible for reviewing and approving investments to develop, operate, maintain, and modernize business systems for the domain portfolio, including ensuring that investments were consistent with DOD’s BEA. All domain owners would be responsible for coordinating their business system modernization efforts with the chief management official who would chair the Defense Business Systems Modernization Executive Committee. Domain leaders would also be required to report to Congress through the chief management official and the Secretary of Defense, on applicable business systems that are not compliant with review requirements and to include a summary justification for noncompliance. As seen again in Iraq, the excellence of our military forces is unparalleled. However, that excellence is often achieved in the face of enormous challenges in DOD’s financial management and other business areas, which have serious and far-reaching implications related to the department’s operations and critical national defense mission. Our recent work has shown that DOD’s long-standing financial management and business problems have resulted in fundamental operational problems, such as failure to properly pay mobilized Army Guard soldiers and the inability to provide adequate accountability and control over supplies and equipment shipments in support of Operation Iraqi Freedom. Further, the lack of adequate transparency and appropriate accountability across all business areas has resulted in certain fraud, waste, and abuse and hinders DOD’s attempts to develop world-class operations and activities to support its forces. As our nation continues to be challenged with growing budget deficits and increasing pressure to reduce spending levels, every dollar that DOD can save through improved economy and efficiency of its operations is important. DOD’s senior leaders have demonstrated a commitment to transforming the department and improving its business operations and have taken positive steps to begin this effort. We believe that our two suggested legislative initiatives will greatly improve the likelihood of meaningful, broad-based reform at DOD. The continued involvement and monitoring by congressional committees will be critical to ensure that DOD’s initial transformation actions are sustained and extended and that the department achieves its goal of securing the best performance and highest measure of accountability for the American people. I commend the Subcommittee for holding this hearing and I encourage you to use this vehicle, on an annual basis, as a catalyst for long overdue business transformation at DOD. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov, Randolph Hite at (202) 512-3439 or hiter@gao.gov, or Evelyn Logue at 202-512-3881. 
Other key contributors to this testimony include Sandra Bell, Meg Best, Molly Boyle, Mary Ellen Chervenic, Cherry Clipper, Francine Delvecchio, Abe Dymond, Gayle Fischer, Geoff Frank, John Kelly, Elizabeth Mead, John Ryan, Cary Russell, Lisa Shames, Darby Smith, Edward Stephenson, Derrick B. Stewart, Carolyn Voltz, Marilyn Wasleski, and Jenniffer Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In March 2002, GAO testified on the Department of Defense's (DOD) financial management problems and key elements necessary for successful reform. Although the underlying conditions remain fundamentally unchanged, within the past 2 years DOD has begun a number of initiatives intended to address previously reported problems and transform its business operations. The Subcommittee on Readiness and Management Support, Senate Committee on Armed Services, asked GAO to provide a current status report on DOD's progress to date and suggestions for improvement. Specifically, GAO was asked to provide (1) an overview of the impact of financial and related business weaknesses on DOD operations, (2) the underlying causes of DOD business transformation challenges, and (3) the status of DOD reform efforts. In addition, GAO reiterates the key elements to successful reform: (1) an integrated business transformation strategy, (2) sustained leadership and resource control, (3) clear lines of responsibility and accountability, (4) results-oriented performance, (5) appropriate incentives and consequences, (6) an enterprise architecture to guide reform efforts, and (7) effective monitoring and oversight. GAO also offers two suggestions for legislative consideration which are intended to improve the likelihood of meaningful, broad-based financial management and related business reform at DOD. DOD's senior civilian and military leaders are committed to transforming the department and improving its business operations and have taken positive steps to begin this effort. However, overhauling the financial management and related business operations of one of the largest and most complex organizations in the world represents a huge management challenge. Six DOD program areas are on GAO's "high risk" list, and the department shares responsibility for three other governmentwide high-risk areas. DOD's substantial financial and business management weaknesses adversely affect not only its ability to produce auditable financial information, but also to provide timely, reliable information for management and Congress to use in making informed decisions. Further, the lack of adequate transparency and appropriate accountability across all of DOD's major business areas results in billions of dollars in annual wasted resources in a time of increasing fiscal constraint. Four underlying causes impede reform: (1) lack of sustained leadership, (2) cultural resistance to change, (3) lack of meaningful metrics and ongoing monitoring, and (4) inadequate incentives and accountability mechanisms. To address these issues, GAO reiterates the keys to successful business transformation and makes two additional suggestions for legislative action. First, GAO suggests that a senior management position be established to spearhead DOD-wide business transformation efforts. Second, GAO proposes that the leaders of DOD's functional areas, referred to as domains, receive and control the funding for system investments, as opposed to the military services. Domain leaders would be responsible for managing business system and process reform efforts within their business areas and would be accountable to the new senior management official for ensuring their efforts comply with DOD's business enterprise architecture.
Approximately 4 percent of discretionary spending in the United States’ federal budget is appropriated for the conduct of foreign affairs activities. This includes funding for bilateral and multilateral assistance, military assistance, and State Department activities. Spending for State, taken from the “150 Account,” makes up the largest share of foreign affairs spending. Funding for State’s Diplomatic and Consular Programs—State’s chief operating account, which supports the department’s diplomatic activities and programs, including salaries and benefits—comprises the largest portion of its appropriations. Embassy security, construction, and maintenance funding comprises another large portion of State’s appropriation. Funding for the administration of foreign affairs has risen dramatically in recent fiscal years, due, in part, to enhanced funding for security-related improvements worldwide, including personnel, construction, and equipment following the bombings of two U.S. embassies in 1998 and the events of September 11, 2001. For example, State received about $2.8 billion in fiscal year 1998, but by fiscal year 2003, State’s appropriation was approximately $6 billion. For fiscal year 2004, State is seeking approximately $6.4 billion, which includes $4 billion for diplomatic and consular affairs and $1.5 billion for embassy security, construction, and maintenance. In addition, State plans to spend $262 million over fiscal years 2003 and 2004 on information technology modernization initiatives overseas. Humanitarian and economic development assistance is an integral part of U.S. global security strategy, particularly as the United States seeks to diminish the underlying conditions of poverty and corruption that may be linked to instability and terrorism. USAID is charged with overseeing U.S. foreign economic and humanitarian assistance programs. In fiscal year 2003, Congress appropriated about $12 billion—including supplemental funding—to USAID, and the agency managed programs in about 160 countries, including 71 overseas missions with USAID direct-hire presence. Fiscal year 2004 foreign aid spending is expected to increase due, in part, to substantial increases in HIV/AIDS funding and security- related economic aid. I would like to discuss State’s performance in managing its overseas real estate, overseeing major embassy construction projects, managing its overseas presence and staffing, modernizing its information technology, and developing and implementing strategic plans. State manages an overseas real property portfolio valued at approximately $12 billion. The management of real property is an area where State could achieve major cost savings and other operational efficiencies. In the past, we have been critical of State’s management of its overseas property, including its slow disposal of unneeded facilities. Recently, officials at State’s Bureau of Overseas Buildings Operations (OBO), which manages the government’s real property overseas, have taken a more systematic approach to identifying unneeded properties and have significantly increased the sale of these properties. For example, in 2002, OBO completed sales of 26 properties totaling $64 million, with contracts in place for another $40 million in sales. But State needs to dispose of more facilities in the coming years as it embarks on an expensive plan to replace embassies and consulates that do not meet State’s security requirements and/or are in poor condition. 
Unneeded property and deteriorating facilities present a real problem—but also an opportunity to improve U.S. operations abroad and achieve savings. We have reported that the management of overseas real estate has been a continuing challenge for State, although the department has made improvements in recent years. One of the key weaknesses we found was the lack of a systematic process to identify unneeded properties and to dispose of them in a timely manner. In 1996, we identified properties worth hundreds of millions of dollars that were potentially excess to State’s needs, or of questionable value and expensive to maintain, and that the department had not previously identified for potential sale. As a result of State’s inability to resolve internal disputes and sell excess property in an expeditious manner, we recommended that the Secretary of State appoint an independent panel to decide which properties should be sold. The Secretary of State created this panel in 1997. As of April 2002, the Real Property Advisory Board had reviewed 41 disputed properties and recommended that 26 be sold. By that time, State had disposed of seven of these properties for about $21 million. In 2002, we again reviewed State’s processes for identifying and selling unneeded overseas real estate and found that it had taken steps to implement a more systematic approach that included asking posts to annually identify properties for disposal and increasing efforts by OBO and officials from State’s OIG to identify such properties when they visit posts. For example, the director of OBO took steps to resolve disputes with posts that have delayed the sale of valuable property. OBO has also instituted monthly Project Performance Reviews to review all aspects of real estate management, such as the status of acquisitions and disposal of overseas property. However, we found that the department’s ability to monitor property use and identify potentially unneeded properties was hampered by errors and omissions in its property inventory. Inaccurate inventory information can result in unneeded properties not being identified for potential sale. Therefore, we recommended that the department improve the accuracy of its real property inventory. In commenting on our report, OBO said that it had already taken action to improve its data collection. For example, State sent a cable to all overseas posts reminding them of their responsibilities to maintain accurate real estate records. State has significantly improved its performance in selling unneeded property. In total, from fiscal years 1997 through 2002, State sold 129 properties for more than $459 million. Funds generated from property sales are being used to help offset embassy construction costs in Berlin, Germany; Luanda, Angola; and elsewhere. State estimates it will sell additional properties between fiscal years 2003 and 2008 valued at approximately $300 million. More recently, State has taken action to sell two properties (a 0.4-acre parking lot and an office building) in Paris identified in a GAO report as potentially unneeded. After initially resisting the sale of the parking lot, the department reversed its decision and sold both properties in June 2003 for a total of $63.1 million—a substantial benefit to the government. The parking lot alone was sold conditionally for $20.7 million. Although this may be a unique case, it demonstrates how scrutiny of the property inventory could result in potential savings. 
The department should continue to look closely at property holdings to see if other opportunities exist. If State continues to streamline its operations and dispose of additional facilities over the next several years, it can use those funds to help offset the cost of replacing about 160 embassies and consulates for security reasons in the coming years. In the past, State has had difficulties ensuring that major embassy construction projects were completed on time and within budget. For example, in 1991 we reported that State’s previous construction program suffered from delays and cost increases due to, among other things, poor program planning and inadequate contractor performance. In 1998, State embarked on the largest overseas embassy construction program in its history in response to the bombings of U.S. embassies in Africa. From fiscal years 1999 through 2003, State received approximately $2.7 billion for its new construction program and began replacing 25 of 185 posts identified as vulnerable by State. To better manage this program, OBO has undertaken several initiatives aimed at improving State’s stewardship of its funds for embassy buildings, including cutting costs of planned construction projects, using standard designs, and reducing construction duration through a “fast track” process. Moreover, State hopes that additional management tools aimed at ensuring that new facilities are built in the most cost-effective manner, including improvements in how agencies determine requirements for new embassies, will help move the program forward. State is also pursuing a cost-sharing plan that would charge other federal agencies for the cost of their overall overseas presence and provide additional funds to help accelerate the embassy construction program. While State has begun replacing many facilities, OBO officials estimated that beginning in fiscal year 2004, it will cost an additional $17 billion to replace facilities at remaining posts. As of February 2003, State had begun replacing 25 of 185 posts identified by State as vulnerable after the 1998 embassy bombings. To avoid the problems that weakened the previous embassy construction program, we recommended that State develop a long-term capital construction plan that identifies (1) proposed construction projects’ cost estimates and schedules and (2) estimated annual funding requirements for the overall program. Although State initially resisted implementing our recommendation, OBO’s new leadership reconsidered this recommendation and has since produced two annual planning documents titled the “Long-Range Overseas Building Plan.” According to OBO, the long-range plan is the roadmap by which State, other departments and agencies, the Office of Management and Budget (OMB), the Congress, and others can focus on defining and resolving the needs of overseas facilities. In addition to the long-range plan, OBO has undertaken several initiatives aimed at improving State’s stewardship of its embassy construction funds. These measures have the potential to result in significant cost savings and other efficiencies. For example, OBO has developed Standard Embassy Designs (SED) for use in most embassy construction projects. 
SEDs provide OBO with the ability to contract for shortened design and construction periods and to control costs through standardization. OBO has also shifted from “design-bid-build” contracting toward “design-build” contracts, which have the potential to reduce project costs and construction time frames; developed and implemented procedures to enforce cost planning during the design phase and to ensure that the final designs are within budget; and increased the number of contractors eligible to bid for construction projects, thereby increasing competition for contracts, which could potentially result in lower bids. OBO has set a goal of a 2-year design and construction period for its mid-sized, standard embassy design buildings, which, if met, could reduce the amount of time spent in design and construction by almost one year. We reported in January 2003 that these cost-cutting efforts allowed OBO to achieve $150 million in potential cost savings during fiscal year 2002. These savings, according to OBO, resulted from the application of the SEDs and increased competition for the design and construction of these projects. Despite these gains, State will face continuing hurdles throughout the life of the embassy construction program. These hurdles include meeting construction schedules within the estimated costs and ensuring that State has the capacity to manage a large number of projects simultaneously. Because of the high costs associated with this program and the importance of providing secure facilities overseas, we believe this program merits continuous oversight by State, GAO, and the Congress. In addition to ensuring that individual construction projects meet cost and performance schedules, State must also ensure that new embassies are appropriately sized. Given that the size and cost of new facilities are directly related to agencies’ anticipated staffing needs, it is imperative that future requirements be predicted as accurately as possible. Embassy buildings that are designed too small may require additional construction and funding in the future; buildings that are too large may have unused space—a waste of government funds. State’s construction program in the late 1980s encountered lengthy delays and cost overruns in part because it lacked coordinated planning of post requirements prior to approval and budgeting for construction projects. As real needs were determined, changes in scope and increases in costs followed. OBO now requires that all staffing projections for new embassy compounds be finalized prior to submitting funding requests, which are sent to Congress as part of State’s annual budget request each February. In April 2003, we reported that U.S. agencies operating overseas, including State, were developing staffing projections without a systematic approach. We found that State’s headquarters gave embassies little guidance on factors to consider when developing projections, and thus U.S. agencies did not take a consistent or systematic approach to determining long-term staffing needs. Based on our recommendations, State in May 2003 issued a “Guide to Developing Staffing Projections for New Embassy and Consulate Compound Construction,” which requires a more serious, disciplined approach to developing staffing projections. When fully implemented, this approach should ensure that overseas staffing projections are more accurate and minimize the financial risks associated with building facilities that are designed for the wrong number of people. 
Historically, State has paid all costs associated with the construction of overseas facilities. Following the embassy bombings, the Overseas Presence Advisory Panel (OPAP) noted a lack of cost sharing among agencies that use overseas facilities. As a result, OPAP recommended that agencies be required to pay rent in government-owned buildings in foreign countries to cover operating and maintenance costs. In 2001, an interagency group put forth a proposal that would require agencies to pay rent based on the space they occupy in overseas facilities, but the plan was not enacted. In 2002, OMB began an effort to develop a mechanism that would require users of overseas facilities to share the construction costs associated with those facilities. The administration believes that if agencies were required to pay a greater portion of the total costs associated with operating overseas facilities, they would think more carefully before posting personnel overseas. As part of this effort, State has presented a capital security cost-sharing plan that would require agencies to help fund its capital construction program. State’s proposal calls for each agency to fund a proportion of the total construction program cost based on its respective proportion of total overseas staffing. OBO has reported that its proposed cost-sharing program could result in additional funds, thereby reducing the duration of the overall program. State maintains a network of approximately 260 diplomatic posts in about 170 countries worldwide and employs a direct-hire workforce of about 30,000 employees, about 60 percent of those overseas. The costs of maintaining staff overseas vary by agency but in general are extremely high. In 2002, the average annual cost of placing one full-time direct-hire American family of four in a U.S. embassy was approximately $339,000. These costs make it critical that the U.S. overseas presence is sized appropriately to conduct its work. We have reported that State and most other federal agencies overseas have historically lacked a systematic process for determining the right number of personnel needed overseas— otherwise known as rightsizing. Moreover, in June 2002, we reported that State faces serious staffing shortfalls at hardship posts—in both the number of staff assigned to these posts and their experience, skills, and/or language proficiency. Thus, State has been unable to ensure that it has “the right people in the right place at the right time with the right skills to carry out America’s foreign policy”—its definition of diplomatic readiness. However, since 2001, State has directed significant attention to improving weaknesses in the management of its workforce planning and staffing issues that we and others have noted. Because personnel salaries and benefits consume a huge portion of State’s operating budget, it is important that the department exercise good stewardship of its human capital resources. Around the time GAO designated strategic human capital management as a governmentwide high-risk area in 2001, State, as part of its Diplomatic Readiness Initiative (DRI), began directing significant attention to addressing its human capital needs, adding 1,158 employees over a 3-year period (fiscal years 2002 through 2004). In fiscal year 2002, Congress allocated nearly $107 million for the DRI. State requested nearly $100 million annually in fiscal years 2003 and 2004 to hire approximately 400 new staff each year. The DRI has enabled the department to boost recruitment. 
However, State has historically lacked a systematic approach to determine the appropriate size and location of its overseas staff. To move the rightsizing process forward, the August 2001 President’s Management Agenda identified rightsizing as one of the administration’s priorities. Given the high costs of maintaining the U.S. overseas presence, the administration has instructed U.S. agencies to reconfigure the number of overseas staff to the minimum necessary to meet U.S. foreign policy goals. This OMB-led initiative aims to develop cost-saving tools or models, such as increasing the use of regional centers, revising the Mission Performance Planning (MPP) process, increasing overseas administrative efficiency, and relocating functions to the United States. According to the OPAP, although the magnitude of savings from rightsizing the overseas presence cannot be known in advance, “significant savings” are achievable. For example, it said that reducing all agencies’ staffing by 10 percent could yield governmentwide savings of almost $380 million a year.
GAO’s Rightsizing Framework
In May 2002, we testified on our development of a rightsizing framework. The framework is a series of questions linking staffing levels to three critical elements of overseas diplomatic operations: security of facilities, mission priorities and requirements, and cost of operations. It also addresses consideration of rightsizing options, such as relocating functions back to the United States or to regional centers, competitively sourcing functions, and streamlining operations. Rightsizing analyses could lead decision makers to increase, decrease, or change the mix of staff at a given post. For example, based on our work at the U.S. embassy in Paris, we identified positions that could potentially be relocated to regional centers or back to the United States. On the other hand, rightsizing analyses may indicate the need for increased staffing, particularly at hardship posts. In a follow-up report to our testimony, we recommended that the director of OMB ensure that our framework is used as a basis for assessing staffing levels in the administration’s rightsizing initiative. In commenting on our rightsizing reports, State endorsed our framework and said it plans to incorporate elements of our rightsizing questions into its future planning processes, including its MPPs. State also has begun to take further actions in managing its overseas presence—along the lines that we recommended in our June 2002 report on hardship posts—including revising its assignment system to improve staffing of hardship posts and addressing language shortfalls by providing more opportunities for language training. In addition, State has already taken some rightsizing actions to improve the cost effectiveness of its overseas operating practices. For example, State plans to spend at least $80 million to purchase and renovate a 23-acre, multi-building facility in Frankfurt, Germany—slated to open in mid-2005—for use as a regional hub to conduct and support diplomatic operations; has relocated more than 100 positions from the Paris embassy to the regional Financial Services Center in Charleston, South Carolina; and is working with OMB on a cost-sharing mechanism, as previously mentioned, that will give all U.S. agencies an incentive to weigh the high costs to taxpayers associated with assigning staff overseas. 
In addition to these rightsizing actions, there are other areas where the adoption of industry best practices could lead to cost reductions and streamlined services. For example, in 1997, we reported that State could significantly streamline its employee transfer and housing relocation processes. We also reported in 1998 that State’s overseas posts could potentially save millions of dollars by implementing best practices such as competitive sourcing. In light of competing priorities as new needs emerge, particularly in Iraq and Afghanistan, State must be prepared to make difficult strategic decisions on which posts and positions it will fill and which positions it could remove, relocate, or regionalize. State will need to marshal and manage its human capital to facilitate the most efficient, effective allocation of these significant resources. Up-to-date information technology, along with adequate and modern office facilities, is an important part of diplomatic readiness. We have reported that State has long been plagued by poor information technology at its overseas posts, as well as weaknesses in its ability to manage information technology modernization programs. State’s information technology capabilities provide the foundation of support for U.S. government operations around the world, yet many overseas posts have been equipped with obsolete information technology systems that prevented effective interagency information sharing. The Secretary of State has made a major commitment to modernizing the department’s information technology. In March 2003, we testified that the department invested $236 million in fiscal year 2002 on key modernization initiatives for overseas posts and plans to spend $262 million over fiscal years 2003 and 2004. State reports that its information technology is now in the best shape it has ever been, including improved Internet access and upgraded computer equipment. The department is now working to replace its antiquated cable system with a new integrated messaging and retrieval system, which it acknowledges is an ambitious effort. State’s OIG and GAO have raised a number of concerns regarding the department’s management of information technology programs. For example, in 2001, we reported that State was not following proven system acquisition and investment practices in attempting to deploy a common overseas knowledge management system. This system was intended to provide functionality ranging from basic Internet access and e-mail to mission-critical policy formulation and crisis management support. We recommended that State limit its investment in this system until it had secured stakeholder involvement and buy-in. State has since discontinued the project due to a lack of interagency buy-in and commitment, thereby avoiding additional costs of more than $200 million. Recognizing that interagency information sharing and collaboration can pay off in terms of greater efficiency and effectiveness of overseas operations, State’s OIG reported that the department recently decided to merge some of the objectives associated with the interagency knowledge management system into its new messaging system. We believe that the department should try to eliminate the barriers that prevented implementation of this system. 
As State continues to modernize information technology at overseas posts, it is important that the department employ rigorous and disciplined management processes on each of its projects to minimize the risks that the department will spend large sums of money on systems that do not produce commensurate value. Linking performance and financial information is a key feature of sound management—reinforcing the connection between resources consumed and results achieved—and an important element in giving the public a useful and informative perspective on federal spending. A well-defined mission and clear, well understood strategic goals are essential in helping agencies make intelligent trade-offs among short- and long-term priorities and ensure that program and resource commitments are sustainable. In recent years, State has made improvements to its strategic planning process both at headquarters and overseas that are intended to link staffing and budgetary requirements with policy priorities. For instance, State has developed a new strategic plan for fiscal years 2004 through 2009, which, unlike previous strategic plans, was developed in conjunction with USAID and aligns diplomatic and development efforts. At the field level, State revised the MPP process so that posts are now required to identify key goals for a given fiscal year, and link staffing and budgetary requirements to fulfilling these priorities. State’s compliance with the Government Performance and Results Act of 1993 (GPRA), which requires federal agencies to prepare annual performance plans covering the program activities set out in their budgets, has been mixed. While State’s performance plans fell short of GPRA requirements from 1998 through 2000, the department has recently made strides in its planning and reporting processes. For example, in its performance plan for 2002, State took a major step toward implementing GPRA requirements, and it has continued to make improvements in its subsequent plans. As we have previously reported, although connections between specific performance and funding levels can be difficult to make, efforts to infuse performance information into budget deliberations have the potential to change the terms of debate from simple outputs to outcomes. Continued improvements to strategic and performance planning will ensure that State is setting clear objectives, tying resources to these objectives, and monitoring its progress in achieving them—all of which are essential to efficient operations. Now I would like to discuss some of the challenges USAID faces in managing its human capital, evaluating its programs and measuring their performance, and managing its information technology and financial systems. I will also outline GAO’s findings from our reviews of USAID’s democracy and rule of law programs in Latin America and the former Soviet Union. Since the early 1990s, we have reported that USAID has made limited progress in addressing its human capital management issues and managing the changes in its overseas workforce. A major concern is that USAID has not established a comprehensive workforce plan that is integrated with the agency’s strategic objectives and ensures that the agency has skills and competencies necessary to meet its emerging foreign assistance challenges. Developing such a plan is critical due to a reduction in the agency’s workforce during the 1990s and continuing attrition—more than half of the agency’s foreign service officers are eligible to retire by 2007. 
According to USAID’s OIG, the steady decline in the number of foreign service and civil service employees with specialized technical expertise has resulted in insufficient staff with needed skills and experience and less experienced personnel managing increasingly complex programs. Meanwhile, USAID’s program budget has increased from $7.3 billion in 2001 to about $12 billion in fiscal year 2003, due primarily to significant increases in HIV/AIDS funding and supplemental funding for emerging programs in Iraq and Afghanistan. The combination of continued attrition of experienced foreign service officers, increased program funding, and emerging foreign policy priorities raises concerns regarding USAID’s ability to maintain effective oversight of its foreign assistance programs. USAID’s lack of progress in institutionalizing a workforce planning system has led to certain vulnerabilities. For example, as we reported in July 2002, USAID lacks a “surge capacity” that enables it to quickly hire the staff needed to respond to emerging demands and post-conflict or post- emergency reconstruction situations. We also reported that insufficient numbers of contract officers affected the agency’s ability to deliver hurricane reconstruction assistance in Latin America in the program’s early phases. USAID is aware of its human capital management and workforce planning shortcomings and is now beginning to address some of them with targeted hiring and other actions. USAID continues to face difficulties in identifying and collecting the data it needs to develop reliable performance measures and accurately report the results of its programs. Our work and that of USAID’s OIG have identified a number of problems with the annual results data that USAID’s operating units have been reporting. USAID has acknowledged these concerns and has undertaken several initiatives to correct them. Although the agency has made a serious effort to develop improved performance measures, it continues to report numerical outputs that do not gauge the impact of its programs. Without accurate and reliable performance data, USAID has little assurance that its programs achieve their objectives and related targets. In July 1999, we commented on USAID’s fiscal year 2000 performance plan and noted that because the agency depends on international organizations and thousands of partner institutions for data, it does not have full control over how data are collected, reported, or verified. In April 2002, we reported that USAID had evaluated few of its experiences in using various funding mechanisms and different types of organizations to achieve its objectives. We concluded that with better data on these aspects of the agency’s operations, USAID managers and congressional overseers would be better equipped to analyze whether the agency’s mix of approaches takes full advantage of nongovernmental organizations to achieve the agency’s purposes. USAID’s information systems do not provide managers with the accurate information they need to make sound and cost-effective decisions. USAID’s OIG has reported that the agency’s processes for procuring information technology have not followed established guidelines, which require executive agencies to implement a process that maximizes the value and assesses the risks of information technology investments. In addition, USAID’s computer systems are vulnerable and need better security controls. USAID management has acknowledged these weaknesses and the agency is making efforts to correct them. 
Effective financial systems and controls are necessary to ensure that USAID management has timely and reliable information to make effective, informed decisions and that assets are safeguarded. USAID has made progress in correcting some of its systems and internal control deficiencies and is in the process of revising its plan to remedy financial management weaknesses as required by the Federal Financial Management Improvement Act of 1996. To achieve its goal, however, USAID needs to continue efforts to resolve its internal control weaknesses and ensure that planned upgrades to its financial systems are in compliance with federal financial system requirements. Our reviews of democracy and rule of law programs in Latin America and the former Soviet Union demonstrate that these programs have had limited results and suggest areas for improving the efficiency and impact of these efforts. In Latin America, we found that U.S. assistance has helped bring about important criminal justice reforms in five countries. This assistance has also helped improve transparency and accountability of some government functions, increase attention to human rights, and support elections that observation groups have considered free and fair. In several countries of the former Soviet Union, U.S. agencies have helped support a variety of legal system reforms and introduced some innovative legal concepts and practices in the areas of legislative and judicial reform, legal education, law enforcement, and civil society. In both regions, however, sustainability of these programs is questionable. Establishing democracy and rule of law in these countries is a complex undertaking that requires long-term host government commitment and consensus to succeed. However, host governments have not always provided the political support and financial and human capital needed to sustain these reforms. In other cases, U.S.-supported programs were limited, and countries did not adopt the reforms and programs on a national scale. In both of our reviews, we found that several management issues shared by USAID and the other agencies have affected implementation of these programs. Poor coordination among the key U.S. agencies has been a long-standing management problem, and cooperation with other foreign donors has been limited. U.S. agencies’ strategic plans do not outline how these agencies will overcome coordination problems and cooperate with other foreign donors on program planning and implementation to maximize scarce resources. Also, U.S. agencies, including USAID, have not consistently evaluated program results and have tended to stress output measures, such as the numbers of people trained, over indicators that measure program outcomes and results, such as reforming law enforcement practices. Further, U.S. agencies have not consistently shared lessons learned from completed projects, thus missing opportunities to enhance the outcomes of their programs. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the committee may have at this time. For future contacts regarding this testimony, please call Jess Ford or John Brummet at (202) 512-4128. Individuals making key contributions to this testimony include Heather Barker, David Bernet, Janey Cohen, Diana Glod, Kathryn Hartsburg, Edward Kennedy, Joy Labez, Jessica Lundberg, and Audrey Solis.
Overseas Presence: Conditions of Overseas Diplomatic Facilities. GAO-03-557T. Washington, D.C.: March 20, 2003. 
Overseas Presence: Rightsizing Framework Can Be Applied at U.S. Diplomatic Posts in Developing Countries. GAO-03-396. Washington, D.C.: April 7, 2003.
Embassy Construction: Process for Determining Staffing Requirements Needs Improvement. GAO-03-411. Washington, D.C.: April 7, 2003.
Overseas Presence: Framework for Assessing Embassy Staff Levels Can Support Rightsizing Initiatives. GAO-02-780. Washington, D.C.: July 26, 2002.
State Department: Sale of Unneeded Property Has Increased, but Further Improvements Are Necessary. GAO-02-590. Washington, D.C.: June 11, 2002.
Embassy Construction: Long-Term Planning Will Enhance Program Decision-making. GAO-01-11. Washington, D.C.: January 22, 2001.
State Department: Decision to Retain Embassy Parking Lot in Paris, France, Should Be Revisited. GAO-01-477. Washington, D.C.: April 13, 2001.
State Department: Staffing Shortfalls and Ineffective Assignment System Compromise Diplomatic Readiness at Hardship Posts. GAO-02-626. Washington, D.C.: June 18, 2002.
Foreign Languages: Human Capital Approach Needed to Correct Staffing and Proficiency Shortfalls. GAO-02-375. Washington, D.C.: January 31, 2002.
Information Technology: State Department-Led Overseas Modernization Program Faces Management Challenges. GAO-02-41. Washington, D.C.: November 16, 2001.
Foreign Affairs: Effort to Upgrade Information Technology Overseas Faces Formidable Challenges. GAO-T-AIMD/NSIAD-00-214. Washington, D.C.: June 22, 2000.
Electronic Signature: Sanction of the Department of State’s System. GAO/AIMD-00-227R. Washington, D.C.: July 10, 2000.
Major Management Challenges and Program Risks: Department of State. GAO-03-107. Washington, D.C.: January 2003.
Department of State: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-02-42. Washington, D.C.: December 7, 2001.
Observations on the Department of State’s Fiscal Year 1999 Performance Report and Fiscal Year 2001 Performance Plan. GAO/NSIAD-00-189R. Washington, D.C.: June 30, 2000.
Major Management Challenges and Program Risks: Department of State. GAO-01-252. Washington, D.C.: January 2001.
U.S. Agency for International Development: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-721. Washington, D.C.: August 17, 2001.
Observations on the Department of State’s Fiscal Year 2000 Performance Plan. GAO/NSIAD-99-183R. Washington, D.C.: July 20, 1999.
Major Management Challenges and Program Risks: Implementation Status of Open Recommendations. GAO/OCG-99-28. Washington, D.C.: July 30, 1999.
The Results Act: Observations on the Department of State’s Fiscal Year 1999 Annual Performance Plan. GAO/NSIAD-98-210R. Washington, D.C.: June 17, 1998.
Major Management Challenges and Program Risks: U.S. Agency for International Development. GAO-03-111. Washington, D.C.: January 2003.
Foreign Assistance: Disaster Recovery Program Addressed Intended Purposes, but USAID Needs Greater Flexibility to Improve Its Response Capability. GAO-02-787. Washington, D.C.: July 24, 2002.
Foreign Assistance: USAID Relies Heavily on Nongovernmental Organizations, but Better Data Needed to Evaluate Approaches. GAO-02-471. Washington, D.C.: April 25, 2002.
Major Management Challenges and Program Risks: U.S. Agency for International Development. GAO-01-256. Washington, D.C.: January 2001.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In recent years, funding for the Department of State has increased dramatically, particularly for security upgrades at overseas facilities and a major hiring program. The U.S. Agency for International Development (USAID) has also received more funds, especially for programs in Afghanistan and Iraq and HIV/AIDS relief. Both State and USAID face significant management challenges in carrying out their respective missions, particularly in areas such as human capital management, performance measurement, and information technology management. Despite increased funding, resources are not unlimited. Thus, State, USAID, and all government agencies have an obligation to ensure that taxpayer resources are managed wisely. Long-lasting improvements in performance will require continual vigilance and the identification of widespread opportunities to improve the economy, efficiency, and effectiveness of State's and USAID's existing goals and programs. GAO was asked to summarize its findings from reports on State's and USAID's management of resources, actions taken in response to its reports, and recommendations to promote cost savings and more efficient and effective operations at the department and agency. Overall, State has increased its attention to managing resources, and its efforts are starting to show results, including potential cost savings and improved operational effectiveness and efficiency. For example, in 1996, GAO criticized State's performance in disposing of its overseas property. From fiscal years 1997 through 2002, State sold 129 properties for more than $459 million, with plans to sell additional properties from fiscal years 2003 through 2008 for approximately $300 million. Additional sales would help offset costs of replacing about 160 insecure and deteriorating embassies. State is now taking a more businesslike approach with its embassy construction program, which is estimated to cost an additional $17 billion beginning in fiscal year 2004. Cost-cutting efforts allowed State to achieve $150 million in potential cost savings during fiscal year 2002. State should continue its reforms as it determines requirements for, designs, and builds new embassies. The costs of maintaining staff overseas are generally very high. In response to management weaknesses GAO identified, State has begun addressing workforce planning issues to ensure that the government has the right people in the right places at the right times. State should continue this work and adopt industry best practices that could reduce costs and streamline services overseas. GAO and others have highlighted deficiencies in State's information technology. State invested $236 million in fiscal year 2002 on modernization initiatives overseas and plans to spend $262 million over fiscal years 2003 and 2004. Ongoing oversight of this investment will be necessary to minimize the risks of spending large sums of money on systems that do not produce commensurate value. State has improved its strategic planning to better link staffing and budgetary requirements with policy priorities. Setting clear objectives and tying resources to them will make operations more efficient. GAO and others have also identified some management weaknesses at USAID, mainly in human capital management and workforce planning, program evaluation and performance measurement, information technology, and financial management. While USAID is taking corrective actions, better management of critical systems is essential to safeguard the agency's funds. 
Given the added resources State and USAID must manage, current budget deficits, and new requirements since Sept. 11, 2001, oversight is needed to ensure continued progress toward effective management practices. This focus could result in cost savings or other efficiencies.
Since the inception of NFIP in 1968, FEMA has sought to have local communities adopt floodplain management ordinances and offered flood insurance to their residents in an effort to reduce the need for government assistance after a flood. Premium subsidies were seen as a way to achieve the program’s objectives by ensuring that owners of existing properties in flood zones could afford flood insurance. NFIP has three components: (1) the provision of flood insurance; (2) the requirement that participating communities adopt and enforce floodplain management regulations; and (3) the identification and mapping of floodplains. Community participation in NFIP is voluntary. However, communities must join NFIP and adopt FEMA-approved building standards and floodplain management strategies in order for their residents to purchase flood insurance through the program. Additionally, communities with Special Flood Hazard Areas (SFHA)—areas at high risk for flooding—must participate in NFIP to be eligible for any form of disaster assistance loans or grants for acquisition or construction purposes in connection with a flood. Participating communities can receive discounts on flood insurance if they establish floodplain management programs that go beyond the minimum requirements of NFIP. FEMA can suspend communities that do not comply with the program, and communities can withdraw from the program. As of May 2013, about 22,000 communities voluntarily participate in NFIP. Potential policyholders can purchase flood insurance that covers both buildings and contents for residential and commercial properties. NFIP’s maximum coverage limit for single-family residential policyholders is $250,000 per unit for buildings and $100,000 per unit for contents. For commercial policyholders, the maximum coverage is $500,000 per unit for buildings and $500,000 for contents. Current law prohibits federally regulated lenders, federal agency lenders, and government-sponsored enterprises for housing from making loans for real estate in SFHAs where the community is participating in NFIP, unless the property is covered by flood insurance. For structures deemed not to be in SFHAs—that is, that have moderate to low risk of flooding—the purchase of flood insurance is voluntary. NFIP studies and maps flood risks, assigning flood zone designations from high to low depending on the risk of flooding. SFHAs are high-risk areas that have a 1 percent or greater annual chance of flooding and are designated as zones A, AE, V, or VE (table 1). Areas designated as V or VE are located along the coast. Areas with a moderate-to-low risk for flooding are designated as zones B, C, or X. Areas where analysis of the flood risk has not been conducted are designated as D zones. NFIP offers two types of flood insurance premiums: subsidized and full-risk. Subsidized rates are not based on actual flood risk. According to FEMA, subsidized rates represent only about 40 percent to 45 percent of rates that reflect full flood risk. (We discuss how FEMA determines rates in more detail later in this report.) The type of policy and the subsequent rate a policyholder pays depend on several property characteristics—for example, whether the structure was built before or after a community’s FIRM had been issued and the location of the structure in the floodplain. Structures built after a community’s FIRM was published must be built to meet FEMA building standards and pay full-risk rates. Some communities may implement activities that exceed the minimum standards. 
Prior to the Biggert-Waters Act, subsidized policies accounted for about 21 percent of all NFIP policies, while those with full-risk premiums accounted for the remaining 79 percent. While the percentage of subsidized policies has decreased since the program was established, the number of these policies has stayed fairly constant (see fig. 1). As communities were mapped and joined NFIP, new subsidized policies were added. As shown in figure 2, the percentage change in subsidized policies generally followed the same trend as the percentage change in total policies.

Even with highly discounted rates, subsidized premiums are, on average, higher than full-risk premiums. The premiums are higher because subsidized pre-FIRM structures generally are more prone to flooding (that is, riskier) than other structures. In general, pre-FIRM properties were not constructed according to the program’s building standards or were built without regard to base flood elevation—the level relative to mean sea level at which there is a 1 percent or greater chance of flooding in a given year. For example, the average annual subsidized premium at October 2011 rates for pre-FIRM subsidized properties located in zone A was about $1,200, while the average annual premium for post-FIRM properties in the same zone paying full-risk rates was about $500. Post-FIRM structures have been built to flood-resistant building codes or mitigation steps have been taken to reduce flood risks; thus, they are generally less flood-prone than pre-FIRM properties.

The authority for subsidized rates was included in the National Flood Insurance Act of 1968 as an incentive for communities to join the program by adopting and enforcing floodplain management ordinances that would reduce future flood losses. Subsidies were intended to be only part of an interim solution pending long-term adjustments in land use. Congress also authorized the use of subsidized premiums because charging rates that fully and accurately reflected flood risk would be a burden to some property owners. Table 2 shows the sources of legislative authority for various subsidized premium rates.

Since NFIP was established, Congress has enacted legislation to strengthen certain aspects of the program. The Flood Disaster Protection Act of 1973 made the purchase of flood insurance mandatory for properties in SFHAs that are secured by mortgages from federally regulated lenders. This requirement expanded the overall number of insured properties, including those that qualified for subsidized premiums. The National Flood Insurance Reform Act of 1994 expanded the purchase requirement for federally backed mortgages on properties located in an SFHA. The Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004 established a pilot program to mitigate properties that continually suffered from severe repeated flood losses and offered grants for properties with repetitive insurance claims. Owners of these “repetitive loss” properties who refuse to accept any offer for mitigation actions face higher premiums.
More recently, in July 2012, Congress passed the Biggert-Waters Act. The act extended the authorization for NFIP for 5 years and made reforms to NFIP that include eliminating existing subsidies for: any residential property that is not a primary residence; any severe repetitive loss property; any property that has incurred flood-related damage in which the cumulative amounts of payments under this title equaled or exceeded the fair market value of such property; any business property; and any property that has experienced or sustained substantial damage exceeding 50 percent of the fair market value or substantial improvement exceeding 30 percent of the fair market value. Rates that fully reflect flood risk for the types of properties listed previously are to be phased in over several years—with increases of 25 percent each year—until the average risk premium rate for such properties is equal to the average of the risk premium rates for properties within any single risk classification.

Furthermore, according to the Biggert-Waters Act, other properties will no longer qualify for subsidies under the following circumstances: any NFIP policy that has lapsed in coverage as a result of the deliberate choice of the policyholder; and any prospective insured who refuses to accept any offer for mitigation assistance (including an offer to relocate) following a major disaster. The act also stated that no new subsidies would be provided to any property not insured by NFIP as of the date the act was enacted or to any property purchased after the date of enactment. (Thus, property sales trigger elimination of subsidies.)

The Biggert-Waters Act also requires FEMA to adjust rates to accurately reflect the current risk of flood to properties when an area’s flood map is changed, subject to any other statutory provision in chapter 50 of Title 42 of the United States Code. FEMA is determining how this provision will affect properties that were “grandfathered” into lower rates. In addition, the act allows insurance premium rate increases of 20 percent annually (previously capped at 10 percent), establishes minimum deductibles, and requires FEMA to include the losses from catastrophic years in determining premiums that are based upon the “average historical loss year.” It also incorporates a definition of “severe repetitive loss property” for single-family properties and requires FEMA to establish a reserve fund, among other things.

The Biggert-Waters Act eliminated subsidies on approximately 438,000 policies, and with the continuing implementation of the act, more of the subsidies on the approximately 715,000 remaining policies are expected to be eliminated over time. In terms of characteristics, the geographic distribution of remaining subsidized policies was similar to the distribution of all NFIP policies. Other characteristics we analyzed—indicators of home value and owner income—were different for the policies that continue to qualify for subsidized premium rates compared with those with full-risk rates. In particular, counties with higher home values and income levels tended to have larger percentages of remaining subsidized policies compared with those with full-risk rates. We estimated that the Biggert-Waters Act eliminated subsidies for approximately 438,000 policies, and that about 715,000 policies continue to qualify for subsidized premium rates (remaining subsidized policies).
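The 25-percent annual increases described above imply a simple compounding path from a subsidized premium toward the full-risk level. The following sketch assumes a starting premium at 40 to 50 percent of full risk (consistent with the estimates cited in this report) and a constant 25-percent annual increase; it is illustrative only and is not FEMA's rate-setting methodology.

```python
# Minimal sketch, not FEMA's rating methodology: how many 25-percent annual
# increases are needed before a subsidized premium reaches the full-risk level.
# The starting ratios (40-50 percent of full risk) come from the report; the
# simple compounding model is an assumption for illustration.
def years_to_full_risk(subsidy_ratio: float, annual_increase: float = 0.25) -> int:
    premium_ratio = subsidy_ratio
    years = 0
    while premium_ratio < 1.0:
        premium_ratio *= 1 + annual_increase
        years += 1
    return years

for ratio in (0.40, 0.45, 0.50):
    print(f"starting at {ratio:.0%} of full risk: about {years_to_full_risk(ratio)} annual increases")
# starting at 40% -> about 5 increases; at 45% or 50% -> about 4 increases
```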
Before the act, subsidized policies represented about 21 percent of all policies, and nearly all subsidized policies were in the high-risk areas. After the initial reduction of subsidies, the approximately 715,000 policies that would continue to receive subsidized rates represent about 13 percent of all NFIP policies and 21 percent of all SFHA policies. The initial elimination of subsidies affected various property types, including nonprimary residences, businesses, and severe repetitive loss properties. About 92 percent of the projected remaining subsidized policies cover single-unit primary residence properties and more than 99 percent cover properties in SFHAs. The continuing implementation of the act is expected to decrease the number of subsidized policies. However, FEMA faces a number of implementation challenges, and elimination of subsidies as required by the act will likely take years.

As mandated by the Biggert-Waters Act, FEMA has begun phasing out subsidized premiums for business properties, residential properties that are not primary residences, and single-family (1-4 units) severe repetitive loss properties. According to our analysis of NFIP data, the 438,000 policies that would no longer qualify for subsidized premium rates included about 345,000 nonprimary residential policies, about 87,000 business policies, and about 9,000 single-family severe repetitive loss policies. Nearly all subsidized policies for primary residential properties continue to have subsidized rates. Figure 3 summarizes our analysis of the immediate decreases in subsidized policies stemming from the act, by property type.

Subsidies on most of the approximately 715,000 remaining subsidized policies should be eliminated over time. Under provisions of the Biggert-Waters Act, most policies no longer qualify for subsidies if NFIP coverage lapsed or the properties were sold or substantially damaged. We estimated that with implementation of the changes in the act addressing sales and coverage lapses, the number of subsidized policies could decline by almost 14 percent per year (see fig. 4). At this rate, the number of subsidized policies would be reduced by 50 percent in approximately 5 years. After about 14 years, fewer than 100,000 subsidized policies would remain. We based our estimate of the annual decline rate on the average experience of the last 10 years of NFIP data using policies with similar characteristics, but the actual outcomes and time required for subsidies to be reduced could vary. For example, the average annual decline rate for the most recent 3 years of NFIP data was about 11 percent. At this rate, the number of subsidized policies would be reduced by 50 percent in approximately 7 years, and after 18 years, fewer than 100,000 subsidized policies would remain. Additionally, changes from the act may affect the behavior of policyholders. For example, policyholders might not allow their coverage to lapse if they knew that they would lose their subsidy, or they might not be able to sell their properties at the same rate if the flood insurance was more expensive.

The Biggert-Waters Act will likely require several years for FEMA to fully implement. FEMA officials acknowledged that they have data limitations and other issues to resolve before eliminating some subsidies. We projected that subsidies on most of the policies required to be eliminated by the act could be identified in FEMA’s data; however, data limitations make implementation of some provisions of the act more difficult.
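Our projection of the decline in remaining subsidized policies can be approximated with a constant annual decline rate applied to the roughly 715,000 remaining policies. The sketch below uses the 14 percent and 11 percent rates discussed above; because those rates are rounded, the computed milestones are close to, but not exactly, the figures cited in the text.

```python
# Minimal sketch, assuming a constant annual decline rate (the report's 14 and
# 11 percent estimates), of how the roughly 715,000 remaining subsidized
# policies might shrink over time. Actual attrition would vary year to year,
# and small differences from the report's figures reflect rounding of the rates.
def project_remaining(start: int, annual_decline: float, years: int) -> list[int]:
    counts = [start]
    for _ in range(years):
        counts.append(round(counts[-1] * (1 - annual_decline)))
    return counts

start = 715_000
for rate in (0.14, 0.11):
    path = project_remaining(start, rate, 20)
    half_year = next(y for y, n in enumerate(path) if n <= start / 2)
    under_100k_year = next(y for y, n in enumerate(path) if n < 100_000)
    print(f"{rate:.0%} decline: half remaining after ~{half_year} years, "
          f"under 100,000 after ~{under_100k_year} years")
```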
For example, the act eliminated subsidies for residential policies that covered nonprimary residences. FEMA has data on whether a policy covers a primary residence, but officials stated that it may be outdated or incorrect. In the past, FEMA did not collect this information at policy renewal, so it may have changed over time. The act also eliminated subsidies for business policies. However, FEMA categorizes policies as residential and nonresidential rather than residential and business. As a result, FEMA does not have the information needed to identify nonresidential properties that are not businesses, such as schools or churches, which continue to qualify for a subsidy. Beginning in October 2013, FEMA will require applicants to provide residential and business status for new policies and renewals. Additionally, the act states that subsidies will be eliminated for policies that have received cumulative payment amounts for flood-related damage that equaled or exceeded the fair market value of the property, and for policies that experience damage exceeding 50 percent of the fair market value of the property after enactment. Currently, FEMA is unable to make this determination because it does not maintain data on the fair market value of properties insured by subsidized policies. FEMA officials said that they are in the process of identifying a data source.

FEMA will have to determine how to apply certain provisions of the Biggert-Waters Act before eliminating some subsidies. For example, the act eliminates subsidies for severe repetitive loss policies and provides a definition of severe repetitive loss for single-family homes. However, it requires FEMA to define severe repetitive loss for multifamily properties. FEMA has not yet developed this definition, and we estimate that 1,000 multifamily severe repetitive loss policies will continue to receive a subsidy until the definition is developed and applied. The act also eliminates subsidies when properties are purchased. However, FEMA has not yet determined how to apply this provision of the act to condominium associations. Finally, FEMA officials stated that they have been applying the provisions of the act that eliminate subsidies only to pre-FIRM policies. As a result, approximately 5,500 post-FIRM V-zone structures built before 1981 that currently receive subsidized rates would continue to qualify for subsidies.

We analyzed a number of characteristics of the remaining subsidized policies. First, they had a geographic distribution similar to that of all NFIP policies. Second, while higher percentages of remaining subsidized policies than policies with full-risk rates were found in counties with higher median home values, remaining subsidized policies generally carried smaller amounts of coverage. Third, counties with the highest median household incomes and counties at the lower end of our income ranking had larger percentages of remaining subsidized policies than of policies with full-risk rates. We limited our analysis of the similarities and differences between remaining subsidized policies and the policies with full-risk rates (nonsubsidized) to single-unit primary residences in SFHAs.

Our analysis of NFIP data on the location of properties that would continue to receive subsidized rates shows that remaining subsidized policies would cover properties in every state and territory in which NFIP operates.
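The screening described above amounts to flagging each subsidized policy against the act's property-type criteria. The following sketch uses hypothetical column names and values, not FEMA's actual data model; as noted, missing fields such as fair market value would prevent some of these determinations in practice.

```python
# Minimal sketch with hypothetical column names; not FEMA's actual data model.
# Flags subsidized policies that lose their subsidy under the Biggert-Waters
# Act's property-type provisions.
import pandas as pd

policies = pd.DataFrame({
    "policy_id": [1, 2, 3, 4],
    "subsidized": [True, True, True, True],
    "primary_residence": [True, False, True, True],
    "business_property": [False, False, True, False],
    "severe_repetitive_loss": [False, False, False, True],
})

loses_subsidy = policies["subsidized"] & (
    ~policies["primary_residence"]
    | policies["business_property"]
    | policies["severe_repetitive_loss"]
)
policies["loses_subsidy"] = loses_subsidy
print(policies[["policy_id", "loses_subsidy"]])
```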
Florida (133,000), Louisiana (65,000), California (64,000), New Jersey (48,000), Texas (44,000), and New York (43,000) had the highest numbers of remaining subsidized policies. These states, with the addition of South Carolina, also had the highest numbers of total NFIP policies. In contrast, Indiana, Michigan, and Puerto Rico had the highest shares of remaining subsidized policies relative to total NFIP policies, with remaining subsidized policies representing more than 40 percent of all NFIP policies in each. Figure 5 shows the estimated number of remaining subsidized policies by state and the remaining subsidized policies as a percentage of total NFIP policies in the state.

States with the highest percentage of remaining subsidized policies did not necessarily have the highest percentage of total NFIP policies. Some states had a higher percentage of all remaining subsidized policies than of total NFIP policies (see fig. 6). For example, California had 9 percent of all remaining subsidized policies and about 5 percent of all NFIP policies, and New York had 6 percent of all remaining subsidized policies and 3 percent of all policies. Other states had a larger percentage of total NFIP policies than of subsidized policies. For example, Florida had 37 percent of total NFIP policies and about 19 percent of all remaining subsidized policies, and Texas had about 12 percent of all policies and 6 percent of remaining subsidized policies.

When analyzed by county, the remaining subsidized policies were located in about 2,930 of the more than 3,100 counties with NFIP policies. The number of remaining subsidized policies in the counties varied greatly. We estimated that 151 counties had only one remaining subsidized policy, and another 1,137 had fewer than 25 remaining subsidized policies. We also estimated that 247 counties had more than 500 of these policies. Ten of these counties had more than 10,000 remaining subsidized policies, 4 of which were in Florida, 2 in Louisiana, and 1 each in California, New Jersey, New York, and Texas. Pinellas County, Florida, had the highest number of estimated remaining subsidized policies at more than 28,000.

Counties with the highest median home values tended to have a higher percentage of remaining subsidized policies than of nonsubsidized policies. For our analysis of the financial characteristics of remaining subsidized and nonsubsidized policies, we selected 351 counties that represented more than 78 percent of remaining subsidized policies. (See appendix II for more information about the 351 counties we selected for our analysis.) Because FEMA lacks data on home values, we used several indicators of home value to compare properties in these counties that would continue to receive subsidized rates with properties charged full-risk rates (see table 3). Most of the policies were in the counties with relatively high home values. For example, the median home value for more than half of the selected counties was in the top quartile of counties nationwide. Further, the median home value for more than one-third of the selected counties was in the top 10 percent of median home values for all counties nationwide. The results of our analysis of home values varied depending on the indicator and the location. Our analysis showed that in counties with the highest median home values, as well as in those with lower median home values, the percentage of remaining subsidized policies was larger than the percentage of nonsubsidized policies in SFHAs.
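The state-level comparisons above are share-of-total calculations on two groups of policies. The sketch below, using hypothetical policy records and column names, shows how each state's share of remaining subsidized policies, its share of all NFIP policies, and the subsidized share within the state could be computed.

```python
# Minimal sketch with hypothetical records and column names: computing each
# state's share of remaining subsidized policies, its share of all NFIP
# policies, and the subsidized share within the state.
import pandas as pd

policies = pd.DataFrame({
    "state": ["FL", "FL", "CA", "CA", "NY", "IN"],
    "remaining_subsidized": [True, False, True, False, True, True],
})

by_state = policies.groupby("state").agg(
    total=("remaining_subsidized", "size"),
    subsidized=("remaining_subsidized", "sum"),
)
by_state["pct_of_all_subsidized"] = 100 * by_state["subsidized"] / by_state["subsidized"].sum()
by_state["pct_of_all_policies"] = 100 * by_state["total"] / by_state["total"].sum()
by_state["subsidized_share_in_state"] = 100 * by_state["subsidized"] / by_state["total"]
print(by_state)
```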
For example, about 43 percent of total NFIP policies in the selected 351 counties were in the highest decile of median home values, and about 43 percent of the remaining subsidized policies, compared with about 35 percent of nonsubsidized policies, were in these counties. Very few policies of any type were in counties in the lower deciles of median home value (deciles 6-10); however, in these counties there were higher percentages and larger numbers of remaining subsidized policies than nonsubsidized policies (see table 4).

Our analysis of coverage amounts found that remaining subsidized policies generally carried smaller NFIP coverage amounts than nonsubsidized policies in SFHAs, a possible indicator of lower home values. As shown in figure 7, a smaller percentage of remaining subsidized policies than of nonsubsidized policies had the maximum coverage of $250,000 (29 percent versus about 50 percent). Also, a larger percentage of remaining subsidized policies than of nonsubsidized policies had less than $100,000 in building coverage (26 percent versus 8 percent). The results of our comparison of coverage amounts could indicate that the subsidized policies were for lower-valued properties, but the perceived flood risk and cost of coverage also could affect the coverage amount. Finally, a larger percentage of V-zone policies than of A-zone policies had the maximum coverage amount, but V-zone policies represented a small fraction of all SFHA policies. Further details of our analysis by flood zone appear in appendix II.

We analyzed NFIP coverage amounts (on single-unit primary residence nonsubsidized policies and remaining subsidized policies in SFHAs) and county median home values together and found that higher coverage amounts were associated with higher county median home values. Counties with higher median home values had larger percentages of both remaining subsidized policies and nonsubsidized policies at the NFIP maximum coverage level of $250,000 than counties with lower median home values. In addition, counties with lower median home values generally had larger percentages of remaining subsidized policies and nonsubsidized policies with lower amounts of coverage (less than $100,000) than counties with higher median home values. However, nonsubsidized policies consistently had higher amounts of coverage. In every decile of county median home value, a larger percentage of nonsubsidized policies than of remaining subsidized policies had the maximum amount of NFIP coverage, while a smaller percentage of nonsubsidized policies than of remaining subsidized policies had lower amounts of coverage (less than $100,000). Additional details of the combined analysis are presented in appendix II.

We performed five case studies to illustrate results in specific counties. The case studies offer a more in-depth, within-county view of how characteristics vary across cities within selected counties. We performed the NFIP coverage and median home value analyses, but also used publicly available real estate data to examine city-level median home values within the county. These cases are illustrative only and are not nationwide indicators; some of the results from these case studies matched our earlier results and some did not. Los Angeles County is one illustration of how NFIP policies compared within a county, but other counties had different results. The results of the other case study counties are presented in appendix II.
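The coverage-amount comparison above bins each policy's building coverage into the ranges used in this report and compares the distributions for the two groups. The sketch below uses hypothetical policies and column names for illustration.

```python
# Minimal sketch with hypothetical column names: binning building coverage
# into the ranges used in the report and comparing the distributions for
# remaining subsidized versus nonsubsidized policies.
import pandas as pd

policies = pd.DataFrame({
    "building_coverage": [250_000, 80_000, 250_000, 150_000, 95_000, 210_000],
    "remaining_subsidized": [False, True, False, True, True, False],
})

bins = [0, 99_999, 149_999, 199_999, 249_999, 250_000]
labels = ["<$100k", "$100k-$149,999", "$150k-$199,999", "$200k-$249,999", "$250k (max)"]
policies["coverage_band"] = pd.cut(
    policies["building_coverage"], bins=bins, labels=labels, include_lowest=True
)

# Percentage of each group's policies falling in each coverage band.
distribution = pd.crosstab(
    policies["remaining_subsidized"], policies["coverage_band"], normalize="index"
) * 100
print(distribution.round(1))
```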
Case Study: Los Angeles County, California

Los Angeles County had a median home value in the top 10 percent of all counties and, consistent with our earlier results, had a higher percentage of remaining subsidized policies than nonsubsidized policies in SFHAs (more than twice as many policies). Consistent with our analysis of NFIP coverage amounts, a lower percentage of remaining subsidized policies in Los Angeles County had maximum building coverage than nonsubsidized policies (59 versus 77 percent), but a higher percentage had building coverage less than $100,000 (6 versus 3 percent). However, Los Angeles County also had a high percentage of both subsidized and nonsubsidized policies with maximum NFIP coverage and a low percentage of both types of policies at lower levels of coverage. Our analysis of city median home values in Los Angeles County found that about 88 percent of remaining subsidized and nonsubsidized policies were in cities in the second and third quartiles of median home value. Additionally, although Los Angeles County is located on the Pacific Ocean, it had 120 V-zone (high-risk velocity coastal) policies compared to about 6,000 A-zone (high-risk) policies. Ninety-seven of the V-zone policies were remaining subsidized policies, and all were located in a single city with a median home value in the top quartile.

Comparing policies in SFHAs in the selected counties, our analysis showed that in counties with the highest and lowest median household incomes, there was a larger percentage of remaining subsidized policies than of nonsubsidized policies. We used county median household income from the 2007 through 2011 ACS 5-year data for all U.S. counties as an indicator of household income for property owners. We analyzed the data to determine the ranking of the 351 selected counties relative to all counties and compared the number and percentage of properties that would continue to receive subsidized rates with properties charged full-risk rates. In general, most of the policies in our analysis were in counties with higher median household incomes (deciles 1-4), with fewer policies in counties with lower median household incomes. However, counties in the highest and lowest deciles of median household income had higher percentages of remaining subsidized policies than of nonsubsidized policies (see table 5). For example, 19 percent of all policies in the 351 selected counties were in the highest decile of median household income, but about 29 percent of the remaining subsidized policies were in these counties versus about 11 percent of nonsubsidized policies. One percent of all policies in the selected counties were in the lowest decile of median household income, but 4 percent of the remaining subsidized policies were in these counties versus 1 percent of nonsubsidized policies.

We also examined home value and household income indicators together. Selected counties with the highest median household incomes and highest median home values had higher percentages of remaining subsidized policies than of nonsubsidized policies in SFHAs. For example, 78 of the 351 selected counties were in the highest decile category for both median home value and median household income. About 26 percent of remaining subsidized policies were in these counties, compared with 7 percent of nonsubsidized policies.
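The combined home value and income comparison above can be expressed as identifying counties in the top decile of both measures and computing each group's share of policies located there. The following sketch uses hypothetical county values and column names.

```python
# Minimal sketch with hypothetical values: identifying counties in the top
# decile for both median home value and median household income, then
# comparing the share of subsidized versus nonsubsidized policies located there.
import pandas as pd

counties = pd.DataFrame({
    "median_home_value": [450_000, 420_000, 180_000, 150_000, 300_000],
    "median_household_income": [95_000, 60_000, 48_000, 45_000, 90_000],
    "subsidized_policies": [500, 150, 80, 60, 300],
    "nonsubsidized_policies": [400, 600, 300, 250, 350],
})

top_value = counties["median_home_value"] >= counties["median_home_value"].quantile(0.9)
top_income = counties["median_household_income"] >= counties["median_household_income"].quantile(0.9)
in_top_both = top_value & top_income

for col in ("subsidized_policies", "nonsubsidized_policies"):
    share = 100 * counties.loc[in_top_both, col].sum() / counties[col].sum()
    print(f"{col}: {share:.1f}% located in counties in the top decile of both measures")
```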
Selected counties with higher median household incomes generally also had higher median home values, but counties with higher median home values did not always have higher median incomes. Higher percentages of remaining subsidized policies than of nonsubsidized policies were found in counties with lower median home values and lower median household incomes. More detail on these results can be found in appendix II.

The cost of subsidized policies to NFIP can be measured in terms of forgone net premiums (the difference between subsidized and full-risk rates, adjusted for premium-related expenses). However, FEMA does not have the historical program data needed to make this calculation. Because of this constraint, estimating the historical cost of subsidies to NFIP is difficult. FEMA also does not have information on the flood risk of properties with previously subsidized rates, which is needed to establish full-risk rates for these properties going forward.

FEMA does not have sufficient data to estimate the aggregate cost of subsidies. Since fiscal year 2002, FEMA’s annual actuarial rate reviews have included an estimated range of the percentage of the full-risk premiums that policyholders with subsidized premiums pay. (We refer to this as the subsidy rate.) FEMA based these estimated ranges, in part, on the analysis in a 1999 report conducted by PricewaterhouseCoopers (PwC), which sampled pre-FIRM structures around the nation and collected information on elevation of the properties to calculate what the full-risk rates on these properties would have been. FEMA has continued to use this report as the basis for estimating the percentage of the full-risk rate that subsidized policyholders pay. Since fiscal year 2002, NFIP has reported that the estimated subsidized premium rate is between 35 and 45 percent of the full-risk premium rate. FEMA officials said that they did not report an estimate before the 1999 PwC report. Therefore, determining forgone premiums without these estimates would be difficult because the percentage of subsidized premium rates compared with full-risk rates may have varied considerably over time.

Although it was not possible to estimate forgone premiums since the program was established, the following provides information about the impact of subsidized premiums on the program. Data are not available from FEMA to estimate the forgone premiums before 2002. Applying FEMA’s estimated range of subsidy rates to actual premiums collected from 2002 through 2011, we estimated the premiums that could have been collected if subsidies had not existed over that period. FEMA officials have clarified their estimate that 2011 subsidized premiums represented 40 percent to 45 percent of full-risk premium rates, explaining that after paying for all administrative and other expenses, the remaining premiums would cover about 40 to 45 percent of the expected average long-term annual losses. Premiums are used to cover not only claims, but also operating expenses and any debt. According to FEMA officials, 17 percent of forgone premiums would be needed to pay operating expenses that would increase if subsidized premiums were increased. Such expenses consist of premium taxes (about 2 to 2.5 percent of premium) and agents’ commissions associated with the private insurance companies that sell and service NFIP policies (about 15 percent of premium). Therefore, about 83 percent would be available to help cover fixed expenses (which do not vary with premiums) and to pay losses.
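The forgone-premium arithmetic described above scales subsidized premiums up to an estimated full-risk level using the subsidy rate and nets out the roughly 17 percent of premium that would go to taxes and commissions. The sketch below applies the 35 and 45 percent subsidy rates from this report to an illustrative, not actual, premium total.

```python
# Minimal sketch of the forgone-premium arithmetic described above, using an
# illustrative subsidized-premium total rather than actual NFIP data. The
# 35-45 percent subsidy rates and the 17 percent variable-expense share come
# from the report; everything else is an assumption for illustration.
def forgone_net_premiums(subsidized_premiums: float, subsidy_rate: float,
                         variable_expense_share: float = 0.17) -> float:
    """Estimate net forgone premiums: the gap between full-risk and subsidized
    premiums, less the taxes and commissions that would rise with premiums."""
    full_risk_premiums = subsidized_premiums / subsidy_rate
    forgone = full_risk_premiums - subsidized_premiums
    return forgone * (1 - variable_expense_share)

illustrative_subsidized_premiums = 1_000_000_000  # hypothetical $1 billion
for rate in (0.35, 0.45):
    net = forgone_net_premiums(illustrative_subsidized_premiums, rate)
    print(f"subsidy rate {rate:.0%}: about ${net / 1e9:.2f} billion in net forgone premiums")
```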
During years when losses are less than average, the program potentially generates a surplus. During higher-loss years, accumulated surplus could be used to help pay the insured flood losses that exceed that year’s net premium revenue and reduce the likelihood of needing to borrow from Treasury. Therefore, additional premiums could have helped offset FEMA’s need to borrow or put the agency in a better position to manage catastrophic losses or repay its debt.

A similar number but higher percentage of policies were subsidized in the earlier years of the program; therefore, most of the program’s premium revenue did not reflect the risk of flooding. In 1978, about 76 percent of policies were subsidized, compared with about 20 percent in 2012. The Flood Disaster Protection Act of 1973 expanded the use of premium subsidies to encourage the purchase of flood insurance and introduced mandatory flood insurance purchase requirements in SFHAs as a condition of receipt of direct federal and federally related financial assistance related to the property. For the next 7 years, the subsidized premiums remained in effect. During this period, nearly every community with a flood hazard joined NFIP, and policies in force reached 2 million by 1979.

The percentage of full-risk premiums that policyholders with subsidized rates paid was also lower than it is today. When the program began, NFIP administrators set the subsidized rates on the basis of what they considered affordable. However, from 1981 through 1986, FEMA initiated a series of rate increases for all subsidized policies. The increases were intended to generate premiums at least sufficient to cover expenses and losses relative to the historical average loss year when combined with the premiums paid by policyholders with full-risk rates. Since 1986, additional rate increases have been made to bring the average program premium to a level intended to be sufficient to pay for the historical average loss year and have additional funds available to service its debt to Treasury.

As mandated in the Biggert-Waters Act, we also calculated the claims and premiums attributable to all policies that received subsidies (historically subsidized policies) since 1978 and to policies with characteristics similar to remaining subsidized policies (remaining subsidized policies). While the difference between claims and premiums is not a meaningful measure of the costs of subsidies—because premiums are used to pay not only claims but other costs of administering the program—these figures provide additional descriptive information. Moreover, because flooding is a highly variable event, with losses varying widely from year to year, even analysis of the decades of historical data available could lead to unreliable conclusions about actual flood risks.

Based on our analysis of NFIP claims data, we calculated the amount of claims attributable to historically subsidized policies from 1978 through 2011 to have been $24.1 billion, of which $15.2 billion is attributable to remaining subsidized policies. NFIP had $28.5 billion in claims for policies charged at the full-risk premium rates in the same time period. Based on data provided by FEMA on all subsidized premiums, we calculated the amount of premiums collected for all historically subsidized policies from 1978 through 2011 to have been $26.2 billion, of which $15.7 billion is attributable to remaining subsidized policies. Comparatively, FEMA collected $33.7 billion in premiums for policies with full-risk premium rates for the same time period.
FEMA generally lacks information to establish full-risk rates that reflect flood risk for active policies that no longer qualify for subsidies as a result of the Biggert-Waters Act and also lacks a plan for proactively obtaining such information. The act requires FEMA to phase in full-risk rates on these policies. Federal internal control standards state that agencies should identify and analyze risks associated with achieving program objectives, and use this information as a basis for developing a plan for mitigating the risks. In addition, these standards state that agencies should identify and obtain relevant and needed data to be able to meet program goals.

To document a property's flood risk on an elevation certificate, surveyors calculate the elevation of the first level of a structure in relation to the expected flood level, or base flood elevation. According to FEMA, obtaining such a certificate typically would cost a policyholder from $500 to $2,000 or more. FEMA uses elevation as one of the factors in its model to set full-risk rates for buildings constructed after the publication of a community’s FIRM. FEMA officials said that although a variety of factors, such as occupancy status and number of floors, are used to determine these rates, the elevation of the building is the most important factor. FEMA also uses elevation certificates as administrative tools. Elevation certificates are required for some properties, but optional for others. For example, communities participating in NFIP must obtain the elevation information for all new and substantially improved structures. In addition, FEMA requires elevation certificates to determine rates for post-FIRM buildings located in high-risk areas, the A and V zones. However, an elevation certificate generally has not been required for pre-FIRM buildings that previously received subsidized rates because information about elevation was not used in setting subsidized rates. According to NFIP data, property elevations relative to the base flood elevation are unknown for 97 percent of both the 1.15 million historically subsidized policies and the more than 700,000 remaining subsidized policies in SFHAs.

As of October 2013, FEMA is requiring applicants for new policies on pre-FIRM properties that previously received subsidized rates and property owners whose coverage has lapsed to provide elevation certificates. FEMA is phasing in rate increases for other policyholders who no longer qualify for subsidies and is relying on policyholders to voluntarily provide elevation certificates. With the 1999 PwC report as a basis for an estimate of the full-risk rate for subsidized policies, FEMA officials said they have been using the assumption that subsidized rates are about half of the full-risk rates and have begun implementing premium increases of at least 100 percent for all active policies that are having their subsidies eliminated. According to FEMA officials, the agency will phase in these increases at 25 percent per year, consistent with the act, for several years until the rates reach a specific level or until policyholders supply an elevation certificate that indicates the property’s risk, allowing FEMA to determine the full-risk rate. If policyholders voluntarily obtain an elevation certificate that shows that their risk is lower, they may be able to qualify for lower rates, or it may not take as many years of rate increases to reach the full-risk rate. However, policyholders at higher risk could be subject to even higher rates.
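Elevation-based rating keys the premium to the difference between a structure's lowest-floor elevation and the base flood elevation recorded on an elevation certificate. The sketch below uses a hypothetical rate table—not FEMA's actual rate tables—to show how that difference could map to a premium.

```python
# Minimal sketch, not FEMA's actual rate tables: mapping the difference between
# a structure's lowest-floor elevation and the base flood elevation (as recorded
# on an elevation certificate) to an illustrative premium rate per $100 of
# coverage. The rate values and breakpoints are hypothetical.
HYPOTHETICAL_RATES = [  # (minimum elevation above BFE in feet, rate per $100 of coverage)
    (4, 0.20),
    (2, 0.30),
    (0, 0.60),
    (-1, 1.50),
    (float("-inf"), 3.00),
]

def illustrative_premium(lowest_floor_elev: float, base_flood_elev: float,
                         building_coverage: float) -> float:
    elevation_diff = lowest_floor_elev - base_flood_elev
    for threshold, rate in HYPOTHETICAL_RATES:
        if elevation_diff >= threshold:
            return building_coverage / 100 * rate
    raise ValueError("unreachable")

print(illustrative_premium(12.0, 10.0, 250_000))  # 2 ft above BFE -> $750
print(illustrative_premium(8.0, 10.0, 250_000))   # 2 ft below BFE -> $7,500
```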
According to FEMA officials, it will take several years for previously subsidized policies to reach a full-risk rate, and the agency will communicate with policyholders to encourage them to purchase elevation certificates to determine their actual flood risk. For example, FEMA has posted information on its website about program changes as a result of the Biggert-Waters Act and the importance of obtaining elevation certificates.

Although subsidized policies have been identified as a risk to the program because of the financial drain they represent, FEMA does not have a plan to expeditiously and proactively obtain the information needed to set full-risk rates for all of them. Instead, FEMA will rely on certain policyholders to voluntarily obtain elevation certificates. Those at lower risk levels have an incentive to do so because they can qualify for lower rates. However, policyholders with higher risk levels have a disincentive to voluntarily obtain an elevation certificate because they could end up paying an even higher premium. Without a plan to expeditiously obtain property-level elevation information, FEMA will continue to lack basic information needed to accurately determine flood risk and will continue to base full-risk rate increases for previously subsidized policies on limited estimates. As a result, FEMA’s phased-in rates for previously subsidized policies still may not reflect a property’s full risk of flooding, with some policyholders paying premiums that are below and others paying premiums that exceed full-risk rates. As we have previously found, not accurately identifying the actual risk of flooding increases the likelihood that premiums may not be adequate and adds to concerns about NFIP’s financial stability.

Through our previous work as well as interviews we conducted and literature we reviewed for this report, we identified three broad options that could help address NFIP’s financial situation: (1) adjust the pace of the elimination of subsidies, (2) target assistance or remaining subsidies by the financial need of property owners, and (3) increase mitigation efforts. In prior work, we discussed similar options for addressing the impact of subsidized policies (GAO-09-20), and the work we conducted for this report confirmed that, with some modifications to reflect the changes from the Biggert-Waters Act, these were still generally the prevailing options. In addition, our previous and current work have shown that each of the options has advantages and disadvantages in terms of the impact on the program’s public policy goals and would involve trade-offs that would have to be weighed. For example, charging premium rates that fully reflect the risk of flooding could help improve the financial condition of NFIP and limit taxpayer costs before and after a disaster. However, eliminating or reducing subsidized policies could have unintended consequences, such as increasing premium rates to the point that flood insurance is no longer affordable for some policyholders, and potential declines in program participation.

Stakeholders we interviewed noted that the threat of increased premium rates would encourage some policyholders affected by Superstorm Sandy to undertake mitigation efforts as they repaired their properties. Although accelerating the elimination of subsidies could strengthen the financial solvency of the program, it also entails trade-offs and unintended consequences.
For example, according to FEMA estimates, the elimination of subsidies for pre-FIRM properties would on average more than double these policyholders’ premium rates, raising concerns about the affordability of the coverage and participation in the program. Higher premium rates might result in reduced participation in NFIP over time as people either decide to drop their policies or are priced out of the market, according to FEMA officials and insurance industry stakeholders we interviewed. The 1999 PwC study estimated that, for communities most likely to experience a decrease in property values if subsidies were immediately eliminated, on average 50 percent of policyholders might cancel their coverage. It is too soon to tell the long-term impacts of the elimination of subsidies that went into effect in 2013.

Even reducing, rather than eliminating, subsidies could increase the financial burden on some existing policyholders—particularly low-income policyholders—and could lead to some of them deciding to leave the program. As a result, if owners of pre-FIRM properties, which have relatively high flood losses, cancelled their insurance policies, the federal government—and ultimately taxpayers—could face increased costs in the form of FEMA disaster assistance grants to these individuals. However, according to a recent study, a large proportion of disaster assistance is provided to states, rather than directly to individuals, and the assistance provided to individuals via grants and low-interest loans is fairly limited in size. An additional trade-off associated with making immediate increases to premium rates is resistance from local communities. Stakeholders we interviewed further noted that increased insurance costs might make some properties more difficult to sell, particularly pre-FIRM properties in older, inland communities at high risk of flooding.

Delaying the elimination of subsidized policies could address stakeholder concerns about the affordability of flood insurance and the time frames in the Biggert-Waters Act for implementing full-risk rates, but also has trade-offs. For example, while stakeholders we interviewed supported provisions of the act to reduce the number of subsidized policies and moving to full-risk rates, they said that the time frames in the act were aggressive and could be burdensome for low-income policyholders. They also stated that more gradual increases for certain policyholders could keep policies more affordable. They noted there have been proposals to delay the elimination of subsidies and the phasing in of full-risk rates. However, delaying the elimination of subsidies would continue to expose the federal government to increased financial risk. And, as previously noted, not charging full-risk rates contributes to FEMA’s ongoing management challenges in maintaining the financial stability of NFIP. NFIP has been on our high-risk list since 2006 because of concerns about its long-term financial solvency and management issues (see GAO, High-Risk Series: An Update, GAO-13-283, Washington, D.C.: February 2013). While Congress and FEMA intended that, insofar as practicable, NFIP be funded with premiums collected from policyholders, the program was, by design, not actuarially sound.

Targeting assistance based on financial need could help ensure that only those in need receive subsidies, with the rest paying full-risk rates. This assistance could take several forms, including direct assistance through NFIP, tax credits, grants, or vouchers. For example, other federal programs have targeted subsidies through means tests or other methods.
Such an approach could help ensure that those needing the subsidy would have access to it and retain their coverage. Alternatively, stakeholders we interviewed for this report noted that FEMA could replace the subsidies with vouchers based on financial need to offset higher premiums. For example, the Department of Housing and Urban Development’s Housing Choice Voucher program is administered by public housing agencies that collect information on applicants’ income and assets to determine eligibility and voucher amounts (24 C.F.R. Part 982). Similar information on flood insurance policyholders could be collected to assess need, determine eligibility, and provide appropriate amounts of financial assistance to families that otherwise could not afford their flood insurance premiums.

According to industry stakeholders we interviewed, targeting assistance based on financial need would help make the planned phased-in premium increases more affordable. In a recent paper on flood insurance affordability, the Association of State Floodplain Managers (ASFPM) suggested that a flood insurance voucher program could be developed for low-income policyholders who may not be able to afford the rate increases or for those who might need time to adjust to premium increases. ASFPM’s paper also noted that, while the premium rate increases required by the Biggert-Waters Act will improve the financial stability of NFIP, those increases could have a significant impact on flood insurance affordability for low-income policyholders. In particular, the ASFPM paper states that assistance will be necessary for some policyholders to help them transition to either full-risk rates or mitigation of their properties; otherwise, some property owners might not be able to afford to remain in their homes. Other insurance industry representatives and stakeholders have also cited affordability concerns and suggested that as full-risk rates were phased in, assistance for low-income individuals could be provided through a voucher system or program based on financial need. A provision of the act requires FEMA to study NFIP participation and affordability issues, including offering vouchers based on income. According to FEMA officials, as of May 31, 2013, FEMA had consulted with the National Academy of Sciences about determining how to undertake this study.

As previously discussed, our comparison of characteristics (such as median income and median home values) associated with remaining subsidized and nonsubsidized policies indicates that applying full-risk rates may be overly burdensome for some property owners and not for others. For example, we found a higher percentage of subsidized policies both in counties with lower incomes and in counties with very high incomes, indicating that in certain areas some subsidized policyholders may find higher flood insurance rates difficult to afford, while those located in higher-income areas may be able to afford premium increases. However, it could be challenging for FEMA to develop and administer such an assistance program in the midst of ongoing management challenges. Specifically, we have previously found that FEMA has faced significant management challenges in areas that affect NFIP, including strategic and human capital planning; collaboration among offices; and record, financial, and acquisition management. In addition, in previous work we found that FEMA has faced challenges modernizing NFIP’s insurance policy and claims management system.
Implementing a financial assistance program would require FEMA to plan and develop new processes. Representatives from a national insurance professional organization we interviewed for this report stated that it would be difficult for FEMA to administer an assistance program and ensure that an evaluation for assistance was done consistently. In addition, they said that to administer an assistance program such as vouchers, tax credits, or grants through the Write-Your-Own companies (insurance companies that sell and service flood insurance for NFIP), a process would be needed to ensure that means-testing is evaluated and administered consistently. They also suggested that it would be easier to administer a program if all policyholders were charged a full-risk rate, with a separate process that would allow them to apply for assistance based on financial need.

A third option to address the financial impact of subsidized premium rates on NFIP would be to substantially expand mitigation efforts to ensure that more homes were better protected from flooding, including making mitigation mandatory. Mitigation efforts such as elevation, relocation, and demolition can be used to help reduce or eliminate the long-term risk of flood damage to structures insured by NFIP. However, mitigation of pre-FIRM properties is voluntary unless a property has been substantially damaged or the owner undertook substantial improvement. While the Biggert-Waters Act eliminated subsidies for severe repetitive loss properties and for prospective policyholders who refuse to accept any offer for mitigation assistance (including an offer to relocate) following a major disaster, properties not built to meet a community’s flood resistant requirements or in the highest-risk zones could face more severe damages in the event of a flood.

Insurance industry stakeholders agreed that mitigation could be used to reduce future financial risk for NFIP. Stakeholders we spoke to for this report also commented that since such mitigation measures often are done at the community level, offering community-based policies could help encourage more mitigation. This is consistent with our prior work, in which local officials generally supported increased mitigation efforts (GAO-09-20). Industry stakeholders also commented that incorporating community-based flood insurance into NFIP could help leverage community resources for mitigation projects that would benefit the entire community, rather than individual structures. For example, floodplain managers noted that with a community-based policy, the local unit of government could assess fees on all properties benefitting from community mitigation measures. In addition, because the premium rate would be on a community versus structure basis, the community, not the property owner, generally would make development or neighborhood-type decisions that either increased or decreased risk in the community.

Disadvantages associated with mitigation as an option to reduce the financial impact of the subsidized policies include the expense to NFIP, taxpayers, and communities. For example, implementing mitigation measures for tens of thousands of properties that continue to receive subsidized rates could take a number of years to complete, which could pose an ongoing risk to NFIP’s financial health. We have previously reported that increasing mitigation would be costly and require increased funding.
Furthermore, we found in our past and current work that buyouts and relocations would be more costly in certain areas of the country, and in some cases the cost of mitigating older structures might be prohibitive. The effectiveness of mitigation efforts could be limited by FEMA’s reliance on local communities with varying resources. For example, not all communities have the staff or resources to fully carry out mitigation, meet cost-sharing requirements, and enforce compliance. As we reported in 2008, even when federal funds are made available to a community and property owners are interested in mitigating their properties, property owners still may have to pay a portion of the mitigation expenses, which could discourage participation in mitigation efforts. In interviews for this report, stakeholders said that mitigation was expensive and that as premiums are increased to full-risk rates, some means of assistance would be helpful for policyholders who may have difficulty paying for mitigation efforts. Mitigation costs would have to be weighed against mitigation benefits (possible savings from a decrease in flood damage). In addition, certain types of mitigation, such as relocation or demolition, might be met with resistance by communities that rely on those properties for tax revenues, such as coastal communities with significant development in areas prone to flooding. Furthermore, mitigation activities are often constrained by conflicting local interests, cost concerns, and a lack of public awareness of the risks of natural hazards and the importance of mitigation. Communities’ economic interests often can conflict with long-term hazard mitigation goals. For example, a community with a goal of economic growth might allow development to occur in hazard-prone areas (along the coast or in floodplains).

Our analysis indicates that the three options discussed above are not mutually exclusive and may be used together to reduce the financial impact of subsidized policies on NFIP. For example, accelerating the elimination of subsidies could be done in conjunction with targeting assistance to only those policyholders who need help to retain their flood insurance—thus advancing the goal of strengthening the financial solvency of NFIP and addressing affordability concerns for low-income policyholders. In addition, FEMA may be able to build on its existing mitigation efforts and target assistance for mitigation efforts to those policyholders who need financial assistance. The way in which an option is implemented, such as more aggressively or gradually, also can produce different effects in terms of policy goals and thus change the advantages and disadvantages (see table 6).

While FEMA has taken initial steps to eliminate subsidies for various types of properties in accordance with the Biggert-Waters Act requirements, eliminating the subsidies on the more than 700,000 policies that continue to receive them will take many years to accomplish. Subsidies on some policies will be eliminated as properties are sold or if coverage lapses, but FEMA has some data limitations and implementation issues to resolve before other subsidies identified in the act can be eliminated. With some efforts under way, FEMA has much work ahead of it in planning and executing implementation of the changes in the act as well as effectively managing NFIP.
Although FEMA has information on premiums and claims paid for subsidized policies over time, it does not have the information needed to determine the appropriate premium amounts policyholders should pay to reflect the full level of flood risk. To phase out and eventually eliminate subsidies and revise rates over time, FEMA will need information on the relative risk of flooding and property elevations (elevation certificates), which generally had not been required for subsidized policies prior to the Biggert-Waters Act. The act requires FEMA to phase in full-risk rates on policies that previously received subsidies. According to federal internal control standards, agencies should identify and analyze risks associated with achieving program objectives, and use this information as a basis for developing a plan for mitigating the risks and obtaining needed information. Going forward, FEMA will require new policyholders and those whose coverage has lapsed to provide elevation information when renewing or obtaining new policies; however, FEMA will rely on other policyholders who previously received subsidized rates to voluntarily provide this information. As FEMA continues to implement the requirements of the act to charge full-risk rates, the agency plans to assume that all subsidized policies pay about half of the full-risk premium and has begun phasing in rate increases based on this factor for all active policies that are having their subsidies removed. Without a plan to require all policyholders to obtain elevation certificates to accurately document their property elevations and relative risk of flooding, FEMA will lack information that is key to determining appropriate full-risk premiums. As a result, the rates that FEMA plans to implement may not adequately reflect a property’s actual flood risk, and some policyholders may be charged too much and some too little for their premiums.

To establish full-risk rates that reflect the risk of flooding for properties with previously subsidized rates, we recommend that the Secretary of the Department of Homeland Security (DHS) direct the FEMA Administrator to develop and implement a plan, including a timeline, to obtain needed elevation information as soon as practicable.

We provided a draft of this report to DHS for its review and comment. DHS provided written comments that are presented in appendix III. The letter noted that the department concurred with our recommendation to develop and implement a plan to obtain elevation information from previously subsidized policyholders. The letter stated that FEMA will evaluate the appropriate approach for obtaining or requiring the submittal of this information. In particular, the letter noted that although obtaining this information cost-effectively presents significant challenges, FEMA will explore technological advancements and engage with industry to determine the availability of technology, building information data, readily available elevation data, and current flood hazard data that could be used to implement the recommendation. FEMA also provided technical comments, which we have incorporated into the report, as appropriate.

We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

The Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act) mandated that GAO conduct a number of studies, including this study on the properties that continue to receive subsidized rates after the implementation of the act and options to further reduce these subsidies. This report discusses (1) the number, location, and financial characteristics of properties that continue to receive subsidized rates compared with full-risk rate properties, (2) information needed to estimate the historic financial impact of subsidies and establish rates that reflect the risk of flooding on properties with previously subsidized rates, and (3) options to reduce the financial impact of remaining subsidized properties. Although the Biggert-Waters Act mandated that GAO report on certain characteristics of the remaining subsidized policies and properties, the National Flood Insurance Program (NFIP) databases do not contain information to address several elements listed in the act. Therefore, to the extent possible, we developed alternative methodologies to address the elements of the act.

To provide information on the number and location of NFIP-insured properties that would continue to receive subsidized premium rates, we analyzed data from NFIP’s policy and repetitive loss databases as of June 30, 2012. We applied the Federal Emergency Management Agency’s (FEMA) algorithm to determine which policies were subsidized, and applied FEMA’s interpretation of the provisions in the Biggert-Waters Act that eliminate subsidies to determine which policies would retain their subsidies. We also analyzed NFIP’s legislative history and relied on FEMA’s implementation of legislative requirements authorizing subsidized rates for certain properties in high-risk locations.

To determine the fair market value of properties that would continue to receive subsidized premium rates, we used other NFIP data and publicly available information as indicators of value because the fair market values required by the act were not available in NFIP’s databases. We used three indicators of home value: (1) NFIP policy-level coverage amounts, (2) 2007 through 2011 5-year American Community Survey (ACS) county-level data on median home values, and (3) the Zillow city-level median home value index, as of January 2013, within case study counties. For consistency, we compared all the indicators at the county level. To place NFIP policies in counties, we used ZIP code information contained in the NFIP policy file as of June 30, 2012, and matched those data with U.S. Postal Service and Department of Housing and Urban Development ZIP-code-to-county data (as of December 2011). For ZIP codes that crossed county borders, we assigned policies proportionally to the counties based on the fields available in the ZIP-code-to-county file. We aggregated the total number of policies and remaining subsidized policies for all counties and selected for our analysis 351 counties that contained the majority of the policies. We selected all counties with 500 or more remaining subsidized policies for single-unit primary residences (247 counties).
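The ZIP-code-to-county matching described above allocates each ZIP code's policies to counties, splitting them proportionally where a ZIP code crosses county lines. The sketch below uses hypothetical ZIP codes, counties, and shares rather than the actual crosswalk data.

```python
# Minimal sketch with hypothetical values: allocating policy counts by ZIP code
# to counties, splitting proportionally where a ZIP code crosses county lines.
# The ZIP codes, counties, and shares are illustrative, not the actual crosswalk.
import pandas as pd

policies_by_zip = pd.DataFrame({
    "zip": ["00001", "00002", "00003"],
    "policies": [1_000, 600, 400],
})

zip_to_county = pd.DataFrame({
    "zip": ["00001", "00002", "00003", "00003"],
    "county": ["County A", "County B", "County C", "County D"],
    "share": [1.0, 1.0, 0.7, 0.3],   # portion of the ZIP's addresses in each county
})

allocated = policies_by_zip.merge(zip_to_county, on="zip")
allocated["allocated_policies"] = allocated["policies"] * allocated["share"]
by_county = allocated.groupby("county")["allocated_policies"].sum()
print(by_county)
```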
In addition to these 247 counties, we also included the five counties in each state and Puerto Rico with the most remaining subsidized policies for single-unit primary residences, regardless of the total number in the county, to better ensure a comprehensive national representation. Accordingly, the 351 counties we selected represent 78 percent of all remaining subsidized policies nationwide, 77 percent of all remaining subsidized policies for single-unit primary residences, and 77 percent of all NFIP policies. As more than 99 percent of remaining subsidized policies were in Special Flood Hazard Areas (SFHA), we limited our comparison with nonsubsidized policies to those for single-unit primary residences in SFHAs. We used NFIP policy data as of June 30, 2012, on coverage amounts as the first indicator of home value. To determine how building coverage amounts compared between remaining subsidized and nonsubsidized policies, we categorized NFIP building coverage amounts into five groups: less than $100,000; $100,000-$149,999; $150,000-$199,999; $200,000-$249,999; and $250,000, which is the maximum coverage for residential units. We compared the percentage of policies of each type within each category of coverage at the county level for the selected counties. We also conducted this analysis using flood zones, comparing the coverage amounts for A-zone and V-zone policies separately. (The A and V flood zones represent areas at high risk for flooding, and V zones also indicate coastal areas.) Coverage amount as an indicator of home value is limited because NFIP has a maximum building coverage amount of $250,000 per residential unit. Additionally, the perceived flood risk and cost of coverage could affect the coverage amount. However, coverage amount can give an indication of a property’s value relative to other properties. As a second indicator of home value, we used 2007 through 2011 ACS 5-year county-level estimates for median home values (known as B25077) for all counties in the United States and also included the District of Columbia and Puerto Rico. We included Puerto Rico because of its relatively large number of NFIP policies. We used 5-year data because other ACS data sets did not contain data for all the 351 selected counties. Using county median home value, we ranked all counties and determined the deciles for the 351 selected counties. We compared the percentage of remaining subsidized with nonsubsidized policies from the selected counties in each decile. Because these data are at the county level, areas within a county with relatively high or low home values are indistinguishable. We also analyzed the ACS and NFIP coverage data together, at the county level. As a third indicator of home value, we used Zillow city-level median home value data as of January 2013, within five selected counties. For the purposes of our county case study analysis, we selected the Zillow Home Value Index because it was publicly available; covered more housing units at the city level than other housing indices; was estimated at a smaller geographic region; and only included nonforeclosure housing units. We judgmentally selected five case study counties and compared data at the city level within the county to provide more detailed illustrations of how home values for properties that continue to receive subsidies compare with those that pay full-risk rates. These cases are not projectable to all counties. We selected our case study counties based on the number of relevant NFIP policies, their location, and the reliability of the data for the county.
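A rough sketch of the coverage-band grouping and the decile ranking by county median home value described above follows; the data frames and field names are hypothetical illustrations, not the NFIP policy file or ACS table B25077 layouts.

import pandas as pd

# Hypothetical policy records: building coverage amount and policy type.
policies = pd.DataFrame({
    "building_coverage": [75_000, 120_000, 180_000, 230_000, 250_000, 250_000],
    "policy_type": ["subsidized", "subsidized", "nonsubsidized",
                    "nonsubsidized", "subsidized", "nonsubsidized"],
})

# The five coverage categories used in the analysis; $250,000 is the NFIP
# maximum building coverage for a residential unit.
bins = [0, 100_000, 150_000, 200_000, 250_000, float("inf")]
labels = ["<$100,000", "$100,000-$149,999", "$150,000-$199,999",
          "$200,000-$249,999", "$250,000 (maximum)"]
policies["coverage_band"] = pd.cut(policies["building_coverage"],
                                   bins=bins, labels=labels, right=False)

# Percentage of each policy type falling in each coverage band.
share = (policies.groupby("policy_type")["coverage_band"]
                 .value_counts(normalize=True) * 100)
print(share)

# Hypothetical county median home values: rank counties into deciles, within
# which the subsidized and nonsubsidized mix can then be compared.
counties = pd.DataFrame({
    "county_fips": [f"{i:05d}" for i in range(1, 21)],
    "median_home_value": [90_000 + 25_000 * i for i in range(20)],
})
counties["decile"] = pd.qcut(counties["median_home_value"], 10, labels=False) + 1
print(counties.head())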
For the case studies, we specifically selected counties with at least 1,000 remaining subsidized policies and nonsubsidized policies for single-unit primary residences. We selected one county from each of the four states with the most remaining subsidized policies. We selected Pinellas County, Florida; Los Angeles County, California; and Ocean County, New Jersey; however, the Zillow data for Louisiana did not meet our data reliability standards, so we eliminated that state. As Pinellas County is on the Gulf of Mexico, Los Angeles County is on the Pacific Ocean, and Ocean County is on the Atlantic Ocean, we chose the other two counties to represent inland flooding—Cook County, Illinois, and Pima County, Arizona. The Zillow information for these counties met our criteria for data reliability. For each county, we determined which NFIP policies may be located in the county based on ZIP code. Because the NFIP city name was not consistently entered, two analysts independently matched the NFIP policy city names to Zillow city names within the county. A third analyst served as the mediator for differences, using alternative location information. Within each county, we ranked the cities by median home value and distributed them into quartiles. We compared the number and percentage of remaining subsidized policies with the nonsubsidized policies in the cities in each quartile. Additionally, for each case study county, we reviewed the results from the NFIP coverage and ACS analyses within the county. Because owner income data were not available in NFIP’s databases, we analyzed 2007 through 2011 ACS 5-year data as an indicator of income levels of owners of remaining subsidized properties. We used 5-year, county-level data on median household incomes (B19013) for all counties in the United States, the District of Columbia, and Puerto Rico. Using the median household income data, we ranked all counties and determined the deciles for the 351 selected counties. We compared the percentage of remaining subsidized policies with nonsubsidized policies in SFHAs from the selected counties in each decile. Because these data are at the county level, areas within a county with relatively high or low household incomes are indistinguishable. We also analyzed the ACS median home value and median household income data together, at the county level. Because consistent, nationwide aggregate data on sales prices for each property covered by a remaining subsidized pre-Flood Insurance Rate Map (FIRM) policy since 1968 were not available from NFIP or other sources, we determined that the home value analysis was sufficiently similar to provide an indication of sales prices to respond to this study element. We also used NFIP policy fiscal year-end data from 2002 through 2012 to estimate the potential annual rate of decline in the number of remaining subsidized policies over time. Consistent, nationwide aggregate data on sales dates for each pre-FIRM property since 1968 were not available from NFIP or other sources. We compared sequential years of policy data to determine whether each policy with the characteristics of a remaining subsidized policy continued to have coverage. We first matched on company and policy data and, if no match was found, matched on owner name. If a policy in the first year failed to match by either method, we assumed that the policy no longer had coverage. We estimated the annual rate of decline for 10 sequential year pairs.
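A minimal sketch of the sequential-year matching used to estimate the annual rate of decline follows; the record fields and the sample data are simplified, hypothetical stand-ins for the NFIP fiscal year-end policy files.

# Hypothetical year-end policy snapshots; each record identifies a policy by
# insurer and policy number, with the owner's name as a fallback identifier.
year1 = [
    {"company": "WYO-A", "policy": "001", "owner": "SMITH J"},
    {"company": "WYO-A", "policy": "002", "owner": "JONES M"},
    {"company": "WYO-B", "policy": "117", "owner": "LEE K"},
]
year2 = [
    {"company": "WYO-A", "policy": "001", "owner": "SMITH J"},
    {"company": "WYO-C", "policy": "904", "owner": "LEE K"},  # renewed with a different insurer
]

def still_covered(record, next_year):
    # Match first on company and policy number; fall back to owner name.
    if any((r["company"], r["policy"]) == (record["company"], record["policy"])
           for r in next_year):
        return True
    return any(r["owner"] == record["owner"] for r in next_year)

matched = sum(still_covered(r, year2) for r in year1)
annual_decline_rate = 1 - matched / len(year1)
print(f"Estimated annual decline rate: {annual_decline_rate:.0%}")  # 1 of 3 policies lapsed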
We compared our results with a recent NFIP policy tenure study by calculating the decline rate from the reported tenure rate. We estimated the number of remaining subsidized policies over a 30-year period given the different annual decline rates. Because data were not available from NFIP on the number of times each pre-FIRM property had been sold, we determined that the policy decline rate analysis was sufficiently similar to provide an indication of the extent of ownership, or the length of time policies remained in the program, to respond to this study element. Additionally, because data were not available from NFIP’s databases on the extent to which pre-FIRM properties are currently owned by the same owners as at the time of the original NFIP rate map, we used the same policy decline rate analysis to respond to that study element as well. To estimate the financial impact, or cost, of subsidized properties to NFIP, we attempted to calculate forgone premiums—lost revenue to the program in premiums—due to subsidies. Because data on elevations of NFIP subsidized properties were not available to determine the total forgone premiums from subsidized policies, we used FEMA’s estimates of the subsidy rate from 2002 through 2011 to estimate a range of forgone premiums attributable to subsidized properties in this period. We limited our analysis to 2002 through 2011 because FEMA did not estimate subsidy rates prior to 2002. Lacking the information to calculate the ranges associated with the premiums that would have been collected, we made assumptions based on limited historical information from FEMA, including the annual Actuarial Rate Reviews from 2002 through 2011, which state that subsidized premiums were estimated to be between 35 and 45 percent of the full-risk premium (the subsidy rate). Our analysis did not adjust for potential effects on behavior (such as on program participation) or changes in operating expenses that could have occurred had historical rates not been subsidized. In addition, our analysis did not account for new information provided by FEMA officials that only a portion of subsidized premiums is available to pay for losses. We plan to analyze the impact of this new information, which FEMA provided in comments on a draft of this report, and will report the methodology and results of our estimate separately. FEMA did not report such estimates from 1978 through 2001. For the period before 2002, we analyzed a prior GAO report, FEMA’s annual actuarial review, and a PricewaterhouseCoopers study commissioned by FEMA and present qualitative information about the cost of subsidies. Additionally, because of the limited historical program data from FEMA, developing a sufficiently reliable year-by-year or state-by-state estimate of cost to NFIP as a result of remaining subsidized policies is not possible. To estimate the total losses incurred by subsidized properties since the establishment of NFIP and compare these with the total losses incurred by all structures charged a nonsubsidized premium rate, we analyzed the NFIP claims database as of June 30, 2012, to determine total losses attributable to remaining subsidized and nonsubsidized policies. Data were not available before 2002 that would allow us to determine whether a policy had the characteristics of a remaining subsidized policy.
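Before turning to the pre-2002 claims apportionment, the forgone-premium bounds and the 30-year policy projection described above can be illustrated with the rough sketch below. The premium amounts and annual decline rates are hypothetical; the 35 to 45 percent subsidized-premium share and the roughly 715,000 remaining subsidized policies are the figures cited elsewhere in this report.

# Hypothetical annual premiums collected on subsidized policies (dollars).
subsidized_premiums = [800e6, 850e6, 900e6]

# FEMA's Actuarial Rate Reviews estimated that subsidized premiums were 35 to
# 45 percent of the full-risk premium; inverting that share bounds the
# additional premiums that would have been collected at full-risk rates.
def forgone_range(collected):
    low = collected / 0.45 - collected    # subsidized premiums at 45% of full risk
    high = collected / 0.35 - collected   # subsidized premiums at 35% of full risk
    return low, high

total = sum(subsidized_premiums)
low, high = forgone_range(total)
print(f"Forgone premiums: ${low / 1e9:.1f} billion to ${high / 1e9:.1f} billion "
      f"on ${total / 1e9:.1f} billion collected")

# Project the count of remaining subsidized policies over 30 years under
# different assumed annual decline rates (rates here are illustrative only).
def remaining_after(start_count, annual_decline_rate, years=30):
    return start_count * (1 - annual_decline_rate) ** years

for rate in (0.03, 0.05, 0.08):
    count = remaining_after(715_000, rate)
    print(f"At a {rate:.0%} annual decline, about {count:,.0f} policies remain after 30 years")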
To address the pre-2002 data gap, we estimated the proportion of claims for previously subsidized policies that were attributable to remaining subsidized policies, based on the average proportion in the claims data in the latest 10 years. To determine the premium income collected by NFIP as a result of subsidized policies, compared with premium income collected from properties charged a nonsubsidized rate, we analyzed annual NFIP premium data and data broken out by subsidy to determine the annual premiums of remaining subsidized and nonsubsidized policies. We estimated the proportion of previously subsidized premiums attributable to remaining subsidized policies based on the average proportion in the latest 10 years of NFIP policy data. To determine the options to reduce the financial impact of remaining properties with subsidized policies, we analyzed NFIP’s legislative history and reviewed FEMA documents as well as documents from insurance industry organizations and academic institutions to gather information on options to eliminate or reduce the financial impact of subsidized policies on NFIP. In addition, we interviewed NFIP officials and representatives of insurance industry organizations and floodplain managers. We also interviewed a nationally recognized academic knowledgeable about the financial impact and the public policy challenges associated with catastrophic events, and discussed previous studies on NFIP and other relevant studies on flood insurance issues. For all data sets used, we performed data testing and gathered information from issuing entities about possible data limitations. For the ACS, Zillow, and NFIP data sets, we interviewed officials about usability and reliability. We determined that each data set used was sufficiently reliable for our intended purposes. We conducted this performance audit from September 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We compared various characteristics of the remaining subsidized policies and nonsubsidized policies in SFHAs in selected counties. In addition, we conducted more detailed analysis of five counties for illustrative purposes. For our analysis of the financial characteristics of subsidized and nonsubsidized policies in SFHAs, we selected 351 counties that represented 78 percent of all remaining subsidized policies nationwide, 77 percent of all remaining subsidized policies for single-unit primary residences, and 77 percent of all NFIP policies. We selected all counties with 500 or more remaining subsidized policies for single-unit primary residences and the five counties in every state (and Puerto Rico) with the most remaining subsidized policies, regardless of number. Figure 8 shows the 351 selected counties and the number of remaining subsidized policies for single-unit primary residences under NFIP. For both remaining subsidized policies and nonsubsidized policies, a larger percentage of policies in V zones (coastal areas with a high risk of flooding) had the maximum coverage amount than policies in A zones (noncoastal areas with a high risk of flooding) (see fig. 9).
Also for both types of policies, V-zone policies represented a very small fraction of all policies in SFHAs. For example, 1.6 percent of remaining subsidized policies and 0.8 percent of nonsubsidized policies in SFHAs were in V zones. We analyzed NFIP coverage amounts (for remaining subsidized policies and nonsubsidized policies in SFHAs for single-unit primary residences) and county median home values together and determined that higher coverage amounts were associated with higher county median home values. Counties with higher median home values had higher percentages of remaining subsidized policies and nonsubsidized policies with the NFIP maximum coverage of $250,000 than counties with lower median home values (see table 7). In addition, counties with lower median home values generally had higher percentages of remaining subsidized policies and nonsubsidized policies with lower amounts of coverage (less than $100,000) than counties with higher median home values. However, nonsubsidized policies consistently had higher amounts of coverage. Specifically, in every decile of county median home value, a larger percentage of nonsubsidized policies had the maximum amount of NFIP coverage than remaining subsidized policies. Also in every decile of county median home value, a smaller percentage of nonsubsidized policies had lower amounts of coverage (less than $100,000) than remaining subsidized policies. We analyzed home value and household income indicators together and found that counties with the highest median household incomes and highest median home values had higher percentages of remaining subsidized policies than nonsubsidized policies in SFHAs. For example, 78 of the 351 selected counties were in the highest decile in both median home value and median household income (see table 8). About 26 percent of remaining subsidized policies compared with 7 percent of nonsubsidized policies in SFHAs were in these counties (see table 9). Remaining subsidized policies were also found in higher percentages than nonsubsidized policies in counties with lower median household incomes and lower median home values (the lowest six deciles). Counties with higher median household income generally also had higher median home values, but counties with higher median home values did not always have higher median incomes. We performed five case studies to illustrate results in specific counties (see fig. 10). We selected the counties based on the number of relevant NFIP policies, location, and reliability of city-level data. Case studies were chosen to offer a more in-depth, within-county view (how results vary across cities within selected counties). We performed the NFIP coverage and median home value analyses, but also used publicly available real estate data to examine city-level median home values within the county. We compared remaining subsidized and nonsubsidized policies in SFHAs (A and V flood zones are designated as SFHAs). These cases cannot be projected nationwide, and the results of our analysis from each county are independent of each other. Some of the results from these case studies matched our earlier results, and some did not. Los Angeles County, California; Ocean County, New Jersey; and Cook County, Illinois, had median home values in the top 10 percent of all counties.
Consistent with our earlier results for counties with the highest median home values, Cook and Los Angeles Counties had more remaining subsidized policies than nonsubsidized policies (95 percent and 71 percent of all policies for Cook County and Los Angeles County, respectively); however, Ocean County had fewer remaining subsidized policies (about 44 percent). Los Angeles and Ocean Counties had high percentages of both subsidized and nonsubsidized policies with maximum NFIP coverage and a low percentage of both types of policies at lower levels of coverage. However, Cook County had low percentages of maximum coverage policies. Pinellas County, Florida, and Pima County, Arizona, had median home values in the second decile of all counties. Although Pinellas County had many more policies than Pima County, both had slightly more remaining subsidized policies than nonsubsidized policies (55 percent and 57 percent of all policies for Pinellas County and Pima County, respectively). Pinellas County had lower percentages of policies at maximum coverage than Los Angeles and Ocean Counties but higher percentages than Pima and Cook Counties. Consistent with our analysis of NFIP coverage amounts, all five counties had lower percentages of remaining subsidized policies at maximum building coverage than nonsubsidized policies. Ocean County had the largest difference between nonsubsidized policies and remaining subsidized policies (77 percent versus 47 percent), and Pima County had the smallest difference (41 percent versus 26 percent). All counties had a higher percentage of remaining subsidized policies than nonsubsidized policies with building coverage less than $100,000, but in some counties the differences were smaller. The results of our analysis of the city median home value were mixed. In all counties except Los Angeles County, higher percentages of remaining subsidized policies than nonsubsidized policies were in cities in the lowest quartile of median home value, but in Cook and Pinellas Counties the differences were larger. In Pinellas County, 59 percent of the remaining subsidized policies were in cities in the lowest quartile of median home value. In the counties with V-zone policies (Los Angeles, Ocean, and Pinellas), a slightly higher percentage of remaining subsidized policies were in cities in the highest quartile of median home value than nonsubsidized policies. In Ocean County, more than 30 percent of remaining subsidized and nonsubsidized policies were in cities in the highest quartile, while in Pima County very few policies of either type were in cities in this quartile. In Los Angeles and Pima Counties, most policies of either type were in cities in the second and third quartiles. In Cook County, policies were not concentrated in any quartile. Additionally, fewer than 2 percent of policies were in V zones. Specifically, in the three counties with V-zone policies (Los Angeles, Ocean, and Pinellas), there were about 1,290 V-zone policies compared with about 72,000 A-zone policies. In each county, more V-zone policies were remaining subsidized policies than nonsubsidized policies. In Ocean and Los Angeles Counties, most V-zone policies of either type were in cities with median home values in the top quartile within the county. In Pinellas County, the V-zone policies were located in cities in all quartiles of median home value.
In addition to the contact named above, Jill Naamane and Patrick Ward (Assistant Directors); William Chatlos; Barb El Osta; Christopher Forys; Isidro Gomez; Cathy Hurley; Jacquelyn Hamilton; Karen Jarzynka-Hernandez; Courtney LaFountain; May Lee; Barbara Roesmann; Jena Sinkfield; Melvin Thomas; Frank Todisco; Sonya Vartivarian; and Monique Williams made key contributions to this report.
FEMA, which administers NFIP, estimated that in 2012 more than 1 million of its residential flood insurance policies--about 20 percent--were sold at subsidized rates; nearly all were located in high-risk flood areas. Because of their relatively high losses and lower premium rates, subsidized policies have been a financial burden on the program. Due to NFIP's financial instability and operating and management challenges, GAO placed the program on its high-risk list in 2006. The Biggert-Waters Act eliminated subsidized rates on certain properties and mandated that GAO study the remaining subsidized properties. This report examines (1) the number, location, and characteristics of properties that continue to receive subsidized rates compared with full-risk rate properties; (2) the information needed to estimate the historic cost of subsidies and establish rates for previously subsidized policies that reflect the risk of flooding; and (3) options to reduce the financial impact of remaining subsidized policies. GAO analyzed NFIP data on types of policies, premiums, and claims and publicly available home value and household income data. GAO also interviewed representatives from FEMA, insurance industry associations, and floodplain managers. The Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act) immediately eliminated subsidies for about 438,000 National Flood Insurance Program (NFIP) policies, but subsidies on an estimated 715,000 policies across the nation remain. Depending on factors such as policyholder behavior, the number of subsidized policies will continue to decline over time. For example, as properties are sold and the Federal Emergency Management Agency (FEMA) resolves data limitations and defines key terms, more subsidies will be eliminated. GAO analysis found that remaining subsidized policies would cover properties in every state and territory where NFIP operates, with the highest numbers in Florida, Louisiana, and California. In comparing remaining subsidized and nonsubsidized policies, GAO found varying characteristics. For example, counties with the highest home values, as well as those with lower home values, had larger percentages of subsidized than nonsubsidized policies. Data constraints limit FEMA's ability to estimate the aggregate cost of subsidies and establish rates reflecting actual flood risks on previously subsidized policies. FEMA does not have sufficient historical program data on the percentage of full-risk rates that subsidized policyholders have paid to estimate the financial impact of subsidies to NFIP--in terms of the difference between subsidized and full-risk premium rates. Also, because not all policyholders are required to provide documentation about their flood risk, FEMA generally lacks information needed to apply full-risk rates (as required by the Biggert-Waters Act) on previously subsidized policies. FEMA is encouraging these policyholders to voluntarily submit this documentation. Federal internal control standards state that agencies should identify and analyze risks associated with achieving program objectives and develop a plan for obtaining needed data. Without this documentation, the new rates may not accurately reflect a property's full flood risk, and policyholders may be charged rates that are too high or too low relative to their risk of flooding.
Options from GAO's previous and current work for reducing the financial impact of subsidies on NFIP include (1) adjusting the pace of subsidy elimination, (2) targeting assistance or subsidies based on financial need, or (3) increasing mitigation efforts, such as relocation or elevation, that reduce a property's flood risk. However, these options have advantages and disadvantages. Moreover, the options are not mutually exclusive, and combining them could help offset some disadvantages. FEMA should develop and implement a plan to obtain flood risk information needed to determine full-risk rates for properties with previously subsidized rates. FEMA agreed with the recommendation.
In 1987 the United States and its six major trading partners created the MTCR to restrict the proliferation of missiles and related technology. The MTCR, the only multilateral missile nonproliferation regime, is a voluntary arrangement among countries that share a common interest in arresting missile proliferation. It is not a treaty. The regime consists of common export policy guidelines applied to a common list of controlled items that each MTCR member implements in accordance with its national legislation. Currently, 25 states are formal partners to the MTCR, while an additional 7 states, including China, have adhered or declared an intention to adhere to the MTCR Guidelines. (See app. I for a complete list of current MTCR partners and adherents or declared adherents.) The MTCR Annex divides controlled items into two categories, Category I and Category II items. Category I items are subject to a strong presumption of denial and are rarely licensed for export. They include such items as complete missile systems; unmanned air-vehicle systems, such as cruise missiles; and certain complete subsystems, such as rocket engines and guidance sets. Category II (dual-use) covers a wide range of commodities, including propellants, test equipment, and flight instruments, that could be used for missiles or satellite launches. Category II items must be evaluated case-by-case against specified criteria and, if judged to be destined for use in weapons of mass destruction (nuclear, chemical, or biological), are subject to a strong presumption of denial. Federal law regulates the exports of missiles and related technology and requires licenses for the export from the United States of certain missiles, components, and technology specified in the MTCR Annex. The State Department supervises and directs all governmental arms transfers and licenses commercial arms transfers, including U.S. exports of missile items and technology. The Commerce Department licenses exports to all countries of dual-use goods and technology that are controlled for missile technology reasons pursuant to the MTCR Annex. It has jurisdiction over production equipment for MTCR Annex items, which is controlled as either Category I or Category II, depending on the type of equipment involved. Violators of U.S. export laws are subject to criminal and civil penalties and economic sanctions. Federal laws require the President to impose sanctions on U.S. and foreign individuals and entities that improperly conduct trade in controlled missile technology. Also, for a country with a nonmarket economy, such as China, such sanctions would apply, with some qualifications, to all activities of that government (1) relating to the development or production of any missile equipment or technology and (2) affecting the development or production of electronics, space systems or equipment, and military aircraft. MTCR-related licenses comprised a very small portion of total export license activity for China. However, DOD has questioned whether Commerce has been adequately identifying for interagency referral and review all the applications for the export of dual-use missile-related technologies. The Commerce Department initially determines which commodities might contain missile technology.
Commerce independently determines which dual-use license applications do not involve missile technology; if it believes that an application might contain missile technology and the destination is a country of concern, it is to refer the application to the interagency Missile Technology Export Controls (MTEC) group. The group consists of working-level representatives of DOD, the Departments of State and Commerce, the Joint Chiefs of Staff, Arms Control and Disarmament Agency (ACDA), National Aeronautics and Space Administration, U.S. Customs Service, the intelligence community, and others at the invitation of the Chair and concurrence of the group. The MTEC’s charter calls for it to meet as required to review license applications for U.S. exports of missile proliferation concern, referred according to agreed criteria. The MTEC evaluates the transfer in terms of the MTCR and U.S. nonproliferation policy. Commerce can also refer applications to the Central Intelligence Agency’s Nonproliferation Center for information on the suitability of end users. In addition to the multilateral MTCR, the Enhanced Proliferation Control Initiative (EPCI) of December 1990, a unilateral U.S. control, provides a “catch-all” control by directing that items going to destinations of concern, regardless of whether they are on proliferation control lists, are to be referred to the interagency review process. The Initiative expanded missile technology export controls by requiring U.S. exporters to request an export license for any item that they know or have been informed by the U.S. government is destined for a project of proliferation concern. The Initiative was designed to give the U.S. government a safety net by allowing it to apply export controls when it learns about a pending transaction that risks helping a weapon program but is not explicitly covered by the current Commerce Control List. To deter and detect the diversion of dual-use exports to proliferation activities, Commerce or other consulting agencies may request pre-license checks or post-shipment verifications. Pre-license checks are used to establish the legitimacy of the end user or verify the intended use of the export; post-shipment verifications are used to ascertain whether exported items are being used appropriately. The State Department operates a similar program of end-use checks, called the BLUE LANTERN program. The government may also seek assurances from foreign governments that items will not be diverted to proliferation-related uses. The Commerce and State Departments approved a total of 67 export licenses worth about $530 million for missile-related items for China for fiscal years 1990 through 1993. Figure 1 shows the final action that each agency took for all export license applications for China involving missile technology during this period: State approved 48 licenses and denied 10 applications (8 percent), and Commerce approved 19 licenses and denied 9 applications (8 percent). Between fiscal years 1990 and 1993, the Commerce Department identified 33 export license applications for China as containing missile-related technology commodities. It approved, with interagency concurrence, 19 of these applications, valued at about $6.5 million. During the same period, Commerce approved a total of 8,600 applications for China, valued at about $6.4 billion, out of a total of 10,860 applications for exports to China.
Thus, Commerce-identified dual-use missile technology exports totaled less than 1 percent of all exports requiring individual validated licenses to China. (See app. III for a complete list of dual-use applications for China approved by Commerce after MTEC review.) Figure 2 shows the final status for all 10,860 Commerce Department export license applications for China for fiscal years 1990 through 1993, with approved applications broken down by Export Control Classification Number category. At the time of our review, commodities that were subject to foreign policy controls on weapons delivery systems were grouped under 116 Export Control Classification Numbers (ECCNs) listed in the U.S. Export Administration Regulations. The Commerce Department also can refer items that are contained in other ECCNs to interagency review for potential missile technology. Of the approved applications shown in the figure, 5,281 fell under partial missile technology ECCNs and 3,024 under non-missile technology ECCNs; other actions include applications returned without action, revoked, suspended, or withdrawn, and percentages do not total 100 percent due to rounding. Between fiscal years 1990 and 1993, the State Department identified 85 export license applications for China as containing missile-related technology commodities. State approved, with interagency concurrence, 48 of these applications—40 with provisos—valued at $523.5 million. During the same period, State approved a total of 96 applications for other arms exports for China, out of a total of 369 applications. U.S. Munitions List license applications for China for the fiscal years 1990-93 period generally were related to (1) satellite equipment, (2) aircraft spare parts, and (3) technical data. DOD officials have expressed concern that Commerce is not referring potential missile technology applications for interagency review. Commerce is solely responsible for deciding if dual-use export license applications are not missile-related technology. In those cases where Commerce determines that applications are not missile-related technology, it does not share all data with other agencies. There currently is no routine mechanism for DOD or other agencies to understand or question Commerce’s analysis and conclusions on the full range of 8,600 approved licenses for China between fiscal years 1990 and 1993, aside from the 33 applications that Commerce referred for interagency review. As a result, there is little transparency into the dual-use missile technology licensing process for officials outside the Commerce Department. Increasing the transparency of the license applications that Commerce reviews would either allow other agencies to find deficiencies in Commerce’s efforts to identify missile-related exports or, conversely, reassure them that Commerce’s review procedures are appropriate and properly implemented. Commerce officials said that Commerce has sole responsibility for classifying commodities on the Commerce Control List. According to the officials, although classification is routinely a clear-cut technical matter of checking the parameters on the Control List against the technical specifications of the item on the application, occasionally some interpretation is required. In some cases, making this determination is difficult and requires further review and consultation.
Commerce officials also said that, according to agreed interagency procedures, DOD reviewed all Commerce license applications for China for national security reasons and MTCR Annex items, except where there were specific delegations of authority to Commerce. However, high-level Defense Technology Security Administration officials said that they were unfamiliar with referral criteria for MTCR Annex items and that there was no written agreement on such referrals between DOD and the Commerce Department. In fact, DOD requested a review of the criteria and referral procedures in May 1994 and corresponded with Commerce several times on how to implement such a review. Also, the current and past chairmen of MTEC criticized Commerce’s referral of missile technology cases for interagency review. The current chairman said that Commerce would not release to State the Licensing Officer’s Operating Manual, which contains referral criteria. These officials further said that Commerce does not have the technical expertise to properly review missile technology applications and should not be pre-screening them. Commerce Department officials believe that the question of referrals and other agencies’ concerns has already been resolved by the executive branch’s 1994 proposal to amend the Export Administration Act. According to Commerce officials, that proposal would have afforded all relevant agencies, including DOD, the right to see all dual-use license applications. However, the proposed legislation was not enacted and the executive branch has not implemented this provision. In November 1994, Commerce Department officials began discussing with the Defense Technology Security Administration means to implement the proposal. While Commerce said that it refers virtually all applications for exports to China, as indicated above, our review of Commerce database information indicated that Commerce referred to DOD less than 49 percent of all approved applications for exports to China in fiscal year 1993, and referred to the Coordinating Committee for Multilateral Export Control less than 47 percent of all approved applications for exports to China for the same period. In addition, a September 1993 report by a joint team of four inspector general offices noted that there is no agreement between Commerce and most of the other federal agencies regarding which export applications should be referred for comments. Although not specifically addressing missile technology licenses, the report’s findings emphasized the agencies’ general concerns with Commerce’s referrals of export licenses. It concluded that, until this issue is resolved, the agencies will not have adequate assurance that the license review process is working as efficiently and effectively as it should. The agencies involved—State, Commerce, DOD, and Energy—generally agreed with the concerns raised about interagency referral issues. (See app. II for information about the disposition of various applications by dollar value processed by the Commerce Department.) Licensing process controls for dual-use and missile technology export applications cannot ensure that U.S. proliferation-related dual-use and munitions exports to China, aside from separately monitored satellite exports, are kept from sensitive end users. We did not find direct evidence of diversions of U.S.-supplied dual-use technology or of exports of commodities to China approved in contradiction of export licensing procedures. However, we noted that a DOD classified report indicated that diversions might have occurred.
Also, we requested that officials of the involved agencies assess whether specific exports that did not receive interagency review might have benefited from such review, but this request was denied. (See app. IV for a discussion of our methodology to identify such evidence and the limitations that the executive branch placed on our efforts to find such evidence.) An important premise of the U.S. export licensing process is the ability to assess legitimate end uses and end users of U.S. technology exports. According to the MTCR Guidelines, in evaluating the transfer of MTCR Annex items, the licensing process will consider, among other factors, (1) the capabilities and objectives of the missile and space programs of the recipient state; (2) the significance of the transfer in terms of the potential development of delivery systems (other than manned aircraft) for weapons of mass destruction; and (3) the assessment of the end-use of the transfers. The Commerce Department’s missile technology licensing procedures, set out in the section of the Licensing Officer’s Operating Manual labeled “MTCR Determination,” require missile technology review if an application lists identified classified entities—end users in a country listed in a separate classified memorandum—as the end user and/or ultimate consignee, regardless of the reason for control. In addition, on any application, when the end use is missile-related, the end user is known to be involved in missile activities, or questions are raised, missile technology review is required. The procedures note that it is especially important to have detailed information on the end use. Commerce Department procedures permit Commerce officials to refer license applications to the Central Intelligence Agency’s Nonproliferation Center for assistance in identifying sensitive end users. However, the Central Intelligence Agency recommended 22 general types of foreign end users that Commerce could exempt from Nonproliferation Center review. These types include some foreign government entities whose activities are usually self-explanatory, public service organizations, and some foreign trade organizations. Available data showed that about 31 percent of all 10,860 license applications for China during fiscal years 1990 through 1993 were referred to the Nonproliferation Center. However, Commerce officials said that the actual percentage would be higher because inconsistent recording of license application referrals by licensing officers precluded an accurate accounting of the number of applications referred to the Nonproliferation Center. Officials from various U.S. government agencies indicated that it is difficult to determine which companies in China are truly privately owned and operated and which are adjuncts to the Chinese government. Sometimes, however, agencies within the intelligence community disagreed over the extent of the problem. A 1993 DOD report cited multiple examples of suspected diversion or use of U.S. civilian technology in China’s aeronautics and astronautics industries. The Central Intelligence Agency’s Nonproliferation Center characterized the report as overstating the case, but did not question the potential for diversion in many of the cases cited. Information that is available on sensitive end users in China is not always shared efficiently or routinely between the intelligence and licensing communities.
In June 1994 we reported that, although State and Commerce each use an automated computer system to screen export applications for ineligible or questionable parties, they did not include on their watchlists many pertinent individuals and companies. We also noted that the agencies do not routinely share names on their respective watchlists, and their procedures to add names to their lists and ensure that data are complete and current are inadequate. Commerce noted that, although it disagreed with the report’s conclusions, it agreed to share with State all potentially pertinent parts of each agency’s watchlist. Also, neither the licensing nor the intelligence community maintains a central database on sensitive end users of missile-related technology for routine intelligence or information sharing with Commerce. Several U.S. government organizations, such as the Los Alamos and Lawrence Livermore National Laboratories, and organizations within DOD, independently maintain—or plan to create—databases containing sensitive end-user information. In fact, a May 1994 report by the Office of Technology Assessment noted that multiple agencies are already developing their own unique proliferation databases for internal use, rather than coordinating their efforts. U.S. government officials believe that the U.S. government generally performs adequate monitoring of China’s compliance with the terms of its MTCR commitments not to export MTCR technology out of China. However, the U.S. government performs limited monitoring of China’s compliance with conditions attached to U.S. missile-related technology exports. The intelligence community has primary responsibility for monitoring countries’ adherence to MTCR commitments. The interagency Missile Trade Analysis Group analyzes intelligence information concerning missile proliferation and the MTCR. The group consists of working-level representatives of DOD, the Departments of State and Commerce, the Joint Chiefs of Staff, ACDA, National Aeronautics and Space Administration, U.S. Customs Service, the intelligence community, and others at the invitation of the chair and concurrence of the group. U.S. government officials generally expressed confidence in U.S. monitoring abilities to detect violations of MTCR commitments not to export such technology. To the degree that Commerce and State monitor license conditions relevant to “no retransfers, resales, or reexports” of U.S.-licensed missile technology commodities, they share indirect responsibility for monitoring adherence to MTCR commitments. Both the Commerce Department’s pre-license check and post-shipment verification program and the State Department’s BLUE LANTERN program are restricted in China. They are restricted partly because the Chinese government does not accept that cooperation with U.S. pre-license checks and post-shipment verifications should be linked to U.S. approval of export license applications. According to an Assistant Secretary of the International Trade Administration, Commerce has not given China a clear demonstration that an application would be rejected if there is no pre-license check. DOD, on the other hand, insists that it oversee foreign launches of U.S.-built satellites in China through its Technology Safeguards Monitoring Program. Commerce policy does not require that pre-license checks be completed before an export license application is approved. Commerce data showed that it requested three pre-license checks for applications involving missile-related technology.
Two were conducted and one was canceled. Commerce officials said that the application with the canceled pre-license check was approved after interagency review, while the other two applications were not. Commerce returned the second application without action and advised the applicant to apply to the State Department because it determined that the license application was under State’s jurisdiction. The third application was rejected. Commerce officials said that the pre-license check for the approved missile technology application was canceled the same day it was requested. Commerce officials noted that pre-license checks can be canceled for legitimate reasons. For example, one pre-license check was canceled after the U.S. Embassy in Beijing provided additional information on the transaction, according to Commerce officials. In comparison, for all types of exports, the Commerce Department requested a total of 77 pre-license checks for China between fiscal years 1990 and 1993, and conducted 37 checks, or about 48 percent, while 22 pre-license checks were canceled for various reasons, and 18 were still pending at the time of our review. Compared with 20 other countries of proliferation concern, China had the lowest percentage of completed pre-license checks. Commerce records showed that nine of the export license applications whose requested pre-license checks were canceled received an approved license. The U.S. Embassy conducted no post-shipment verifications related to missile technology. One was requested for a missile technology export, but was canceled when the license expired without the shipment being made. In comparison, the U.S. Embassy conducted one post-shipment verification with the authorization of the Chinese government out of a total of seven requested for all types of export items. Commerce officials indicated that a post-shipment verification also was requested and canceled for the one missile technology license with a canceled pre-license check noted above. MTEC dropped its request for the condition after a Commerce official said that it would be difficult to conduct the post-shipment verification in China. The group instead required the exporter to report to Commerce after it installed the item. At the time of this report, the export had not been shipped. Commerce officials said that Commerce conducted few pre-license checks because of such factors as Chinese sensitivity over sovereignty issues and the time, cost, and distances required to conduct the checks. Noting that discussions were in progress with China on expanding pre-license checks and post-shipment verifications, Commerce officials said they expect no breakthroughs in the near future. According to these officials, Commerce has made continuous efforts for the past 10 years to reach an understanding with China on routinely allowing the United States such checks and verifications, without success. The Foreign Commercial Service Officer at the U.S. Embassy in Beijing is responsible for conducting pre-license checks. However, he said that his role is split between conducting checks and his trade promotion activities. The export controls function is secondary to the trade promotion role. Although some Foreign Commercial Service Officers at consulates in China in the past year have been tasked and trained to conduct pre-license checks, they do not have the required backgrounds for this function and also face conflicts with their trade promotion duties.
The Foreign Commercial Service Officer in Beijing said that those at the consulates would have difficulty conducting pre-license checks in China unless they received well-written cables detailing what to look for. Little monitoring was required of China’s compliance with the conditions associated with the five missile technology export licenses that included provisos as conditions of approval. Of the five licenses with conditions, only two required that the exporter provide subsequent documentation. In one case, receipt of the documentation would have initiated a post-shipment verification. The Commerce Department did no follow-up on this 1992 license until 1994, when it learned that the shipment was never sent. Commerce officials said that there would be no follow-up until receipt of the exporter’s documentation indicating that the shipment had been made. They also noted that the license would expire after 2 years, at which time Commerce would verify that the shipment had not occurred. The interagency MTEC Group, which recommended approval of the license with the proviso, did no follow-up to ensure that the condition was included as part of the license or that the post-shipment verification was ever done. The interagency group typically trusts the licensing agency to implement its recommendations, according to the group’s chairman. In the other case, the exporter was required to report on its installation of equipment after it occurred. Commerce records indicate that the export had not been shipped at the time of this report. Our previous report concerning end-use checks found systemic weaknesses in the pre-license check/post-shipment verification program for nuclear dual-use items. In the September 1993 special interagency report, which included China in its review, the Commerce Department’s Inspector General reported that there is no assurance that either pre-license checks or post-shipment verifications are achieving their objectives. We found in China some of the same conditions identified in these two reports, such as insufficient information provided to Foreign Commercial Service Officers in requesting cables and misleading data in the Bureau of Export Administration’s database for tracking the status of pre-license checks. The State Department’s BLUE LANTERN end-use check program in China is minimal. State currently performs few BLUE LANTERN checks in China because relatively few Munitions List exports are licensed for China. State Department officials said that relatively few Munitions List licenses are granted to China because of (1) the “Tiananmen Square” sanctions, established by Public Law 101-246, which suspended exports of items on the U.S. Munitions List to military and security end users unless a presidential waiver is obtained, and (2) existing International Traffic in Arms Regulations, which require approval of exports to China only as an exception to the standing U.S. policy of denial since China is a proscribed destination. Most of these exports involve satellite projects, monitored under the separate DOD program. According to a State Department official, most of the few remaining munitions items licensed to China are not militarily significant or are not amenable to post-license verification. During fiscal years 1990 through 1993, State requested no pre-license checks for missile technology exports.
In comparison, three pre-license checks were requested for other non-missile export applications handled by State. Two of the requests were canceled and State issued the licenses, but they were never used. The State Department completed the third check. The State Department requested one post-shipment verification during this period for the application that had received the pre-license check, but could not verify the results. In addition, most of the missile technology exports during the 4-year period involved satellite technology associated with launches of foreign-owned satellites on Chinese boosters. DOD’s Technology Safeguards Monitoring Program provides for continuous monitoring of such exports while they are in China. From December 1989 through January 1993, DOD participated in monitoring five launch campaigns of U.S. satellite equipment launched by Chinese rockets. Personnel with technical and engineering backgrounds and experts on space systems and test ranges performed the monitoring. China’s 1992 commitments to the MTCR were limited and ambiguous. State Department officials agreed that the terms of China’s commitments contained ambiguities. On the other hand, the terms of U.S. expectations for China’s commitments were straightforward and unambiguous. Nevertheless, these expectations were based on some outdated MTCR standards, which differed from the changed standards subsequently agreed to by MTCR members. The different expectations remained unreconciled. In October 1994, China renewed its commitment to the original MTCR Guidelines and Annex in a signed bilateral statement. This statement further committed China not to sell Category I ground-to-ground missiles and technology to any country. Moreover, China resolved a key ambiguity in its 1992 commitment by agreeing to define MTCR-class missiles using a U.S.-proposed concept. The 1992 U.S.-Chinese understandings were based on a series of classified diplomatic exchanges. The United States established clear standards against which to measure Chinese behavior, even though it could not be certain that the Chinese government agreed with the 1992 standards. Relative to the 1992 commitments, the October 1994 Chinese commitments are phrased in a jointly agreed manner and are more clearly stated. MTCR partners’ commitments to the regime include abiding by the terms of the current MTCR Guidelines and Annex. These provide no payload threshold. China was committed, on the other hand, to only the original 1987 MTCR Annex and Guidelines in effect at the time of its original commitment. At that time, the purpose of the regime was to limit the spread of missiles and unmanned air vehicles/delivery systems capable of carrying a 500-kilogram (1,100-pound) payload at least 300 kilometers (186 miles). MTCR partners revised the MTCR Guidelines in January 1993 to cover delivery vehicles for all types of weapons of mass destruction (chemical and biological as well as nuclear), regardless of their payload, and revised the annex, most recently in July 1994, to make its terms more specific. Under the terms of the October 1994 commitment, China and the United States will conduct in-depth discussions concerning a Chinese commitment to the current MTCR Guidelines and Annex and prepare the way for eventual Chinese MTCR membership, according to a State Department official. The effectiveness of U.S.
sanctions on China is difficult to determine because, to date, no consensus on a definition of, or criteria for, measuring the effectiveness of proliferation sanctions imposed on China has been established. In fact, State Department officials said that they are not responsible for assessing the effectiveness of proliferation sanctions, which are congressionally mandated, and that assessing them is not required by the Arms Export Control Act or other laws. In June 1991, the U.S. government imposed sanctions on two Chinese entities because of their trade in missile technology. The U.S. government waived sanctions against these entities in 1992 when the Chinese government committed to observing the MTCR Guidelines. In August 1993, the U.S. government imposed sanctions on 10 Chinese entities, upon determining that they had transferred missile technology from China to Pakistan. However, in October 1994, the State Department announced that the U.S. government would lift these sanctions on Chinese entities in exchange for new Chinese missile nonproliferation commitments, including a reaffirmed commitment to the MTCR. These sanctions subsequently were lifted. In addition, Congress legislated sanctions specifically against China in response to the June 1989 massacre at Tiananmen Square. These sanctions included suspension of (1) all exports of items on the U.S. Munitions List to China, including items for inclusion in civil products if intended for end users in Chinese military or security forces, and (2) the license for the export of any U.S.-manufactured satellites for launch on launch vehicles owned by China. The President can waive either of these suspensions. In addition, munitions items are approved for export to China only as exceptions to the standing U.S. policy of denial because China is a proscribed destination under the International Traffic in Arms Regulations. This prohibition also must be waived in order to approve an export. State Department and ACDA officials attribute China’s agreeing to the original MTCR as of March 1992 to the proliferation sanctions in place at that time. ACDA officials and the State Department indicated that the 1991 proliferation sanctions on two Chinese companies were effective because China met the U.S. condition for suspending the sanctions—declaring adherence to the MTCR Guidelines and Annex. Discussions with numerous experts, including those from involved U.S. government agencies, yielded several suggestions that the effectiveness of sanctions could be measured in terms of (1) limits on exports to sanctioned entities, (2) changes in China’s missile proliferation behavior, and (3) China’s agreement to the current MTCR Guidelines and Annex. During our review, we learned the following: U.S. export licensing procedures call for automatically denying export licenses for sanctioned entities. Licenses for MTCR Annex items to sanctioned entities require presidential waivers of both the general missile sanctions and the “Tiananmen Square” sanctions and must be reported to Congress. A number of such waivers were granted and duly reported. Several analysts saw no change in China’s missile program or proliferation behavior resulting from the 1993 proliferation sanctions. The 1993 proliferation sanctions have not yet resulted in China’s agreement to commit to the current MTCR Guidelines and Annex.
Rather, China in October 1994 committed to further discussions on the MTCR, which will include the issue of a Chinese commitment to the current MTCR, according to a State Department official. To ensure that the appropriate licenses are referred to the MTEC Group, we recommend that the Secretary of Commerce provide periodic reports to the interagency group on those dual-use licenses for China whose commodities are classified under ECCNs containing items subject to missile technology controls. The reports should include, as a minimum, license and ECCN numbers, names of the end user and/or ultimate consignee, end-use descriptions, and descriptions of the commodities to be licensed. We further recommend that the Secretaries of DOD, Commerce, and State and the Director of ACDA use licensing information contained in these reports to establish mutually acceptable criteria and guidelines for selection of other licenses for interagency review. We recommend that the Secretary of Commerce establish criteria to determine under what conditions approval of dual-use technology exports to China should be conditioned on the successful performance of pre-license checks. Such criteria might include the nature and proliferation credentials of the end user, the potential end uses of the commodities to be exported, or the favorable outcome of the check. As requested, we did not request written agency comments. However, we discussed the results of our work with officials from DOD, the Departments of Commerce and State, and ACDA. Commerce officials said that the other agencies’ characterizations of problems with its licensing application referral efforts were unsubstantiated and unfounded. However, State, DOD, and ACDA officials generally agreed with the information in this report. Each of these agencies provided suggestions and comments to improve the clarity and technical accuracy of the report. We have incorporated their suggestions and comments into the body of the report where appropriate. We believe that implementing our recommendations would go a long way toward reconciling the concerns among the involved agencies. Our work was performed from October 1993 through October 1994 in accordance with generally accepted government auditing standards. The scope and methodology for our review is discussed in appendix IV. We plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of the report to other interested congressional committees; the Secretaries of State, Commerce, and DOD; and the Director of ACDA. Upon request, copies may also be made available to others having appropriate security clearances and a need to know. If you or your staff have any questions concerning this report, please call me on (202) 512-4128. Major contributors to this report are listed in appendix V. Commodities on export license applications that are subject to foreign policy controls on weapons delivery systems were grouped under 116 Export Control Classification Numbers (ECCN) listed in the U.S. Export Administration Regulations at the time of our review. Exporters are instructed to consult the “Reason for Control” paragraph in each number to determine the specific item subject to these foreign policy controls. In practice, the 116 ECCNs subject to control for missile technology reasons were divided at the time of our review into 85 “entire entry” ECCNs and 31 other missile technology ECCNs that would contain at least 1 item relevant to missile technology. 
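To make the recommended periodic reporting concrete, the sketch below shows one way license records could be screened against the ECCNs controlled for missile technology reasons and flagged for interagency review. It is an illustration only; the field names, sample records, and the assignment of particular ECCNs to the "entire entry" or "partial entry" groups are hypothetical and are not taken from the report.

```python
# Illustrative sketch only: flag dual-use license applications whose ECCNs are
# subject to missile technology controls, as candidates for interagency review.
# The grouping of ECCNs below and the sample records are hypothetical.

ENTIRE_ENTRY_MT_ECCNS = {"7A23", "9B27"}     # hypothetical: every item in the entry is controlled
PARTIAL_MT_ECCNS = {"1B21", "3A96", "5A20"}  # hypothetical: entry contains at least one controlled item

sample_applications = [
    {"license": "D0001", "eccn": "7A23", "end_user": "End user A", "commodity": "inertial equipment"},
    {"license": "D0002", "eccn": "4A02", "end_user": "End user B", "commodity": "general-purpose computer"},
    {"license": "D0003", "eccn": "5A20", "end_user": "End user C", "commodity": "telemetering equipment"},
]

def flag_for_review(applications):
    """Return applications classified under missile-technology-controlled ECCNs."""
    flagged = []
    for app in applications:
        if app["eccn"] in ENTIRE_ENTRY_MT_ECCNS:
            flagged.append({**app, "control": "entire entry"})
        elif app["eccn"] in PARTIAL_MT_ECCNS:
            flagged.append({**app, "control": "partial entry"})
    return flagged

for app in flag_for_review(sample_applications):
    print(app["license"], app["eccn"], app["control"], app["end_user"], app["commodity"])
```

An actual report would, of course, draw on the licensing database itself and carry the license numbers, end users, end-use descriptions, and commodity descriptions called for in the recommendation.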
The following figures show the dollar values of U.S. export license applications and approved licenses for dual-use commodities to China for the period fiscal years 1990 through 1993. Figure II.1 shows the value of exports to China, licensed by the Commerce Department, according to their ECCNs for fiscal years 1990 through 1993: non-missile technology licenses, $3,192.0 million; partial missile technology ECCNs, $3,187.7 million; and entire entry missile technology ECCNs, $60.2 million (0.9 percent). Figure II.2 shows the values of all Commerce Department license applications for exports to China for fiscal years 1990 through 1993: approved, $6,439.1 million; other actions, $1,552.8 million; and denied, $73.1 million (0.9 percent). The missile technology ECCN entries cover items such as the following: telemetering and telecontrol equipment suitable for use with aircraft (piloted or pilotless) or space vehicles, and test equipment specially designed for such equipment; 1B21 (2), other equipment for the production of fibers, prepregs, preforms, or composites, and other test, inspection, and production equipment for materials; tungsten, molybdenum, and alloys of these metals in the form of uniform spherical or atomized particles of 500 micrometer diameter or less with a purity of 97 percent or higher for fabrication of rocket motor components, that is, heat shields, nozzle substrates, nozzle throats, and thrust vector control surfaces; propellants, constituent chemicals, and polymeric substances for propulsive propellants; pipes, valves, fittings, heat exchangers, or magnetic, electrostatic or other collectors made of graphite or coated in graphite, yttrium compounds resistant to the heat and corrosion of uranium vapor; vibration test equipment; spin-forming and flow-forming machines specially designed or adapted for use with numerical or computer controls and specially designed parts and accessories therefor; radiographic equipment (linear accelerators) capable of delivering electromagnetic radiation produced by "bremsstrahlung" from accelerated electrons of 2 MeV or greater or by using radioactive sources of 1 MeV or greater, except those specially designed for medical purposes; electronic test equipment in Category 3A, not elsewhere specified; 3A96 (2), other equipment, assemblies, and components in Category 3A, not elsewhere specified; 5A20 (4), telecontrol and telemetering equipment; equipment specially designed for the "development," "production," or use of equipment, materials, or functions controlled by the entries in the telecommunications sections of Category 5 for national security reasons; photosensitive components not controlled by ECCN 6A02; 7A23 (2), inertial or other equipment using accelerometers or gyros described in 7A21B or 7A22B, and systems incorporating such equipment and specially designed components therefor; and 9B27 (2), test benches or stands that have the capacity to handle solid or liquid propellant rockets or rocket motors of more than 20,000 pounds of thrust, or which are capable of simultaneously measuring the three axial thrust components. To develop information for this report, we talked to cognizant officials and obtained documents in the Washington, D.C., area from the Departments of Commerce, State, and Defense, and at the Arms Control and Disarmament Agency, and the U.S. Customs Service. In addition, we discussed the MTCR, China, and missile proliferation issues with officials at the Defense Intelligence Agency, Central Intelligence Agency, National Security Agency, and the National Air Intelligence Center at Wright-Patterson Air Force Base, Ohio.
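As a rough cross-check on the percentage labels reported with figures II.1 and II.2 above, the following sketch recomputes the shares from the dollar values shown in those figures. It assumes the listed categories make up the complete total for each figure.

```python
# Recompute the percentage shares implied by the dollar values (in millions)
# reported for figures II.1 and II.2; assumes the categories shown are the
# complete breakdown for each figure.

figure_II_1 = {  # approved licenses to China by ECCN type, FY 1990-1993
    "Non-missile technology licenses": 3192.0,
    "Partial missile technology ECCNs": 3187.7,
    "Entire entry missile technology ECCNs": 60.2,
}

figure_II_2 = {  # all license applications for exports to China, FY 1990-1993
    "Approved": 6439.1,
    "Other actions": 1552.8,
    "Denied": 73.1,
}

def shares(breakdown):
    """Return each category's share of the total, in percent."""
    total = sum(breakdown.values())
    return {label: 100 * value / total for label, value in breakdown.items()}

for name, data in [("Figure II.1", figure_II_1), ("Figure II.2", figure_II_2)]:
    print(name)
    for label, pct in shares(data).items():
        print(f"  {label}: {pct:.1f}%")
# Entire entry missile technology ECCNs and denied applications each come to
# roughly 0.9 percent, consistent with the labels in the figures.
```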
We reviewed annual proliferation reports to Congress, a report on exports of sensitive technologies to Chinese sensitive end users, hard copy of a database on sensitive end users in China, and excerpts pertaining to China of the log of an MTCR interagency group. We also talked with officials at the Lawrence Livermore National Laboratory in Livermore, California, and Los Alamos National Laboratory in Los Alamos, New Mexico. We reviewed files and talked with U.S. government officials at the U.S. Embassy in Beijing, China, and the American Consulate General in Hong Kong. In addition, we met with officials of the Chinese government in Beijing, China, to discuss U.S. export controls and U.S. sanctions on China. Also, we discussed export controls, missile proliferation issues, and potential diversions of U.S. missile technology into China with Hong Kong government officials. To assist us in identifying sensitive end users in China receiving missile technology, we provided a sample of export licenses drawn from the Commerce Department’s Export Control Automated Support System and approved by the Commerce Department to teams of analysts at the Defense Intelligence Agency and National Security Agency. The licenses were categorized under ECCNs designated as controlled for missile technology reasons. The analysts provided some information on sensitive end users, but the Commerce Department, after a technical review of the data, said that the license applications did not involve restricted missile technology. To assist us in performing an independent technical evaluation of Commerce Department license approvals, we originally requested three teams of analysts from the Defense Intelligence Agency, National Security Agency, and Defense Technology Security Administration to indicate if the available information on specific exports and technology might have suggested the need for interagency review. This was important because the Commerce Department makes unilateral determinations that license applications are not MTCR-related and, therefore, do not require full interagency review for approval. We also asked that they identify sensitive end users among the listed ultimate consignees on the applications to be provided. After we presented this request to the teams of analysts and one team agreed to provide this analysis, we were told that a high-level interagency meeting of involved agencies resulted in directing the agencies of the three teams not to provide an analysis of the need for interagency review because it was not within their authority to do so. Consequently, two teams agreed to perform the analysis of sensitive end users only. As a result, we were unable to benefit from the expertise of the technical specialists in assessing the technology of the sample of licenses and the appropriateness of Commerce Department decisions. In addition, the agency of the third team of analysts did not decide within our required timeframes whether or not it would participate in the requested analysis. To evaluate the Commerce Department’s pre-license check/post-shipment verification program in China for dual-use items, we reviewed records at both the Commerce Department in Washington, D.C., and at the U.S. Embassy in Beijing. We also talked to officials at both locations. Our review included gathering statistical data and reviewing cable traffic on checks and verifications done in China for all types of technology for the period of fiscal years 1990 through 1993. 
This was necessary, in part, because Embassy records identified many more checks being done for missile technology concerns than shown by Commerce records. Commerce Department officials said that their records were authoritative. Major contributors to this report were F. James Shafer, Jeffrey D. Phillips, Beryle Randall, Jai Lee, and Douglas E. Cole.
Pursuant to a congressional request, GAO provided information on the Missile Technology Control Regime (MTCR) and U.S. missile technology-related exports to the People's Republic of China, focusing on the: (1) extent to which dual-use and missile technologies are exported to sensitive end users; (2) U.S. government's ability to monitor China's compliance with the U.S.-China bilateral agreement; and (3) effectiveness of U.S. sanctions imposed on China. GAO found that: (1) for fiscal years 1990 through 1993, the U.S. government approved 67 export licenses worth about $530 million for missile-related technology items exported to China; (2) such export licenses accounted for less than one percent of all special licenses for exports to China; (3) the Department of Defense (DOD) is concerned that the Department of Commerce might not be identifying or seeking interagency concurrence on all potential missile-technology export license applications; (4) DOD does not have an interagency agreement with Commerce regarding which export applications should be referred for comments; (5) existing licensing procedures and monitoring controls cannot ensure that most missile-technology and dual-use exports are kept from sensitive end users, but controls on satellite-related exports appear to be adequate; (6) U.S. government agencies do not share all the information they have on sensitive end users in China; (7) the lack of Chinese cooperation has made end-use monitoring of export licenses only marginally effective; (8) Commerce does not require monitoring cooperation before it grants an export license; (9) although China has recently accepted more stringent MTCR requirements, it does not accept the revised MTCR guidelines and annex; and (10) there are no criteria for measuring the effectiveness of proliferation sanctions imposed on China.
Governments are heavily involved in most defense export transactions and they support exports for a variety of reasons. European governments support defense exports primarily to maintain a desired level of defense production capability. Their national markets are not large enough to sustain the full range of weapon systems they believe necessary for their national security. The United States has traditionally supported defense exports to meet national security and foreign policy objectives through its security assistance program. In the United States more recently, however, the impact of exports on maintaining the industrial base has gained support as a rationale for providing additional assistance to defense exporters. Defense exports in general have a positive impact on the balance of trade. In 1993 defense exports represented about 0.3 percent of total exports for Germany, 1.7 percent for France, 2.2 percent for the United States, and 2.4 percent for the United Kingdom. The share of defense exports in total exports, however, shows a general downward trend since 1990 for three of the four countries we reviewed. During 1990 defense exports represented 0.4 percent of total exports for Germany, 3.2 percent for France, and 3.4 percent for the United States. In the United Kingdom, the share of defense exports in total exports remained at about 2.4 percent in 1990 and 1993. Deliveries of global defense exports have declined 64 percent since 1987, when deliveries were $77 billion. In 1993 deliveries were $28 billion. The end of the Cold War and changes in the political and economic structure of the former Soviet Union were considered significant factors contributing to the overall decrease in arms trade. While the global defense export market has declined since the late 1980s, the United States has become the world's leading defense exporter. The United States had the largest share of global arms deliveries at 32 percent in 1990 and increased its share to 49 percent in 1993. The overall increase in the U.S. market share from 1990 to 1993 was due, in part, to decreased sales by the former Soviet Union. In 1990 the Soviet Union's arms deliveries were $17 billion. By 1993 Russia's defense exports had decreased 82 percent to less than $3 billion. The dollar value of U.S. arms deliveries also decreased during this time, declining 22 percent from $18 billion in 1990 to $14 billion in 1993. Arms deliveries data for calendar year 1994 is not yet available. However, the Department of Defense (DOD), which collects data on a fiscal year basis, reported that fiscal year 1994 U.S. arms deliveries were about $10 billion. According to defense analysts, U.S. arms deliveries are likely to remain at about $10 billion annually for the rest of the decade. The market share of France, Germany, and the United Kingdom combined has increased from 26 percent of total arms deliveries in 1990 to 32 percent in 1993. Of these three countries, only the United Kingdom increased its market share, raising it from 9 percent in 1990 to 15 percent in 1993. The French market share declined from 14 percent to 13 percent during the same period, while Germany remained constant at about 4 percent of the arms market in 1990 and 1993. The total value of arms deliveries for the three European countries combined declined 40 percent, from $15 billion in 1990 to about $9 billion in 1993. Preliminary 1994 delivery data for France and the United Kingdom suggests a decline from 1993 levels. French and U.K.
defense exports for 1994, in terms of deliveries, are estimated at $2.2 billion and $2.8 billion, respectively. Delivery data for Germany for 1994 is not yet available. Figures 1 and 2 show the percentage of global arms deliveries for 1990 and 1993 by supplier country; in both figures, a single category includes all other European countries, except France, Germany, and the United Kingdom. In the short term, at least, it is likely that the United States will remain strong in the world market; it has $86 billion in defense orders placed from 1990 to 1993, while France, Germany, and the United Kingdom combined have $27 billion in defense orders from the same period. Although 1994 data for the three European competitor nations, in terms of defense orders, is not yet available, U.S. defense orders for fiscal year 1994 were about $13 billion—a 59-percent decrease from fiscal year 1993 levels, when orders were $32 billion. Figure 3 shows the total value of defense orders placed with France, Germany, the United Kingdom, and the United States from 1990 to 1993. Further growth in the U.S. market share will be limited by several factors, including U.S. national security and export control policies. For example, in order to reduce dangerous or destabilizing arms transfers, the United States does not sell its defense products to certain countries, as part of its national security objectives. Those countries include Cuba, Iran, Iraq, Libya, North Korea, Syria, and several countries of the former Soviet Union. According to the State Department, U.S. sales to other countries are reviewed on a case-by-case basis against U.S. conventional arms transfer policy criteria. Certain major foreign country buyers' practices of diversifying weapons purchases among multiple suppliers further limit U.S. market share. For example, Kuwait announced in 1994 that it planned to diversify its weapons purchases among all five permanent members of the United Nations Security Council. Prior studies conducted by the Office of Management and Budget (OMB), the Office of Technology Assessment (OTA), and our office have concluded that there are numerous factors affecting defense export sales and that no one factor is paramount in every sale. These studies indicate that (1) each sale has its own unique set of circumstances and (2) the outcome is dependent on various factors. For example, the OMB study on financing defense exports concluded that each customer's decision-making process on defense acquisitions is sufficiently different that it is impossible to draw definitive conclusions about the relative importance of any one factor. While the study was conducted to determine the need for defense export financing, it found that other factors influence defense sales, such as price, technical sophistication of the equipment, the cost and availability of follow-on support, system performance, lead time from placement of order to delivery, the availability of training, political influence, and the financial and economic conditions of purchasing countries. The OTA study identified co-production and technology transfer as factors that can influence a defense sale. This study noted that countries that desire to develop their own defense industries are likely to consider access to technology when buying defense goods.
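The percentage changes in deliveries cited earlier in this section can be reproduced, approximately, from the rounded dollar figures quoted in the text. The short calculation below uses those rounded values; the report's own percentages presumably rest on more precise underlying data.

```python
# Approximate check of the delivery changes discussed above, using the rounded
# figures quoted in the text (billions of dollars).

def percent_change(old, new):
    """Percentage change from old to new (negative values indicate a decline)."""
    return 100.0 * (new - old) / old

deliveries = {
    "United States":                  (18, 14),  # 1990 vs. 1993
    "Soviet Union/Russia":            (17, 3),   # 1993 figure quoted as "less than $3 billion"
    "France, Germany, U.K. combined": (15, 9),
}

for supplier, (v1990, v1993) in deliveries.items():
    print(f"{supplier}: {percent_change(v1990, v1993):.0f}% change, 1990 to 1993")
# Output is roughly -22, -82, and -40 percent, in line with the declines cited above.
```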
In our May 1991 testimony before the House Committee on Banking, Finance and Urban Affairs on a proposal to finance defense export sales, we pointed out that it is difficult to quantify the effect of financing on defense sales because of all the other factors involved in the decision-making process. In addition to the factors cited by OMB and OTA, we noted the importance of offsets to a buying country when deciding between competitors in a defense sale. Industry representatives and government officials in the United States and Europe cited numerous factors that are important to defense export sales, but had differing views on what factors contributed to winning a specific defense sale. These officials cited the same factors identified by earlier government studies, including offsets, political ties, and price and quality of a product. However, when discussing the reasons behind any particular sale's outcome, U.S. government officials and industry representatives identified different reasons for the outcome of the sale. For example, in the recent German tank sale to Sweden, U.S. government officials identified offsets as the deciding factor in the sale, while an industry representative believed that the historical ties between Sweden and Germany were the reason why the German tank was chosen. In a sale of French tanks to the United Arab Emirates, U.S. government officials considered offsets to be the more important determinant in the sale, while an industry representative cited historical relationships between the buyer and the seller as the primary factor. Moreover, several U.S. and European government officials and industry representatives stated that potential customers abroad view domestic procurement of a product as an important endorsement of confidence and one that helps lower unit costs by increasing the economies of scale associated with a system. These officials added that it is very difficult for a company to sell a defense article if its own country's defense department or ministry does not use the equipment. For example, according to a U.S. government official, Northrop's F-20 was designed specifically for export; however, Northrop was unable to sell the aircraft overseas, in part, because the U.S. government did not purchase it for domestic use. Further, because of the large size of the U.S. domestic defense market, European businesses feel that they are at a disadvantage with respect to their U.S. competitors, according to a 1992 survey conducted by the major French land-defense industry association and the consulting firm Ernst & Young. We found that France, Germany, the United Kingdom, and the United States generally provide the same types of assistance, but the extent and structure of the assistance vary. All three European countries provide some form of government-backed export credit guarantees for both non-defense and defense exports as a means to provide security assistance and promote sales of their defense products. Data on the value of guarantees for defense exports, however, was available only in the United Kingdom. During fiscal year 1993/1994, the United Kingdom guaranteed $2.9 billion in defense exports. France and Germany report total export financing and do not differentiate between defense and non-defense export financing. Therefore, we were unable to obtain information on the extent of guarantees provided to defense exports in either country. In the United States, government financing is provided through the Foreign Military Financing (FMF) program.
According to DOD officials, FMF is provided as an instrument to advance U.S. foreign policy and national security interests rather than a means to promote U.S. exports. In fiscal year 1994 the United States used the program to provide about $3.1 billion in grants, mostly to Israel and Egypt, and $0.8 billion in loans to Greece, Turkey, and Portugal. Applicable U.S. legislation provides that FMF grants are generally intended to fund purchases of U.S. military goods and related services. It is unlikely U.S. contractors would lose sales to foreign competitors for FMF grant-funded purchases. The U.S. government is fully funding the purchase of U.S. military goods and services by other countries, thus giving U.S. companies an advantage over foreign competitors that are only offering government guarantees on loans. In addition, in fiscal year 1994, the Defense Security Assistance Agency waived about $273 million in research and development costs on foreign military sales to nine allied countries. U.S. commercial banks provide some financing of defense exports; however, the U.S. government does not guarantee such financing. The Export-Import Bank of the United States is prohibited from providing loans or guarantees for purchasing defense articles or services unless requested to do so by the President. Limited export financing is also provided at the state level. For example, from July 1988 to November 1994 the state of California provided about $26 million in loan guarantees to California-based defense companies. The French and U.K. governments have historically sent high-level government officials, such as ministers of defense, ambassadors, or prime ministers, to persuade foreign buyers to buy their national defense products. The German government has generally avoided using high-level government officials to promote defense exports, in part because defense exports are a politically sensitive issue in Germany. In the United States, defense exports have traditionally been approved to further U.S. national security and foreign policy goals. Nevertheless, as part of the U.S. government’s emphasis on overall export promotion efforts, high-ranking U.S. officials have been increasingly willing to intervene to influence competitions in favor of U.S. defense companies. However, DOD policy indicates that U.S. officials should support the marketing efforts of U.S. companies but maintain strict neutrality between U.S. competitors. During the competition for the United Kingdom’s Skynet-4 Satellite launch vehicle, U.S. government officials intervened at a high level on behalf of U.S. defense exporters. According to an industry representative involved in this sale, the U.K. Ministry of Defence split the contract between the U.S. company and the French as a result of intervention by the U.S. Ambassador and the Secretary of Commerce. The official stated that without U.S. government involvement, the French manufacturer would have received the entire $1-billion contract. France and the United Kingdom each have a single organization within their respective defense ministries with responsibility for identifying defense export opportunities abroad, promoting and facilitating defense exports, providing assistance with defense equipment demonstrations and trade shows, and providing advice to industry regarding offsets. In France this organization is known as the Delegation for International Relations. In the United Kingdom this organization is known as the Defence Export Services Organisation. 
Although Germany does not have a defense ministry organization comparable to that of France or the United Kingdom, German companies involved in cross-border collaborative efforts with those countries are able to benefit indirectly from the export promotion activities of the French and U.K. organizations. While the United States has no centralized government organization with a comparable export promotion role, the Departments of Defense, Commerce, and State each provide similar support for U.S. defense exports. The Departments of Commerce, Defense, and State were given the opportunity to comment on a draft of this report. Defense concurred with the report. Commerce wrote that it had reviewed the draft report and did not have any comments. State, in general, agreed with our analysis and conclusions and found the draft report to be an accurate reflection of the international competition for military export contracts. State also commented that offsets play a major role in determining which firms obtain contracts and foreign governments are eager to support offset arrangements to obtain a competitive advantage. In addition, State noted that sales of conventional arms are a legitimate instrument of U.S. foreign policy deserving U.S. government support when they help friends and allies deter aggression, promote regional stability, and increase interoperability of U.S. and allied forces. However, State pointed out that an examination of the dynamics of regional power balances and the potential for destabilizing changes in the region is required for each specific sale. We have made minor factual revisions to the report where appropriate based on technical comments provided by Defense and State. We did our work between January 1994 and February 1995 in accordance with generally accepted government auditing standards. A discussion of our scope and methodology is in appendix I. More information on government support to enhance the competitiveness of defense products is provided in appendix II. The comments of the Departments of Defense, State, and Commerce are presented in appendixes III, IV, and V, respectively. We are sending copies of this report to the Secretaries of Defense, Commerce, and State and the appropriate congressional committees. Copies will also be available to other interested parties on request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Other major contributors to this report are listed in appendix VI. Because of the continuing debate on how much support to provide to defense exporters, we reviewed conditions in the global defense export market and the tools used by France, Germany, the United Kingdom, and the United States to enhance the competitiveness of their defense exports. Specifically, we compared the U.S. position in the global defense market relative to its major competitors and analyzed the various factors that can contribute to a sale, including export financing and other types of government support. For our review, we selected France, Germany, and the United Kingdom because they (1) represent the major competitors to U.S. defense exporters in terms of the value of exports sold and (2) sell to approximately the same buyers. In 1993 these four countries represented 81 percent of the world’s total defense market. 
Together, Russia and China represented 13 percent of the total market, but were not part of this review because a large share of Russian and Chinese defense products are sold to countries to which the United States would not sell. While several U.S. government agencies collect information on defense exports, it is difficult to compare their analyses because each agency uses different methodologies for collecting and reporting the data. We used mostly Congressional Research Service (CRS) data on defense exports for calendar years 1990 to 1993 to compare the U.S. position in the global defense market relative to its European competitors. We also used more current data on French defense exports, in terms of deliveries, provided by the U.S. government. This new data increased the level of French defense exports, both in absolute and relative terms, previously reported by CRS. Further, we use calendar year data rather than fiscal year data because data on European defense exports is reported on a calendar year basis. We did not independently verify CRS data, but the data is generally accepted among government agencies as dependable. In addition, we used the State Department’s Office of Defense Trade Controls data on deliveries of U.S. direct commercial sales, because CRS does not include that data in its annual reports on global arms sales. To determine the U.S. position in the global defense market in the near future, we used the value of U.S. defense orders as reported by CRS. However, the value of these orders includes only those placed through the Foreign Military Sales program and does not include orders placed by direct commercial means. While the State Department reports the value of export licenses approved for direct commercial sales, it does not report the value of actual defense orders placed as a result of those licenses. The value of direct commercial sales deliveries as a result of those licenses, according to government documents, may be as little as 40 to 60 percent of the value originally reported when the license was approved. The State Department reported that it issued about $87 billion in licenses from fiscal year 1990 to 1993. In analyzing the various factors that contributed to winning a defense sale, we held discussions with U.S. government and defense company officials responsible for tracking U.S. defense sales. In addition, we reviewed prior government reports on the subject. To obtain information on U.S. defense export promotion efforts, we reviewed numerous government and nongovernment studies and reports on the subject. In addition, we interviewed officials at the Departments of Defense, Commerce, and State, and the Defense Security Assistance Agency; U.S. defense company officials located in the United States and Europe; and trade organizations. We also spoke to officials from the Office of Management and Budget, the Export-Import Bank, the Banker’s Association for Foreign Trade, and six commercial banks, to obtain additional information on defense export financing. To obtain information on European countries’ export promotion programs, we discussed with, and analyzed documents from, officials involved in their countries’ defense export promotion activities. This group included officials from national governments, academia, and European defense companies. We also met with officials from the Department of Defense’s Office of Defense Cooperation and the Department of Commerce’s U.S. and Foreign Commercial Service offices. 
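To give a sense of scale for the 40-to-60-percent figure noted above, the short calculation below applies that range to the roughly $87 billion in direct commercial sales licenses issued from fiscal year 1990 to 1993. It is an order-of-magnitude illustration only, not an estimate made in the report.

```python
# Order-of-magnitude illustration only: apply the cited 40-to-60-percent
# realization range to the approximately $87 billion in direct commercial
# sales licenses issued in fiscal years 1990-1993.

licensed_value = 87  # billions of dollars in approved licenses
low_share, high_share = 0.40, 0.60

low_estimate = licensed_value * low_share    # about $35 billion
high_estimate = licensed_value * high_share  # about $52 billion

print(f"Implied deliveries: roughly ${low_estimate:.0f} billion to ${high_estimate:.0f} billion")
```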
We also attended the Eurosatory Land Show in Paris, France, to observe U.S. exporters and their competitors at a major defense trade show. To convert French francs and British pounds to U.S. dollars, we used the following exchange rates. To report on France’s Delegation for International Relations annual budget, we used the average calendar year 1994 exchange rate. To report on the U.K.’s Defence Export Services Organisation annual budget and the amount of defense export financing provided by the Export Credits Guarantee Department, we used the exchange rate at the end of the U.K. fiscal years ending March 31, 1993, and March 31, 1994. We sought to report on multilateral agreements on defense trade and found that no such agreements exist. Approaches to financing defense exports vary among the four countries. Such financing includes the use of various financial instruments, including grants, loans, and guarantees. In the United States, most financing is provided through the government’s Foreign Military Financing (FMF) program, with limited financing provided by commercial banks. Some financing is also available at the state level. A 1992 decision to cancel fees on some sales that recovered part of the government’s investment in a weapon system was made to increase the competitiveness of U.S. firms. In fiscal year 1994 the United States used the FMF program to provide about $3.1 billion in grants—mostly to Israel and Egypt—and $0.8 billion in loans to Greece, Turkey, and Portugal. The FMF program enables U.S. allies to buy U.S. defense goods and related services and training. Congress often specifies the extent of assistance to certain countries. Most grants and loans are used to purchase U.S. defense products, although a designated amount of FMF funding is permitted to be spent on procurement in Israel. In fiscal year 1994 Israel was permitted to spend at least $475 million of its grant assistance on procurement in Israel. The FMF program has decreased since 1990, when the program provided over $4.8 billion in loans and grants. The U.S. government does not guarantee commercial financing for defense exports. Further, the Export-Import Bank of the United States is prohibited from providing loans or guarantees for purchasing defense equipment. Therefore, according to U.S. bank officials, U.S. commercial banks provide few financial services for defense exports, partly because of concerns that such services might generate negative publicity. Senior bank managers approve defense export financing transactions on a case-by-case basis. Financing is provided for defense transactions that are low risk and will carry a short repayment schedule. According to bank officials, repayment terms of commercial loans for defense exports generally do not exceed 2 years. These officials further stated that commercial banks are reluctant to provide financing to foreign countries without some type of U.S. government guarantee program. Moreover, even with such a program, some banks would still be reluctant to provide financing to defense exports, because of concerns about negative publicity. Some export financing is provided at the state level. For example, the state of California provides export financing for its defense companies. From July 1988 through November 1994 California provided about $26 million in loan guarantees for 77 transactions to California-based defense companies. At the time of this review, 30 states provided export financing. 
However, data on export financing is not separated out by defense and nondefense exports; therefore, we were not able to determine how many states, other than California, provided financing for defense exports. For years the price of U.S. military exports generally included a Department of Defense (DOD) charge to recover a portion of its non-recurring research and development costs. In 1992 the policy of recovering these costs when the sales were directly between the U.S. contractor and a foreign government was canceled. The recovery of U.S. government costs were canceled in an effort to increase the competitiveness of U.S. firms in the world market. In addition, the Arms Export Control Act, which generally requires recovery of such costs on government to government sales, permits DOD to waive or reduce such charges on sales to North Atlantic Treaty Organization countries, Australia, New Zealand, and Japan in furtherance of standardization and mutual defense treaties. In fiscal year 1994, DOD recovered $181 million in such costs but waived about $273 million. Recently, the executive branch has proposed that Congress repeal the requirement to collect such charges on future government to government sales. All three European countries provide some form of government-backed export credit guarantees for both nondefense and defense exports. Export credit guarantees are a form of insurance covering risk of loss due to such factors as exchange rate fluctuations or buyer nonpayment. They can allow access to financing for exporters extending credit to their buyers and for overseas buyers borrowing directly from banks. Data on the value of guarantees for defense exports, however, was available only in the United Kingdom. France and Germany report total export financing and do not differentiate between defense and nondefense export financing. Thus, we were unable to obtain information on the extent of guarantees provided to defense exports in either country. During fiscal year 1993/1994, the United Kingdom’s Export Credits Guarantee Department (ECGD) guaranteed about $6.1 billion in exports, of which $2.9 billion (or 48 percent) was for defense exports. About 90 percent of the $2.9 billion was for defense equipment sold to countries in the Middle East, mostly to Kuwait, Oman, Qatar, and Saudi Arabia. Among industry sectors, military aircraft represented about 40 percent of the $2.9 billion total, military vehicles represented about 39 percent, and naval vessels represented about 21 percent. In fiscal year 1992/1993, ECGD guaranteed about $5.8 billion in exports, of which $2.4 billion (or 42 percent) was for defense exports. About 57 percent of the $2.4 billion was for defense equipment sold to countries in the Far East and about 43 percent of the total was for equipment sold to the Middle East. Among industry sectors, naval vessels represented about 39 percent of the $2.4 billion total, military aircraft represented about 32 percent, and munitions and missiles represented about 27 percent. The French and U.K. governments have historically sent ministers of defense, ambassadors, or prime ministers to persuade foreign buyers to buy their national defense products. The German government has generally avoided using high-level government officials to promote defense exports, in part because such exports are a sensitive political issue in Germany. In the United States, defense exports have been approved to further U.S. national security and foreign policy goals. Nevertheless, as part of the U.S. 
government’s emphasis on overall export promotion efforts, high-ranking U.S. officials have been increasingly willing to intervene to influence competitions in favor of U.S. defense companies. An example of high-level government advocacy is the Swedish government’s purchase of the German Leopard 2 tank. The German Chancellor and Minister of Defense advocated on behalf of the German Leopard 2 tank, which, according to U.S. government officials, led to Sweden purchasing it over the French or U.S. tank. Other factors contributing to Sweden’s choice included the German manufacturer’s promise to buy Swedish defense material and services worth full value of the tanks they were exporting to Sweden. France and the United Kingdom each have a single organization within their respective defense ministries with responsibility for identifying defense export opportunities abroad, promoting and facilitating defense exports, providing assistance with defense equipment demonstrations and trade shows, and providing advice to industry regarding offsets. Although Germany does not have a defense ministry organization comparable to those of France or the United Kingdom, German companies involved in cross-border collaborative efforts with those countries are able to benefit indirectly from the export promotion activities of the French and U.K. organizations. While the United States has no centralized government organization with a comparable export promotion role, several U.S. government agencies provide similar support for U.S. defense exports. In France, the Ministry of Defense’s Delegation for International Relations (DRI) is responsible for facilitating and promoting French global defense sales. DRI assigns defense attachés overseas to promote military and armament relations with other countries. DRI also subsidizes missions for small business to participate in events such as trade shows. DRI employs roughly 200 staff—about 60 are involved in facilitating and promoting defense sales with the remaining staff involved in export control activities and oversight of cooperation activities with allied nations. DRI has an annual budget of $7 million which is used in a variety of ways, including Ministry of Defense participation in trade shows and subsidizing small business missions to participate in those shows. DRI also serves as a liaison between the Ministries of Defense and Industry, which, according to DRI officials, is the most important support provided to the French defense industry. While DRI promotes and facilitates sales, sales are primarily handled either by defense companies themselves or by various marketing and sales organizations. The French government owns 49.9 percent of the Défense Conseil International (DCI). DCI serves as a consultant to buying countries to help them define their operational needs, weapon requirements, and specifications. The remaining 51.1 percent is owned by private-sector marketing and sales organizations. In the United Kingdom, the Ministry of Defence’s Defence Export Services Organisation (DESO) is responsible for assisting in the marketing and sales efforts of U.K. defense companies overseas, whether manufactured nationally or in collaboration with others. 
DESO serves as a focal point for all defense sales and service matters, including advising firms on defense market prospects on a worldwide, regional, or country basis; providing marketing and military assistance in support of sales; organizing exhibitions, missions, and demonstrations; providing advice on export and project financing; ensuring that overseas sales consideration is given due weight in the U.K. Ministry of Defence’s own procurement process; briefing companies new to the defense sector and to exporting; and monitoring offset agreements. DESO’s budget for fiscal year 1992/1993 was about $25.9 million. DESO has approximately 350 staff—about 100 in marketing services, 50 in general policy, and 200 in direct project work. DESO concentrates primarily on supporting higher-value exports, although smaller companies also benefit from DESO guidance on such matters as how best to pursue potential subcontracts. In addition, larger companies rely on DESO to serve as a liaison with high-level U.K. and foreign government officials. The Departments of Defense, Commerce, and State each provide support in promoting U.S. defense exports. Moreover, the U.S. government has long recognized the positive impact that defense exports can have on the defense industrial base. Beginning in 1990 the U.S. government began to give more prominence to the economic value of defense exports. At that time, the Secretary of State directed overseas personnel to assist defense companies in marketing efforts. The Secretary added that individuals marketing U.S. defense products should receive the same courtesies and support offered to persons marketing any other U.S. product. More recently, the U.S. government announced its National Export Strategy, which is designed to establish a framework for strengthening U.S. export promotion efforts. Although the strategy does not target defense exports, some recommendations for improving export promotion activities could benefit defense exports. For example, the strategy recommended that overseas posts prepare country commercial guides. The guides are to include information on the host country’s best export prospects for U.S. companies, which may include defense exports. These guides are to be made available to the public through the Department of Commerce’s National Trade Data Bank. In February 1995, the President announced his conventional arms transfer policy which included, as one of its principal goals, enhancing the U.S. defense industry’s ability to meet U.S. defense requirements and maintain long-term military technological superiority at lower costs. The announcement indicated that once a proposed arms transfer is approved, the U.S. government will take such steps as (1) tasking U.S. embassy personnel to support overseas marketing efforts of American companies bidding on defense contracts, (2) actively involving senior government officials in promoting sales of particular importance to the United States, and (3) supporting DOD participation in international air and trade shows. As part of the U.S. security assistance program, the Defense Security Assistance Agency and the military services implement the Foreign Military Sales program, through which most U.S. defense sales are made. U.S. security assistance personnel stationed overseas are primarily responsible for security assistance and defense cooperation activities in the host country. When requested, these personnel provide information and support to U.S. 
industry on business opportunities in the host country, including information on the buying countries' defense budget cycle, national procurement process, and estimates of equipment the country currently needs to fill defense requirements or likely future procurement plans. In addition, the Defense Security Assistance Agency coordinates DOD participation in international air shows and trade exhibitions. The military services lease equipment to U.S. defense companies for display or demonstration at such events. The Department of Commerce has primary responsibility for export promotion and has recently expanded its export promotion activities to include defense exports. For example, Commerce prepares market research reports on various countries. These reports identify trade opportunities in the host country, including those in defense trade. Other information on the host country included in these reports includes information on market assessment, best sales prospects, the competitive situation, and market access. These reports are made available to the public through the National Trade Data Bank. Other activities include preparing U.S. and Foreign Commercial Service Officer guidance on supporting defense exports. This guidance directs officers to provide information similar to that provided by the Defense Security Assistance Agency and the military services. Moreover, the Departments of Commerce, State, and Defense participate in defense industry liaison working groups to assess improving U.S. government support for U.S. defense exporters. The following is GAO's comment on the Department of Defense's (DOD) letter dated March 8, 1995. 1. We have not included DOD's technical annotations to our draft report but have incorporated them in the text where appropriate. The following are GAO's comments on the Department of State's letter dated March 17, 1995. 1. We have modified the report to reflect this comment. 2. We have not included the attached list of suggested editorial changes but have incorporated them in the text where appropriate. Major contributors to this report were Mary R. Offerdahl and Cherie M. Starck.
GAO reviewed the global defense export market and the tools used by the United States and three major foreign competitors to enhance the competitiveness of their defense exports. GAO found that: (1) the United States has been the world's leading defense exporter since 1990; by 1993 its market share had increased to 49 percent of the global market; (2) the increased U.S. market share occurred during a period of worldwide decreases in total defense exports; (3) the three European countries reviewed (France, Germany, and the United Kingdom) had in 1993 a combined global market share of about 32 percent of total defense exports, which had also increased since 1990; (4) in the short term, at least, the United States will likely remain strong in the world market; however, further growth in its market share will be limited by a number of factors, including U.S. policies to reduce dangerous or destabilizing arms transfers to certain countries and certain major foreign country buyers' practices of diversifying weapons purchases among multiple suppliers; (5) government involvement in the defense industry's sales affects the position of defense manufacturers in overseas markets, but other factors also influencing defense sales include technical sophistication and performance, the cost and availability of follow-on support and training, price, financing, and offset arrangements; (6) government policies and programs can also affect these other factors; (7) because each sale has its own unique set of circumstances, it is not possible to quantify or rank the contribution of any one factor across the board; (8) the U.S. government has long recognized the positive impact that defense exports can have on the defense industrial base; (9) in 1990, the Secretary of State directed overseas missions to support the marketing efforts of U.S. defense companies as in all other areas of commercial activity; (10) governments in France, Germany, the United Kingdom, and the United States generally provide comparable types of support, including: (a) government-backed or -provided export financing; (b) advocacy on behalf of defense companies by high-level government officials; and (c) organizational entities that promote defense exports; (11) although all four countries generally provide comparable types of assistance to their defense exporters in these areas, the extent and structure of such assistance varies; (12) central organizations support defense exports in France and the United Kingdom, while in the United States several government agencies share in supporting defense exports; and (13) all three European countries provide government-backed guarantees for commercial bank loans, while in the United States, financing is provided primarily through the Foreign Military Financing Program in the form of grants and loans and is available only to a small group of countries.
Over the past 8 years, DOD has designated over 33,000 servicemembers involved in OEF and OIF as wounded in action. The severity of injuries can result in a lengthy process for a patient to either return to duty or to transition to veteran status. The most seriously injured servicemembers from these conflicts usually receive care at Walter Reed Army Medical Center or the National Naval Medical Center. According to DOD officials, once they are stabilized and discharged from the hospital, servicemembers may relocate closer to their homes or military bases and be treated as outpatients by the closest military or VA facility. Recovering servicemembers potentially navigate two different disability evaluation systems that serve different purposes. DOD’s system serves a personnel management purpose by identifying servicemembers who are no longer medically fit for duty. If a servicemember is found unfit because of medical conditions incurred in the line of duty, the servicemember is assigned a disability rating and can be discharged from duty. This disability rating, along with years of service and other factors, determines subsequent disability and health care benefits from DOD. Under VA’s system, disability ratings help determine the level of disability compensation a veteran receives and priority status for enrollment for health care benefits. To determine eligibility for disability compensation, VA evaluates all claimed medical conditions, whether they were evaluated previously by the military service’s evaluation process or not. If VA finds that a veteran has one or more service-connected disabilities that together result in a final rating of at least 10 percent, VA will pay monthly compensation and the veteran will be eligible to receive medical care from VA. Efforts have been taken to address the deficiencies reported at Walter Reed related to the care provided and transitioning of recovering servicemembers. After the press reports about Walter Reed, several high- level review groups were established to study the care and benefits provided to recovering servicemembers by DOD and VA. The studies produced from all of these groups, released from April 2007 through June 2008, contained over 400 recommendations covering a broad range of topics, including case management, disability evaluation systems, data sharing between the departments, and the need to better understand and diagnose TBI and PTSD. In May 2007, DOD and VA established the SOC as a temporary, 1-year committee with the responsibility for addressing recommendations from these reports. To conduct its work, the SOC established eight work groups called lines of action (LOA). Each LOA is co-chaired by representatives from DOD and VA and has representation from each military service. LOAs are responsible for specific issues, such as disability evaluation systems and case management. (See table 1 for an overview of the LOAs.) The committee was originally intended to expire May 2008 but it was extended to January 2009. Then, the NDAA 2009 extended the SOC through December 2009. In addition to addressing the published recommendations, the SOC assumed responsibility for addressing the policy development and reporting requirements contained in the NDAA 2008. 
Section 1611(a) of the NDAA 2008 directs DOD and VA, to the extent feasible, to develop and implement a comprehensive policy covering four areas—(1) care and management, (2) medical evaluation and disability evaluation, (3) the return of servicemembers to active duty, and (4) the transition of recovering servicemembers from DOD to VA. The specific requirements for each of these four areas are further enumerated in sections 1611 through 1614 of the law and would include the development of multiple policies. Table 2 summarizes the requirements for the jointly developed policies. Since its inception, the SOC has completed many initiatives, such as establishing the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury and creating a National Resource Directory, which is an online resource for recovering servicemembers, veterans, and their families. In addition, the SOC has undertaken initiatives specifically related to the requirements contained in sections 1611 through 1614 of the NDAA 2008. Specifically, the SOC supported the development of several programs to improve the care and management of benefits to recovering servicemembers, including the disability evaluation system pilot and the Federal Recovery Coordination Program. These programs are currently in pilot or beginning phases: Disability evaluation system pilot: DOD and VA are piloting a joint disability evaluation system to improve the timeliness and resource use of their separate disability evaluation systems. Key features of the pilot include a single physical examination conducted to VA standards by the medical evaluation board that documents medical conditions that may limit a servicemember’s ability to serve in the military, disability ratings prepared by VA for use by both DOD and VA in determining disability benefits, and additional outreach and nonclinical case management provided by VA staff at the DOD pilot locations to explain VA results and processes to servicemembers. DOD and VA anticipate a final report on the pilot in August 2009. Federal Recovery Coordination Program: In 2007, DOD and VA established the Federal Recovery Coordination Program in response to the report by the President’s Commission on Care for America’s Returning Wounded Warriors, commonly referred to as the Dole-Shalala Commission. The commission’s report highlighted the need for better coordination of care and additional support for families. The Federal Recovery Coordination Program serves the most severely injured or ill servicemembers, or those who are catastrophically injured. These servicemembers are highly unlikely to be able to return to duty and will have to adjust to permanent disabling conditions. The program was created to provide uniform and seamless care, management, and transition of recovering servicemembers and their families by assigning recovering servicemembers to coordinators who manage the development and implementation of a recovery plan. Each servicemember enrolled in the Federal Recovery Coordination Program has a Federal Individual Recovery Plan, which tracks care, management, and transition through recovery, rehabilitation, and reintegration. Although the Federal Recovery Coordination Program is operated as a joint DOD and VA program, VA is responsible for the administrative duties and program personnel are employees of the agency. 
Beyond these specific initiatives, the SOC took responsibility for issues related to electronic health records through the work of LOA 4, the SOC’s work group focused on DOD and VA data sharing. This LOA also addressed issues more generally focused on joint DOD and VA data needs, including developing components for the disability evaluation system pilot and the individual recovery plans for the Federal Recovery Coordination Program. LOA 4’s progress on these issues was monitored and overseen by the SOC. The NDAA 2008 established an interagency program office (IPO) to serve as a single point of accountability for both departments in the development and implementation of interoperable electronic health records. Subsequently, management oversight of many of LOA 4’s responsibilities was transferred to the IPO. Also, the IPO’s scope of responsibility was broadened to include personnel and benefits data sharing between DOD and VA. As of April 2009, DOD and VA have completed 60 of the 76 requirements we identified for jointly developing policies for recovering servicemembers on (1) care and management, (2) medical and disability evaluation, (3) return to active duty, and (4) servicemember transition from DOD to VA. The two departments have completed all requirements for developing policy for two of the policy areas—medical and disability evaluation and return to active duty. Of the 16 requirements that are in progress, 10 are related to care and management and 6 are related to servicemembers transitioning from DOD to VA. (See table 3.) We found that more than two-thirds of the requirements for DOD’s and VA’s joint policy development to improve the care and management of recovering servicemembers have been completed, while the remaining requirements are in progress. (See table 4.) We identified 38 requirements for this policy area and grouped them into five categories. Although 28 of the 38 requirements had been completed, one category—improving access to medical and other health care services—had most of its requirements in progress. Most of the completed requirements were addressed in DOD’s January 2009 Directive-Type Memorandum (DTM), which was developed in consultation with VA. This DTM, entitled Recovery Coordination Program: Improvements to the Care, Management, and Transition of Recovering Service Members, establishes interim policy for the improvements to the care, management, and transition of recovering servicemembers in response to sections 1611 and 1614 of the NDAA 2008. In consultation with VA, DOD created the Recovery Coordination Program in response to the NDAA 2008 requirements. This program, which was launched in November 2008, extended the same comprehensive coordination and transition support provided under the Federal Recovery Coordination Program to servicemembers who were less severely injured or ill, yet who still were unlikely to return to duty and continue their careers in the military. This program follows the same structured process as the Federal Recovery Coordination Program. However, DOD oversees this program and the coordinators are DOD employees. DOD’s January 2009 DTM includes information on the scope and program elements of the Recovery Coordination Program as well as on the roles and responsibilities of the recovery care coordinators, federal recovery coordinators, and medical care case managers and non-medical care managers. 
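Because the requirement counts are reported piecemeal across the four policy areas, a quick tally can help confirm how they roll up to the overall totals. The sketch below reconciles the per-area figures, which are drawn from the area-by-area discussion that follows and summarized in tables 4 through 7, with the 60-of-76 total reported here; it is purely an arithmetic check on numbers already in this statement.

```python
# Reconciling the requirement counts reported for the four NDAA 2008 policy areas
# (per-area figures are taken from the area-by-area discussion in this statement).
requirements = {
    # policy area: (total requirements, completed)
    "care and management":               (38, 28),
    "medical and disability evaluation": (18, 18),
    "return to active duty":             (1, 1),
    "transition from DOD to VA":         (19, 13),
}

total = sum(t for t, _ in requirements.values())
completed = sum(c for _, c in requirements.values())
print(f"{completed} of {total} requirements completed; {total - completed} in progress")
# -> 60 of 76 requirements completed; 16 in progress
```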
According to DOD officials, DOD took the lead in developing policy to address the requirements for care and management because it interpreted most of the requirements to refer to active duty servicemembers. According to DOD and VA officials, the January 2009 DTM serves as the interim policy for care, management, and transition until the completion of DOD’s comprehensive policy instruction, which is estimated to be completed by June 2009. This policy instruction will contain more detailed information on the policies outlined in the DTM. A VA official told us that VA also plans to issue related policy guidance as part of a VA handbook in June 2009. The VA official noted that the final form of the policy document would correspond with DOD’s instruction. DOD and VA have completed all of the requirements for developing policy to improve the medical and physical disability evaluation of recovering servicemembers. (See table 5.) We identified 18 requirements for this policy area and grouped them into three categories: (1) policy for improved medical evaluations, (2) policy for improved physical disability evaluations, and (3) reporting on the feasibility and advisability of consolidating DOD and VA disability evaluation systems. DOD issued a series of memoranda that addressed the first two categories starting in May 2007. These memoranda, some of which were developed in collaboration with VA, contained policies and implementing guidance to improve DOD’s existing disability evaluation system. To address the third category in this policy area, DOD and VA have issued a report to Congress that describes the organizing framework for consolidating the two departments’ disability evaluation systems and states that the departments are hopeful that consolidation would be feasible and advisable even though the evaluation of this approach through the disability evaluation system pilot is still ongoing. According to an agency official, further assessment of the feasibility and advisability of consolidation will be conducted. DOD and VA anticipate issuing a final report on the pilot in August 2009. However, as we reported in September 2008, it was unclear what specific criteria DOD and VA will use to evaluate the success of the pilot, and when sufficient data will be available to complete such an evaluation. DOD has completed the requirement for establishing standards for determining the return of recovering servicemembers to active duty. (See table 6.) On March 13, 2008, DOD issued a DTM amending its existing policy on retirement or separation due to a physical disability. The revised policy states that the disability evaluation system will be the mechanism for determining both retirement or separation and return to active duty because of a physical disability. An additional revision to the existing DOD policy allows DOD to consider requests for permanent limited active duty or reserve status for servicemembers who have been determined to be unfit because of a physical disability. Previously, DOD could consider such cases only as exceptions to the general policy. According to a DOD official, it is too early to tell whether the revisions will have an effect on retirement rates or return-to-duty rates. DOD annually assesses the disability evaluation system and tracks retirement and return to duty rates. 
However, because of the length of time a servicemember takes to move through the disability evaluation system—sometimes over a year—it will take a while before changes due to the policy revisions register in the annual assessment of the disability evaluation system. DOD and VA have completed more than two-thirds of the requirements for developing procedures, processes, or standards for improving the transition of recovering servicemembers. (See table 7.) We identified 19 requirements for this policy area, and we grouped them into five categories. We found that 13 of the 19 policy requirements have been completed, including all of the requirements for two of the categories—the development of a process for a joint separation and evaluation physical examination and development of procedures for surveys and other mechanisms to measure patient and family satisfaction with services for recovering servicemembers. The remaining three categories contain requirements that are still in progress. Most of the requirements for improving the transition from DOD to VA were addressed in DOD’s January 2009 DTM—Recovery Coordination Program: Improvements to the Care, Management, and Transition of Recovering Service Members—that establishes interim policy for the care, management, and transition of recovering servicemembers through the Recovery Coordination Program. However, we found that DOD’s DTM includes limited detail related to the procedures, processes, and standards for transition of recovering servicemembers. As a result, we could not always directly link the interim policy in the DTM to the specific requirements contained in section 1614 of the NDAA 2008. DOD and VA officials noted that they will be further developing the procedures, processes, and standards for the transition of recovering servicemembers in a subsequent comprehensive policy instruction, which is estimated to be completed by June 2009. A VA official reported that VA plans to separately issue policy guidance addressing the requirements for transitioning servicemembers from DOD to VA in June 2009. DOD and VA officials told us that they experienced numerous challenges as they worked to jointly develop policies to improve the care, management, and transition of recovering servicemembers. According to officials, these challenges contributed to the length of time required to issue policy guidance, and in some cases the challenges have not yet been completely resolved. In addition, challenges have arisen during the initial implementation of some of the NDAA 2008 policies. Finally, recent changes to the SOC staff, including DOD’s organizational changes for staff supporting the SOC, could pose challenges to the development of policy affecting recovering servicemembers. DOD and VA officials encountered numerous challenges during the course of jointly developing policies to improve the care, management, and transition of recovering servicemembers, as required by sections 1611 through 1614 of the NDAA 2008, in addition to responding to other requirements of the law. Many of these challenges have been addressed, but some have yet to be completely resolved. DOD and VA officials cited the following examples of issues for which policy development was particularly challenging. Increased support for family caregivers. The NDAA 2008 includes a number of provisions to strengthen support for families of recovering servicemembers, including those who become caregivers. 
However, DOD and VA officials on a SOC work group stated that before they could develop policy to increase support for such families, they had to obtain concrete evidence of their needs. Officials explained that while they did have anecdotal information about the impact on families who provide care to recovering servicemembers, they lacked the systematic data needed for sound policy decisions—such as frequency of job loss and the economic value of family-provided medical services. A work group official told us that their proposals for increasing support to family caregivers were rejected twice by the SOC, due in part to the lack of systematic data on what would be needed. The work group then contracted with researchers to obtain substantiating evidence, a study that required 18 months to complete. In January 2009, the SOC approved the work group’s third proposal and family caregiver legislation is being prepared, with anticipated implementation of new benefits for caregivers in fiscal year 2010. Establishing standard definitions for operational terms. One of the important tasks facing the SOC was the need to standardize key terminology relevant to policy issues affecting recovering servicemembers. DOD took the lead in working with its military services and VA officials to identify and define key terms. DOD and VA officials told us that many of the key terms found in existing DOD and VA policy, the reports from the review groups, and the NDAA 2008, as well as those used by the different military services are not uniformly defined. Consequently, standardized definitions are needed to promote agreement on issues such as identifying the recovering servicemembers who are subject to NDAA 2008 requirements, identifying categories of servicemembers who would receive services from the different classes of case managers or be eligible for certain benefits, managing aspects of the disability evaluation process, and establishing criteria to guide research. In some cases, standardized definitions were critical to policy development. The importance of agreement on key terms is illustrated by an issue encountered by the SOC’s work group responsible for family support policy. In this case, before policy could be developed for furnishing additional support to family members that provide medical care to recovering servicemembers, the definition of “family” had to be agreed upon. DOD and VA officials said that they considered two options: to define the term narrowly to include a servicemember’s spouse, parents, and children, or to use broader definitions that included distant relatives and unrelated individuals with a connection to the servicemember. These two definitions would result in significantly different numbers of family members eligible to receive additional support services. DOD and VA officials decided to use a broader definition to determine who would be eligible for support. Of the 41 key definitions identified for reconciliation, DOD and VA had concurred on 33 as of March 2009 and these 33 standardized definitions are now being used. Disagreement remains over the remaining definitions, including the definition of “mental health.” A DOD official stated that given the uncertainty associated with the organizational and procedural changes recently introduced to the SOC (which are discussed below), obtaining concurrence on the remaining definitions has been given lower priority. Improving TBI and PTSD screening and treatment. 
Requirements related to screening and treatment for TBI and PTSD were embedded in several sections of the NDAA 2008, including section 1611, and were also discussed extensively in a task force report on mental health. DOD and VA officials told us that policy development for these issues was difficult. For example, during development of improved TBI and PTSD treatment policy, policymakers often lacked sufficient scientific information needed to help achieve consensus on policy decisions. Also, members of the SOC work group told us that they disagreed on appropriate models for screening and treatment and struggled to reorient the military services to patient-focused treatment. A senior DOD official stated that the adoption of patient-focused models is particularly difficult for the military services because, historically, the needs of the military have been given precedence over the needs of individual servicemembers. To address these challenges, the SOC oversaw the creation of the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury—a partnership between DOD and VA. While policies continue to be developed on these issues, TBI and PTSD policy remains a challenge for DOD and VA. However, DOD officials told us that the centers of excellence have made progress with reducing knowledge gaps in psychological health and TBI treatment, identifying best practices, and establishing clinical standards of care. Release of psychological health treatment records to DOD by VA health care providers who treat members of the National Guard and Reserves. Section 1614 of the NDAA 2008 requires the departments to improve medical and support services provided to members of the National Guard and Reserves. In pursuing these objectives, VA faced challenges related to the release of medical information to DOD on reservists and National Guard servicemembers who have received treatment for PTSD or other mental health conditions from VA. DOD requests medical information from VA to help make command decisions about the reactivation of servicemembers, but VA practitioners face an ethical dilemma if the disclosure of medical treatment could compromise servicemembers’ medical conditions, particularly for those at risk of suicide. The challenge of sharing and protecting sensitive medical information on servicemembers who obtain treatment at VA was reviewed by the Blue Ribbon Work Group on Suicide Prevention convened in 2008 at the behest of the Secretary of Veterans Affairs. DOD and VA are continuing their efforts to develop policy to clarify the privacy rights of patients who receive medical services from VA while serving in the military and to protect the confidential records of VA patients who may also be treated by the military’s health care system. The need to resolve this challenge assumes even greater importance in light of DOD’s and VA’s increasing capability to exchange medical records electronically, which will expand DOD’s ability to access records of servicemembers who have received medical treatment from VA. In addition to challenges encountered during the joint development of policy for recovering servicemembers, additional challenges have arisen as DOD and VA have begun implementing NDAA 2008 policy initiatives. Medical examinations conducted as part of the DOD/VA disability evaluation system pilot. 
In 2007, DOD and VA jointly began to develop policy to improve the disability evaluation process for recovering servicemembers and began pilot testing these new procedures in the disability system. One significant innovation of the disability evaluation system pilot is the use of a single physical examination for multiple purposes, such as for both disability determinations and disability benefits from both departments. In our review of the disability evaluation system pilot, we reported that DOD and VA had tracked challenges that arose during implementation of the pilot but had not yet resolved all of them. For example, one unresolved issue was uncertainty about who will conduct the single physical examination when a VA medical center is not located nearby. Another challenge that could emerge in the future is linked to VA’s announcement in November 2008 that it would cease providing physical reexaminations for recovering servicemembers placed on the Temporary Disability Retired List (TDRL). However, VA made an exception to its decision and will continue to provide reexaminations for TDRL servicemembers participating in the disability evaluation system pilot. In March 2009, VA officials told us that they were developing a policy to clarify this issue. Electronic health information sharing between DOD and VA. The two departments have been working for over a decade to share electronic health information and have continued to make progress toward increased information sharing through ongoing initiatives and activities. However, the departments continue to face challenges in managing the activities required to achieve this goal. As we previously reported, the departments’ plans to further increase their electronic sharing capabilities do not consistently identify results-oriented performance measures, which are essential for assessing progress toward the delivery of that capability. Further challenging the departments is the need to complete all necessary activities to fully set up their IPO, including hiring a permanent Director and Deputy Director. Defining results-oriented performance goals in its plans and ensuring that they are met is an important responsibility of this office. Until these challenges are fully addressed, the departments and their stakeholders may lack the comprehensive understanding that they need to effectively manage their progress toward achieving increased sharing of information between the departments. Moreover, not fully addressing these challenges increases the risk that DOD and VA may not develop and implement comprehensive policies to improve the care, management, and transition of recovering servicemembers and veterans. Recent changes to staff and working relationships within the SOC could pose future challenges to DOD’s and VA’s efforts to develop joint policy. Since December 2008, the SOC has experienced turnover in leadership and changes in policy development responsibilities. The SOC is undergoing leadership changes caused by the turnover in presidential administrations as well as turnover in some of its key staff. For example, the DOD and VA deputy secretaries who previously co-chaired the SOC departed in January 2009. As a short-term measure, the Secretaries of VA and DOD have co-chaired a SOC meeting. DOD also introduced other staffing changes to replace personnel who had been temporarily detailed to the SOC and needed to return to their primary duties. 
DOD had relied on temporarily-assigned staff to meet SOC staffing needs because the SOC was originally envisioned as a short-term effort. In a December 2008 memo, DOD outlined the realignment of its SOC staff. This included the transition of responsibilities from detailed, temporary SOC staff and executives to permanent staff in existing DOD offices that managed similar issues. For example, the functions of LOA 7 (Legislation and Public Affairs) will now be overseen by the Assistant Secretary of Defense for Legislative Affairs, the Assistant Secretary of Defense for Public Affairs, and the DOD General Counsel. DOD also established two new organizational structures—the Office of Transition Policy and Care Coordination and an Executive Secretariat office. The Office of Transition Policy and Care Coordination oversees transition support for all servicemembers and serves as the permanent entity for issues being addressed by LOA 1 (Disability Evaluation System), LOA 3 (Case Management), and LOA 8 (Personnel, Pay, and Financial Support). The Executive Secretariat office is responsible for performance planning, performance management, and SOC support functions. According to DOD officials, the new offices were created to establish permanent organizations that address a specific set of issues and to enhance accountability for policy development and implementation as these offices report directly to the Office of the Under Secretary of Defense for Personnel and Readiness. Currently, many of the positions in these new offices, including the director positions, are staffed by officials in an acting capacity or are unfilled. DOD’s changes to the SOC are important because of the potential effects these changes could have on the development of policy for recovering servicemembers. However, officials in both DOD and VA have mixed reactions about the consequences of these changes. Some DOD officials consider the organizational changes to the SOC to be positive developments that will enhance the SOC’s effectiveness. They point out that the SOC’s temporary staffing situation needed to be addressed, and also that the two new offices were created to support the SOC and provide focus on the implementation of key policy initiatives developed by the SOC—primarily the disability evaluation system pilot and the new case management programs. In contrast, others are concerned by DOD’s changes, stating that the new organizations disrupt the unity of command that once characterized the SOC’s management because personnel within the SOC organization now report to three different officials within DOD and VA. However, it is too soon to determine how well DOD’s new structure will work in conjunction with the SOC. DOD and VA officials we spoke with told us that the SOC’s work groups continue to carry out their roles and responsibilities. Finally, according to DOD and VA officials, the roles and scope of responsibilities of both the SOC and the DOD and VA Joint Executive Council appear to be in flux and may evolve further still. According to DOD and VA officials, changes to the oversight responsibilities of the SOC and the Joint Executive Council are causing confusion. While the SOC will remain responsible for policy matters directly related to recovering servicemembers, a number of policy issues may now be directed to the Joint Executive Council, including issues that the SOC had previously addressed. 
For example, management oversight of many of LOA 4’s responsibilities (DOD and VA Data Sharing) has transitioned from the SOC to the IPO, which reports primarily to the Joint Executive Council. LOA 4 continues to be responsible for developing a component for the disability evaluation system pilot and the individual recovery plans for the Federal Recovery Coordination Program. It is not clear how the IPO will ensure effective coordination with the SOC’s LOAs for the development of IT applications for these initiatives. Given that IT support for two key SOC initiatives is identified in the joint DOD/VA Information Interoperability Plan, if the IPO and the SOC do not effectively coordinate with one another, the development of improved policies for recovering servicemembers could be adversely affected. Mr. Chairman, this completes our prepared remarks. We would be happy to respond to any questions you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact Randall B. Williamson at (202) 512-7114 or williamsonr@gao.gov, Daniel Bertoni at (202) 512-7215 or bertonid@gao.gov, or Valerie C. Melvin at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony are listed in appendix II. To summarize the status of the efforts of the Departments of Defense (DOD) and Veterans Affairs (VA) to jointly develop policies for each of the four policy areas outlined in sections 1611 through 1614 of the NDAA 2008, we identified 76 requirements in these sections and grouped related requirements into 14 logical categories. Tables 8 through 11 enumerate the requirements in each of GAO’s categories and provide the status of DOD’s and VA’s efforts to develop policy related to each requirement, as of April 2009. In addition to the contacts named above, Bonnie Anderson, Assistant Director; Mark Bird, Assistant Director; Susannah Bloch; Catina Bradley; April Brantley; Frederick Caison; Joel Green; Lisa Motley; Elise Pressma; J. Michael Resser; Regina Santucci; Kelly Shaw; Eric Trout; and Gregory Whitney made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Defense Authorization Act for Fiscal Year 2008 (NDAA 2008) requires the Departments of Defense (DOD) and Veterans Affairs (VA) to jointly develop and implement comprehensive policies on the care, management, and transition of recovering servicemembers. The Senior Oversight Committee (SOC)--jointly chaired by DOD and VA leadership--has assumed responsibility for these policies. The NDAA 2008 also requires GAO to report on the progress DOD and VA make in developing and implementing the policies. This statement provides preliminary information on (1) the progress DOD and VA have made in jointly developing the comprehensive policies required in the NDAA 2008 and (2) the challenges DOD and VA are encountering in the joint development and initial implementation of these policies. GAO determined the current status of policy development by assessing the status reported by SOC officials and analyzing supporting documentation. To identify challenges, GAO interviewed the Acting Under Secretary of Defense for Personnel and Readiness, the Executive Director and Chief of Staff of the SOC, the departmental co-leads for most of the SOC work groups, the Acting Director of DOD's Office of Transition Policy and Care Coordination, and other knowledgeable officials. DOD and VA have made substantial progress in jointly developing policies required by sections 1611 through 1614 of the NDAA 2008 in the areas of (1) care and management, (2) medical and disability evaluation, (3) return to active duty, and (4) transition of care and services received from DOD to VA. Overall, GAO's analysis showed that as of March 2009, 60 of the 76 requirements GAO identified have been completed and the remaining 16 requirements are in progress. DOD and VA have completed all of the policy development requirements for medical and disability evaluations, including issuing a report on the feasibility and advisability of consolidating the DOD and VA disability evaluation systems, although the pilot for this approach is still ongoing. DOD has also completed establishing standards for returning recovering servicemembers to active duty. More than two-thirds of the policy development requirements have been completed for the remaining two policy areas--care and management and the transition of recovering servicemembers from DOD to VA. Most of these requirements were addressed in a January 2009 DOD Directive-Type Memorandum that was developed in consultation with VA. DOD officials reported that more information will be provided in a subsequent policy instruction, which will be issued in June 2009. VA also plans to issue related policy guidance in June 2009. DOD and VA officials told GAO that they have experienced numerous challenges as they worked to jointly develop policies to improve the care, management, and transition of recovering servicemembers. According to officials, these challenges contributed to the length of time required to issue policy guidance, and in some cases the challenges have not yet been completely resolved. For example, the SOC must still standardize key terminology relevant to policy issues affecting recovering servicemembers. DOD and VA agreement on key definitions for what constitutes "mental health," for instance, is important for developing policies that define the scope, eligibility, and service levels for recovering servicemembers. Recent changes affecting the SOC may also pose future challenges to policy development. 
Some officials have expressed concern that DOD's recent changes to staff supporting the SOC have disrupted the unity of command because SOC staff now report to three different officials within DOD and VA. However, it is too soon to determine how DOD's staffing changes will work. Additionally, according to DOD and VA officials, the SOC's scope of responsibilities appears to be in flux. While the SOC will remain responsible for policy matters for recovering servicemembers, a number of policy issues may now be directed to the DOD and VA Joint Executive Council. Despite this uncertainty, DOD and VA officials told GAO that the SOC's work groups continue to carry out their responsibilities. GAO shared the information contained in this statement with DOD and VA officials, and they agreed with the information GAO presented.
The cost of child care often creates an employment barrier for low-income parents attempting to support their families through work. To help low-income families meet their child care needs, the Congress authorized four child care subsidy programs between 1988 and the passage of the new welfare reform legislation. Under three of these programs—AFDC/JOBS Child Care, Transitional Child Care, and At-Risk Child Care—states were entitled to receive federal matching funds based on their own expenditures. States could receive matching federal funds through an open-ended entitlement for AFDC/JOBS and Transitional Child Care expenditures but were limited in the amount of matching federal funds for expenditures on At-Risk Child Care. For the fourth program—the CCDBG—states received capped federal allocations without state spending requirements. Under the previous child care programs, federal and state program guidelines determined that AFDC clients were entitled to child care assistance if they met the necessary work, education, or training requirements or left AFDC because of employment, while non-AFDC clients received child care subsidies if funds were available. Funding from federal and state governments for these four child care programs totaled about $3.1 billion in fiscal year 1995, the most recent year for which data were available. Our previous work has suggested that child care subsidies can be an important factor in poor mothers’ decisions to find and keep jobs. Yet we found that the multiple and conflicting requirements of the four previous programs discouraged states from creating systems that gave continuous help with child care needs as families’ welfare status changed. In addition, in part because of state budget constraints, states often emphasized meeting the needs of welfare families, who were entitled to subsidies, rather than those of nonwelfare families who, although not entitled to aid, were often at risk of losing their jobs and going on welfare because of lack of assistance with child care costs. The new CCDF provides federal funds to states for child care subsidies for families who are working or preparing for work and who have incomes of up to 85 percent of a state’s median income, which is an increase from 75 percent under previous law. This consolidated program with one set of eligibility criteria primarily based on income affords greater opportunities for a state to operate an integrated child care system. Such a system, often called a seamless system, could enable all potentially eligible families—welfare clients whose welfare status may change over time as well as families who do not receive welfare benefits—to access program services under the same procedures, criteria, and requirements. Such programs could enhance parents’ abilities to achieve and maintain self-sufficiency and promote continuity of care for their children. The CCDF provided states with about $3 billion in federal funds in fiscal year 1997—$605.7 million more than was available in 1996 under previous law. In the future, the amount of federal CCDF funds available could rise from about $3.1 billion in fiscal year 1998 to about $3.7 billion in fiscal year 2002. Each state’s yearly federal allocation consists of separate discretionary, mandatory, and matching funds. A state does not have to obligate or spend any state funds to receive CCDF discretionary and mandatory funds. 
However, to receive matching funds—and, thus, its full CCDF allocation—a state must maintain its expenditure of state funds for child care programs at specified previous levels and spend additional state funds above those levels. As figure 1 shows, states are entitled to receive a total of about $2.2 billion in federal discretionary and mandatory funds without spending any of their own funds. An additional $723 million in federal matching funds is available for states that continue child care investments from their funds. If states obligated or spent the state funds necessary to receive their full allocation, the various CCDF funding streams would make a total of about $4.4 billion in federal and state funds available for state child care programs in fiscal year 1997. (In figure 1, federal mandatory funds are shown as about $1,199.1 million and federal matching funds as about $723.7 million.) The CCDF provision that states may provide child care assistance to families whose income is as high as 85 percent of the state median income (SMI) allows states to assist families at both the lowest and more moderate income levels. Nationwide, for fiscal year 1997, 85 percent of SMI for a family of four ranged from a low of $31,033 in Arkansas (1.93 times the federal poverty level) to a high of $52,791 in Connecticut (3.29 times the federal poverty level). At the same time, the CCDF requires states to use at least 70 percent of their mandatory and matching funds to provide child care to welfare recipients, those in work activities and transitioning from welfare, and those at risk of going on welfare. It also requires that a substantial portion of discretionary funds and of the remaining 30 percent of mandatory and matching funds be used to assist nonwelfare, low-income working families. Other provisions of the new welfare law that require states to place increasing numbers of welfare families in work activities may provide incentives for states to focus child care resources on welfare families. Families now receive assistance through the new TANF block grants, which have a federally mandated 5-year lifetime limit on assistance and require that families be working if they have been receiving TANF benefits for 2 years or longer. In addition, states risk losing some of their TANF allocations unless they place specified percentages of welfare families in work activities. The new law also required that 25 percent of a state’s entire adult TANF caseload participate in work and work-related activities in fiscal year 1997, and the required rate increases by 5 percentage points annually to 50 percent in fiscal year 2002. Along with these requirements, the welfare law provides states the flexibility to transfer up to 30 percent of their TANF block grant allocations to the CCDF, or use TANF funds directly for child care programs. In addition, states may spend more state funds for child care than the amount required in order to draw down the federal funds. The new welfare reform law requires states to spend at least 4 percent of their CCDF expenditures on activities to improve the quality and availability of child care and to limit their administrative costs to 5 percent of their funds. All seven states we reviewed are expanding their child care subsidy programs to assist low-income families with their child care needs. Between fiscal years 1996 and 1997, each of the seven states increased its overall expenditures on child care subsidy programs, with most of them also increasing the number of children served under these programs. 
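The funding streams and eligibility thresholds just described involve a fair amount of arithmetic, so the sketch below tallies the fiscal year 1997 CCDF amounts and reproduces the income-eligibility multiples using the approximate figures cited in this report. The implied state share is derived from those totals rather than reported separately, and the 1997 federal poverty level (roughly $16,050 for a family of four) is an outside figure used only to reproduce the quoted multiples, so the whole exercise should be read as illustrative.

```python
# Illustrative tally of fiscal year 1997 CCDF funding streams (dollars in billions),
# using the approximate figures cited in this report.
federal_discretionary_and_mandatory = 2.2  # available without any state spending
federal_matching = 0.723                   # requires state maintenance of effort plus added state spending

total_federal = federal_discretionary_and_mandatory + federal_matching
print(f"Total federal CCDF funds: about ${total_federal:.1f} billion")  # roughly $3 billion

# If states obligate the state funds needed to draw their full allocation, combined
# federal and state funding reaches about $4.4 billion; the difference is an implied
# (not separately reported) state share.
total_federal_and_state = 4.4
print(f"Implied state share: about ${total_federal_and_state - total_federal:.1f} billion")

# Cross-check of the income-eligibility examples: 85 percent of state median income
# for a family of four, expressed as a multiple of an assumed 1997 federal poverty
# level of about $16,050 for a family of four.
assumed_poverty_level = 16_050
for state, ceiling in [("Arkansas", 31_033), ("Connecticut", 52_791)]:
    print(f"{state}: {ceiling / assumed_poverty_level:.2f} times the poverty level")  # about 1.93 and 3.29
```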
However, because of limited resources, only some of the seven states planned to serve all families meeting state eligibility requirements, while none of them planned to make child care subsidies available to all families meeting federal eligibility guidelines who might benefit from such assistance. To manage their finite child care resources, these seven states have limited access to their programs through various means, including family copayments or limited income eligibility criteria. In the near term, because of additional federal funds for child care and declining welfare caseloads, states expect to meet their welfare-related child care needs. However, they are uncertain about meeting future child care needs because of the unknown impact of increasing work participation requirements under welfare reform, the capping of federal funds, and unknown future levels of state funding. In response to welfare reform, the seven states are expanding their funding for child care programs. As table 1 shows, combined federal and state child care funding in the seven states will increase by about 24 percent, from about $1.1 billion in fiscal year 1996 to about $1.4 billion in fiscal year 1997. CCDF provisions allow states to operate their child care programs exclusively with federal funds, thereby reducing or eliminating the state funds used for child care and reducing their child care programs. Nevertheless, the seven states we reviewed intend to spend at least enough state funds to qualify for the maximum amount of federal CCDF funds available for child care. Similarly, a July 1997 survey of states by the American Public Welfare Association (APWA) indicated that 47 of the 48 states that responded were planning to spend sufficient state funds to draw down all available federal funds. Table 2 shows the amount of state funds that the seven states plan to use for child care in their states’ fiscal year 1997. States are expanding their child care programs through various combinations of federal and state funds. Texas and Louisiana will increase state funding for child care during federal fiscal year 1997 to obtain their full allocation of federal CCDF funds. California, Connecticut, and Oregon have also increased their state funding and will exceed the amount required to maximize their federal CCDF allocation. Nationwide data from the APWA survey show that 20 of 48 responding states have appropriated or plan to appropriate state funds beyond the levels necessary to obtain their full federal CCDF allocations. Some states are using the flexibility provided under welfare reform to fund child care programs. For those states that have experienced welfare caseload declines in recent years, more funds are available per family in fiscal year 1997 from TANF than were available from AFDC, Emergency Assistance, and JOBS before welfare reform because federal TANF allocations are based on previous federal expenditures in the state for these programs. While Wisconsin will expand its child care funding by 38 percent between state fiscal years 1996 and 1997, the increase will come from federal, not state, funding sources. Because of significant declines in TANF caseloads over the last few years, Wisconsin will use $13 million directly from its TANF block grant for child care. Similarly, Oregon, another state that has recently experienced substantial welfare caseload declines, plans to use $17.2 million directly from its TANF block grant for child care during state fiscal year 1997. 
Other states, including Texas, Connecticut, and California, also expect to use some TANF funds for child care programs in the future. Similarly, 12 of 48 states responding to the APWA survey indicated they would transfer TANF funds to the CCDF; 2 said they would spend money for child care directly from the TANF block grant; and 1 plans to transfer some TANF funds to the CCDF and use some TANF funds directly on child care programs. According to child care officials, additional child care funds from these various federal and state sources have allowed most of the seven states to expand the number of children served under their child care subsidy programs. Detailed data on the number of children served in fiscal years 1996 and 1997 that are comparable across all seven states are not available. However, some data indicate that six of the seven states reviewed increased the number of children served under these programs by an average of about 17 percent between fiscal years 1996 and 1997. Only Maryland experienced a decrease in the number of children served under its child care programs during this period. According to a Maryland child care official, the decreased number of children resulted from an unexpected decline in AFDC caseloads combined with cost containment measures that froze non-AFDC child care in an effort to reduce a projected deficit. Although the state had some additional funds available for child care, they were not sufficient to both cover the increased costs of child care and provide benefits to additional families. Even though the seven states are expanding their programs, they are still unable to provide child care subsidies for all families meeting federal eligibility criteria who might benefit from such assistance. A recent Urban Institute study estimated that only about 48 percent of the potential child care needs of low-income families would be met if states maximized federal dollars available under welfare reform. To allocate resources, states have controlled access to their child care subsidy programs through state-defined criteria or by the manner in which they distribute child care subsidies to families. Key factors that states are using to allocate their program resources include the following: setting maximum family income for eligibility; requiring family copayments; providing guarantees, or entitlements, to specific groups; establishing priorities for specific groups; committing state resources to specific groups; establishing provider reimbursement rates; and instituting time-limited benefits. For additional information on how the seven states’ programs use these key factors to control access to child care subsidies, see appendix III. In addition, states may close programs to new applicants or maintain long waiting lists when their resources do not meet the demand for child care services. Income eligibility criteria and family copayments for child care are important means of limiting program access. Although the CCDF allows states to extend eligibility for subsidized child care to families earning up to 85 percent of SMI, not all states extend their eligibility to this level. Of the seven states, only Oregon has established income eligibility limits that allow subsidies for families with incomes this high. Louisiana will increase its eligibility to this level in fiscal year 1998. Income eligibility criteria can be misleading, however, since eligibility does not guarantee access to services. 
States with a relatively high income ceiling may not actually provide services to many families at the high end of the eligible income range. Because states use other factors in combination with their income eligibility criteria to allocate resources, families may not obtain child care subsidies even though they apply and have incomes below the state-established ceiling. For example, states also use family copayments for child care services to control access to child care subsidies and manage child care funds. Copayments from subsidized families can help states offset some of the costs of child care subsidies and thereby increase the number of families that states can afford to serve. In addition, some child care officials believe that copayment requirements, particularly for welfare families who also face work participation requirements, reinforce the concepts of self-sufficiency and responsibility for managing household budgets. According to some child care experts, however, if the family share of the cost of child care is too high a percentage of household income, a family may not be able to afford subsidized child care even if it is eligible under state eligibility rules. In some instances, the required copayment may ultimately become so large that families seek child care outside the state-subsidized system. Wisconsin and Oregon both rely primarily on income as a means of determining eligibility for child care subsidies. Wisconsin has established relatively low entry-income eligibility criteria, coupled with copayments designed to make subsidized child care accessible to all eligible families. Wisconsin lowered its entry-income eligibility level, which was 75 percent of SMI before welfare reform, to about 53 percent of SMI for a family of three under the CCDF. In addition, Wisconsin’s copayments range from 6 to 16 percent of a family’s gross income in an effort to make the program more affordable for all eligible families. With these new program requirements, Wisconsin expects to serve all income-eligible families with no waiting lists, in accordance with its welfare-to-work philosophy, which bases aid on parents’ demonstrated efforts toward self-support. In Oregon, where welfare reform efforts are also focused on self-sufficiency for all low-income families, eligibility for child care subsidies has been extended to three-person families with income up to 85 percent of SMI. This relatively high entry-income eligibility is offset, however, by a relatively high family copayment level that discourages higher-income families from remaining in the subsidized child care program. Here, the family copayment requirements rise as incomes rise and can ultimately reach over 30 percent of family income. Given current budget constraints, Oregon officials said that the copayment serves to effectively target child care subsidies to the state’s poorest families, who pay proportionately lower copayments. Wisconsin’s and Oregon’s child care programs, which are primarily based on income eligibility, are integrated, seamless programs that enable all potentially eligible families to access program services under the same procedures, criteria, and requirements. The CCDF gives states the opportunity to create and operate such seamless child care programs to accommodate their work-based welfare reform efforts. 
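As a rough illustration of how the copayment schedules described above translate into dollar amounts, the sketch below applies the quoted percentages to hypothetical family incomes. The income figures and the flat-percentage treatment are assumptions made for illustration only; the actual state schedules are tiered by income, family size, and type of care.

```python
# Hypothetical illustration of annual copayment burdens under the two approaches
# described above. Both income figures are assumptions chosen for illustration.

# Wisconsin: copayments of 6 to 16 percent of gross family income.
low_income = 12_000
print(f"Wisconsin, ${low_income:,} income: "
      f"${0.06 * low_income:,.0f} to ${0.16 * low_income:,.0f} per year")

# Oregon: copayments rise with income and can exceed 30 percent of family income
# for families near the top of the eligibility range.
higher_income = 30_000
print(f"Oregon, ${higher_income:,} income, top of the scale: "
      f"over ${0.30 * higher_income:,.0f} per year")
```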
Unlike the previous four federal child care funding programs, which segmented working low-income families into different service categories on the basis of welfare status, the CCDF provides flexibility that allows states to eliminate such artificial distinctions and create integrated programs that serve all families in similar economic circumstances. Such programs are important to ensure that families who have never been on welfare are not penalized for their work efforts and that families can move easily from welfare to self-sufficiency. In addition to the seven states we reviewed, other states also appear to be moving toward the creation of seamless programs. A study of child care in the 10 states with the largest welfare populations found that 3 of these states—Illinois, Michigan, and Washington—plan to develop child care programs with eligibility primarily based on income. In these three states, all families with income under state-established income ceilings will be eligible for subsidized child care, regardless of their welfare status. Some of the seven states we reviewed will continue to provide subsidies that target different groups of low-income families. Although all seven states expect their child care resources to be sufficient to meet welfare-related child care needs in fiscal year 1997, they vary in the extent to which they can provide subsidies to the nonwelfare working poor. For the near term, Louisiana, Maryland, Oregon, and Wisconsin report that they have sufficient funds to serve all families who seek services and meet state eligibility requirements, and, to date, they have not had to decide how to allocate funds among the different low-income groups. However, according to Louisiana state officials, many nonwelfare, working poor families are not aware that the state’s child care waiting lists have been eliminated and that they are eligible for subsidies under this program. Therefore, many eligible Louisiana families may not yet have applied for child care subsidies, and the demand could exceed the state’s resources in the near future. The remaining states—California, Connecticut, and Texas—said they have insufficient resources and are not currently serving all nonwelfare families who meet individual state eligibility requirements. California allocates funds specifically for welfare-related child care and, although it is revising its separate programs into one child care system as of January 1998, it still expects to operate distinct components for its welfare and nonwelfare populations. Because California’s resources are limited, it has over 200,000 families—mostly the nonwelfare working poor—on its waiting list for child care subsidies, and families may wait up to 2 years. Texas operates one child care program funded by multiple funding streams that are essentially invisible to clients and child care providers. However, Texas targets portions of its funds to current and former welfare recipients and provides greater access to care for some groups, such as JOBS participants. Texas’ waiting list for subsidized child care contained about 37,000 children as of June 1997. Although it plans to create a seamless child care program in the future, Connecticut currently operates three separate subsidy programs for welfare-related child care and serves its nonwelfare population through a fourth program that pays higher subsidy rates. 
However, because of limited resources, Connecticut’s nonwelfare child care program has been closed to applicants since 1993, except for two periods when about 5,000 new applicants were processed, although not all were approved. States also manage child care funds by limiting reimbursement rates for providers. The AFDC/JOBS Child Care, Transitional Child Care, and At-Risk Child Care programs required states to conduct biennial market surveys to establish reimbursement rates for providers. However, although states conducted these surveys as required, some reimbursed providers on the basis of relatively old surveys. Lower reimbursement rates allow states to provide subsidies to more families than they could serve if current market rates were used. However, according to some researchers, reimbursement policies can make a difference in parents’ child care options, particularly in how easily parents can obtain care and in how willing providers are to accept children who receive subsidies. At the time of our review, of the seven states reviewed, only Wisconsin and California were using the most current market rate surveys—those conducted in 1996—to establish reimbursement rates for their providers. Although the remaining five states were basing provider rates on market surveys conducted in 1991 or 1992, three had already updated or intended to update these surveys, while one indicated that it might revise its rate-setting methodology in the future. HHS is proposing that states also conduct biennial market surveys for the new CCDF and that they base rates for any 2-year period on surveys conducted not earlier than the previous 2-year period. Finally, at least one of the seven states is considering instituting time limits so more families have the opportunity to benefit from child care subsidies. Although some states have time limits for specific types of care, none of the seven states categorically limits the number of years a family can receive child care subsidies. In fact, some states have removed previous limits on the length of time that families transitioning from welfare can receive subsidies. In these states, if a subsidized family never exceeds the state’s maximum income eligibility criteria, it may continue to receive subsidies for years—until its youngest child becomes too old for program benefits, providing continuity of care for this family. However, with limited resources available, other families may be excluded from such benefits. The seven states reported that increased federal funds for child care and declining welfare caseloads were helping them expand their child care subsidy programs this fiscal year. Also, some states with declining welfare caseloads have additional TANF funds this year that they can use for child care subsidies. As a result, states reported that they could meet the immediate child care needs of welfare families and those of at least some other low-income families. The states’ ability to fund child care programs adequately in the long term, however, remains unknown. It will depend on the impact of various factors on the demand for child care—such as the size of TANF caseloads and work participation rates—as well as on future levels of federal and state child care funding. Although TANF caseloads have generally been falling, this trend may not continue. 
Also, TANF’s requirement that states place increasingly higher percentages of their caseloads in work activities, combined with the capping of federal child care funds, could strain the states’ capacity to expand child care programs in future years. Moreover, in the longer term, states may face additional pressures to provide child care assistance to support working families who are no longer eligible for time-limited federal cash assistance under TANF. As demand for child care subsidies increases, states will have to make difficult decisions about the levels and allocations of scarce resources. These pressures could be mitigated by any funds that become available from further possible reductions in TANF caseloads or from healthy state economies that increase the seven states’ revenues. Most of the seven states had not established funding levels for child care subsidy programs beyond 1997 at the time of our review, so their ability to meet future child care needs has not been determined. The effect of welfare reform’s work participation rates on demand for subsidized child care will not fully materialize for several years. The challenges states will face in meeting the required work participation rates will vary on the basis of previous welfare caseload reduction and current work participation levels among their welfare caseloads. For example, Oregon and Wisconsin have already experienced significant caseload reductions and, as a result, will face lower participation requirements, as allowed under TANF rules. Moreover, these states already have a higher proportion of their welfare caseloads in welfare-to-work activities than do the other states reviewed. The other states, such as California, have experienced smaller percentage reductions in their caseloads and have placed proportionately fewer participants in welfare-to-work activities. The seven states we reviewed expect demand for child care subsidies to increase under welfare reform as more families become subject to work requirements and as states attempt to provide assistance to other nonwelfare working poor families. These states also recognize that certain types of child care arrangements on which working welfare families are likely to rely, such as those involving infants or nonstandard hours, are already scarce in some areas. Consequently, they have initiated a variety of efforts to expand their supply of providers. In addition, the states expect that informal child care arrangements will remain an important child care option for many low-income working families. Although the provider supply appears to be adequate to meet families’ immediate needs, the states do not know whether it will be adequate to meet low-income families’ long-term child care needs. As we previously reported, more welfare participants are likely to need child care assistance as states try to meet the new work participation requirements under welfare reform. The child care administrators in the seven states we reviewed also expect this to occur. Although most of the states have not formally estimated how much the demand for child care is expected to increase over the next few years, some data suggest that the increase could be significant. 
For example, in the seven states, several of which had initiated their own welfare reform efforts before federal welfare reform, the number of children served by the federally funded Transitional Child Care program for families leaving welfare because of employment grew from 21,112 to 27,673—about 31 percent—between fiscal years 1993 and 1995. In Oregon, which began in 1992 to require more welfare parents to participate in welfare-to-work activities and has emphasized child care assistance as a way to help welfare and other low-income families support themselves through work, the number of children served by the state’s Employment-Related Day Care program increased from 9,005 to 21,322—137 percent—from July 1992 to February 1997. Connecticut has estimated that an additional 5,000 TANF-related families will need child care assistance during its next two fiscal years, and Maryland estimates the number of families needing child care will more than double from 1997 to 1999. Oregon, Texas, and Wisconsin sometimes offer child care assistance in lieu of immediate cash benefits to families who apply for welfare. Although six of the seven states expect that the general supply of child care will be adequate to meet short-term needs, all the states reported that certain types of child care already can be difficult to find. These types include child care for infants, sick children, and children with special needs, as well as child care during nonstandard hours or in rural areas. According to some child care administrators, child care providers are less inclined to offer these types of care for various reasons. For example, providing infant care can involve more staff-intensive or less profitable arrangements. We previously reported that shortages of such types of child care can make it difficult to meet the child care needs of working welfare families. Some child care administrators are concerned that the work participation requirements of federal welfare reform could particularly exacerbate existing problems in finding infant care. Under federal welfare reform legislation, states may opt to exempt single welfare parents with infants under 1 year old from work participation requirements. Four of the seven states we reviewed—Connecticut, Louisiana, Maryland, and Texas—have chosen to grant such an exemption. In contrast, Oregon and Wisconsin have chosen to exempt welfare parents from work requirements only until a child is 3 months old. In California, individual counties decide the child’s age up to which parents are exempted from work requirements, which can range from 3 to 12 months. Consequently, the demand for infant child care in these last three states may be greater than in the four other states. Requiring welfare families to work could also increase the demand for child care during nonstandard work hours. According to some child care experts and researchers, many welfare parents, because of their low job skills and experience, are likely to find jobs in the service industry, working at hotels, restaurants, hospitals, and discount department stores where nonstandard hours and shift work are common. However, child care arrangements corresponding to these work hours are generally more difficult to find in more regulated settings, such as child care centers, which generally operate on a standard business schedule and only during weekdays. 
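The caseload growth percentages cited at the beginning of this discussion are simple relative changes. As a quick check, the short sketch below reproduces the roughly 31 percent and 137 percent figures from the reported counts; the function name is ours, added only for illustration.

```python
# Reproduces the percentage increases cited above from the reported counts.

def percent_increase(before, after):
    """Relative change from 'before' to 'after', expressed as a percentage."""
    return (after - before) / before * 100

# Children served by the Transitional Child Care program in the seven states,
# fiscal year 1993 to fiscal year 1995.
print(round(percent_increase(21_112, 27_673)))  # about 31 percent

# Children served by Oregon's Employment-Related Day Care program,
# July 1992 to February 1997.
print(round(percent_increase(9_005, 21_322)))   # about 137 percent
```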
Recognizing the challenges of the increasing child care demand among welfare families required to work and the difficulties in finding certain types of care, the seven states are pursuing diverse activities to expand the supply of child care. The states are funding these activities in part through the CCDF, which requires them to spend at least 4 percent of their total allocations on activities to improve the availability and quality of child care. Planned activities include efforts to recruit new providers, fiscal incentives to establish or expand child care facilities, and collaboration with early childhood development and education programs. Most of the seven states are also planning to fund activities that involve or expand child care resource and referral agencies (CCR&R), organizations that assist states in developing strategies to increase child care capacity and help families find child care. In some of the states, particularly California and Oregon, local planning organizations are also involved in activities to expand child care in cooperation with the state. According to documents that the 50 states and the District of Columbia submitted to HHS detailing their plans for implementing programs under the CCDF, 48 states plan to fund activities involving CCR&Rs. All seven states are funding efforts to support and encourage the entrance of new child care providers into the market. Specifically, the seven states plan to fund activities involving training and technical assistance for existing or potential child care providers. In some states, such as California, Oregon, Texas, and Wisconsin, these programs involve such forms of assistance as offering formal accreditation and scholarships and helping develop mentor relationships for child care providers in an attempt to make the field more attractive to potential entrants. California funds separately support training programs targeting providers of care for school-aged children, care for infants and toddlers, and family child care. In Oregon, counties are funneling state grants at the local level to child care providers for start-up and ongoing program operations and to CCR&Rs for activities that increase and stabilize the supply of child care. These grants emphasize infant and toddler care, school-aged child care, nonstandard hours care, and extended day care linked with Head Start or other preschool programs. Some states, such as California, Texas, and Wisconsin, are also experimenting with programs to train TANF recipients to become child care providers. These programs aim to help welfare recipients meet their work participation requirements while simultaneously increasing the supply of child care providers. According to the CCDF plans of the 51 states, California, Connecticut, Maryland, and Wisconsin, along with 15 other states, plan to offer funds to help child care providers increase staff compensation. In addition, all 51 states plan to fund efforts involving training and technical assistance for existing or potential child care providers. Some states are working to engage the private sector in expanding or improving the provider supply. Six of the seven states we reviewed plan to make grants or loans available to providers or businesses to establish or expand child care facilities. For example, Connecticut has recently established a Child Care Facilities Loan Fund that provides grants or loans to help providers meet state and local standards. 
The fund offers three loan programs: tax-exempt bonds for constructing, renovating, or expanding nonprofit child care facilities; loan guarantees for capital and noncapital loans; and a small revolving loan program for noncapital loans. Connecticut also offers tax credits for businesses to establish child care facilities on or near their work sites. Maryland funds a grant program to help registered family child care providers comply with regulations and to enhance or expand their child care services. The National Conference of State Legislatures reported a variety of similar approaches that state lawmakers have used to create incentives for employers to provide child care assistance and make the work environment responsive to family needs. These approaches include loan and grant programs, corporate tax incentives, policies to require or encourage developers to set aside space for child care centers in business sites, and information referral and technical assistance to increase private sector involvement. Overall, according to their CCDF plans, 38 of the 51 states plan to make grants or loans available for establishing or expanding child care facilities. Finally, some of the seven states are also attempting to expand child care opportunities through increased collaboration between the child care community and existing early childhood development and education programs, such as Head Start and preschool programs. The economic circumstances of welfare families and families with children eligible for Head Start or some state-subsidized preschool programs are often similar. Consequently, many welfare families who are required to participate in work activities could place their children in such programs to help meet their child care needs. However, most Head Start and preschool programs are not currently structured to meet the needs of working families. For example, Head Start programs are generally open to children between the ages of 3 and 5 and are half-day, school-year programs, while working welfare families may have younger children and generally need full-day, year-round programs. Similarly, preschool programs are generally open only during the school year and are restricted to children of certain ages. Some of the seven states’ collaborative initiatives involve expanding local Head Start or other preschool programs so that they offer services on the full-day, full-year basis that working welfare parents need. At the time of our review, several of the seven states had already received Head Start collaboration grants from HHS to explore such initiatives, and Head Start received additional funds for this purpose in fiscal year 1997. However, according to some child care experts, differences between child care and Head Start program requirements and philosophies can make such collaboration difficult. Child care administrators and researchers expect that informal providers will meet some of the increased demand for child care. States differ in their definition of and requirements for informal providers, many of whom are relatives. In addition, neighbors and family friends who provide care in their own or the child’s home are considered informal providers, who are usually subject to fewer registration, certification, or regulatory requirements than other more formal child care providers, such as child care centers. In some states, informal care arrangements are widely used by welfare families. 
For example, in Connecticut, state officials estimated that about 80 percent of welfare families using child care services used such informal arrangements. Similarly, state officials in Oregon estimated that nearly half of their JOBS program clients used informal care. Regardless of income level or subsidy status, families choose informal child care arrangements over more formal providers for various reasons. Researchers report that some families prefer informal child care providers because they offer more flexible arrangements than formal providers—particularly, care during nonstandard work hours or on weekends. In other instances, informal providers may be geographically close to parents, solving transportation problems associated with getting children to and from their providers. In addition, some families prefer informal providers because they are trusted and well known, are willing to care for infants, or charge lower fees than formal providers. Some researchers believe that many welfare families who are required to work will be more likely to choose informal child care arrangements, since they are likely to find work during nonstandard work hours, experience transportation difficulties, need infant care, or earn less than other parents and be unable to afford more formal arrangements. Nevertheless, as discussed in the next section, some child care officials and advocates are concerned about the relative lack of standards for informal child care providers, despite the benefits they offer some families. Welfare and child care program officials in six of the seven states report that with the additional funds available under the CCDF, the supply of child care appears so far to have kept pace with increases in demand. They noted that they have granted few exemptions from work requirements because of unavailability of child care, and most did not expect to grant such exemptions on a large scale in the near future. According to welfare and child care program staff in some states, instances of parents with problems finding child care arrangements have involved children with special needs, infants, or families living in remote locations. In these cases, some welfare and child care program staff report that they have generally made alternative arrangements for parents to meet work requirements, rather than granting exemptions. In addition, most of the seven states are emphasizing the use of CCR&Rs to help families find suitable child care arrangements. Therefore, for the near term, the supply of providers appears adequate to meet demands resulting from welfare reform. In the longer term, however, as the full effects of work participation requirements materialize and states’ welfare reform programs evolve, the adequacy of the child care supply is uncertain. Questions remain about how much child care will actually be needed and how the child care market will respond over time to increased demand. Moreover, it is not yet known how effective the efforts of these and other states will be in increasing the supply overall and for those types of care often in short supply. Under the new welfare reform law and CCDF regulations, states retain primary responsibility for the regulation and oversight of child care providers. As under the former CCDBG, states must still establish minimum child care standards for CCDF-subsidized care in the areas of physical premise safety, control of infectious disease, and provider health and safety training. 
Some advocates and researchers are concerned that states may lower standards for providers to ease their entry into the expanding child care market. They are also concerned that welfare families, with their lower incomes and inexperience with child care choices, may be more likely—or feel pressured by state policies—to choose informal child care arrangements that are subject to fewer regulatory requirements than are other types of providers. Furthermore, advocates note that informal care arrangements may offer fewer developmental opportunities for children. Some of the seven states are making incremental changes to their standards for child care providers as they expand their child care subsidy programs. Most of these changes will tend to maintain or strengthen existing standards. According to some child care officials, pressure from the public regarding abuse or neglect in child care settings is encouraging states to strengthen, rather than weaken, standards for child care providers. For example, to encourage and reward efforts to improve quality, Wisconsin has initiated a statewide requirement that maximum reimbursement rates for child care providers be set 10-percent higher for child care programs accredited as meeting high-quality standards. Similarly, APWA reported that its survey of all states showed that quality standards have generally been maintained and, in many cases, enhanced. Some of the seven states may be making changes in staffing ratios at child care facilities and in the size of their state regulatory staff. For example, Texas officials reported that between 1997 and 1999 they will phase in a new requirement that will increase the number of staff per child served at licensed child care centers. As of September 1997, the ratio of staff to infants changed from one staff person for five infants up to 6 months old to one staff person for four infants. In September 1999, Texas plans to increase the minimum staff required for children aged 13 to 17 months from one staff person for six children to one staff person for five children. To be effective, standards for child care providers must be enforced. Enforcement is important to ensure that standards are maintained and children receive adequate care. Recognizing this, none of the seven states plans to reduce the size of its staff responsible for inspecting or regulating child care providers. In fact, in the last year, Wisconsin has increased the number of regulatory and inspection staff from 46.5 to 60. Nevertheless, some child welfare advocates remain concerned about the adequacy of state enforcement of standards for child care providers. We previously reported, for example, that 20 states did not conduct at least one unannounced visit to each child care center every year. Appendix II provides additional information on state regulatory staffing. Generally, not all child care providers in a state are equally regulated. Parents can choose from three types of child care settings: in-home care, where a child is cared for in the child’s home; family care, where the child is cared for in the home of a provider; and center care, where a child is cared for in a nonresidential setting. Additionally, care can be provided in family child care or in-home settings by someone related to the child other than the parents, which is called relative care. Most states regulate only a small portion of their providers and may exempt a significant number of providers from their standards. 
Also, in-home care and care provided by relatives are almost always exempt, although a relative provider must be at least 18 years old to receive CCDF-funded subsidies. Other types of child care that states may exempt are those sponsored by religious organizations, in government entities like schools, or operating for part of the day. Further, for those providers that are regulated, different standards apply to different types of providers. Centers generally must meet more rigorous standards than other types of providers, in that states license and conduct regular inspections of the facilities. Standards for family providers vary among the states, but family providers generally receive fewer inspections than child care centers. To address concerns about informal child care providers who generally are regulated only minimally, some states impose additional requirements on those that receive subsidies. For example, to better ensure the safety of children in informal care arrangements, California and Oregon conduct background checks on the criminal histories of subsidized providers, some of whom are otherwise exempt from regulatory or licensing requirements. In one state, such checks on informal providers have revealed that about 10 percent of the applicants were known criminals. In these instances, after due process, the state refuses to reimburse the provider if his or her appeal is denied and works with the parents to find other, more appropriate care for their children. Additionally, to help monitor providers who care for children receiving subsidies more closely and prevent fraud, Maryland, Oregon, and Wisconsin reimburse most of these providers directly instead of issuing reimbursements to parents and expecting the parents to reimburse the provider. One of the seven reviewed states, Wisconsin, recently created a new category of child care provider that is subject to less stringent training requirements than its other categories of providers are. Wisconsin imposes training requirements on all licensed and certified providers. Until recently, certified day care providers, who received subsidies and cared for three or fewer unrelated children under the age of 7 primarily in the providers’ homes, were required to complete 15 hours of training before receiving permanent certification. To help meet the expected demand for child care from welfare clients who are expected to work, Wisconsin now has an additional category of day care providers who are “provisionally certified.” These providers are subject to the same inspection requirements as are regularly certified day care providers but are not required to complete any training. Provisionally certified providers are reimbursed at two-thirds the rate of regularly certified day care providers. According to Wisconsin officials, however, standards for these provisionally certified providers are still among the highest in the nation for small family day care settings that are exempt from state licensing. Further, provisionally certified providers who complete 15 hours of training receive a 50-percent increase in reimbursement rates, an incentive that many providers are exercising. The effect of welfare reform on states’ efforts to regulate and ensure that children receive quality child care is as yet unknown. As we previously reported, fiscal pressures could ultimately lead states to devote fewer state resources to monitoring and regulating child care providers in the future. 
Some child care advocates and researchers are also concerned that decisions to expand the supply of state-subsidized child care could create more providers that are exempt from state licensing or regulatory requirements, leaving no protection in place for children in these settings. Further concerns are that some low-income families may choose informal child care arrangements over more regulated providers because these arrangements are less costly. It is not yet known what types of child care providers will be used by families affected by welfare reform. As the supply of child care providers grows to meet the new demand, some of the growth may be in that part of the market that states already exempt from standards. Increasing numbers of children may be placed with child care providers about which states have little information. As we noted in 1994 as welfare reform was being considered, assessing state efforts to protect children in child care in the face of expanding child care services is critical. Child care subsidy programs are critical to the success of states’ overall welfare reform efforts. The infusion of additional federal funds for child care has provided states with an opportunity to better meet the child care needs of low-income families. Our findings from seven states provide an early indication that these states are using additional federal dollars and their own funds to expand their child care programs to serve both increasing numbers of welfare recipients required to work and at least some of the working poor. In addition, states are making efforts to further increase the supply of child care. At the same time that states are expanding their programs and attempting to increase supply, they appear to be maintaining child care standards and enforcement practices. It is too early to know, however, how effective states’ programs will be in meeting the child care needs of low-income families. Even as states began to expand their programs, they already faced tough choices about balancing the needs of welfare and nonwelfare families in ways that best support families’ work efforts. In addition, although states have many initiatives under way to expand the supply of child care providers, the outcomes of their efforts are not yet known. It is also too early to assess the types of child care that states and parents will rely on as more and more parents are expected to support themselves through work. States’ efforts to increase the number of children receiving child care services while at the same time ensuring safe care for children will deserve attention as welfare reform evolves. States’ initial efforts under welfare reform have been assisted by declining welfare caseloads, which have provided some states with additional funds to invest in child care. Much remains unknown, however, about the impact of economic conditions, TANF caseload size, work participation requirements, and the capping of federal child care funds on child care demand and states’ ability to fund programs over the long term. An economic downturn could cause welfare caseloads to rise at the same time that states are required to place increasing percentages of their caseloads in work activities. These pressures could force states to use more funds for welfare benefits and, thus, make it difficult for them to maintain current levels of child care spending as welfare reform progresses. We obtained comments on a draft of this report from HHS and child care officials in the seven states we reviewed. 
HHS officials said that the report’s findings reflect some of the child care issues that they have heard across the country, such as states facing difficult choices in balancing the child care needs of welfare and nonwelfare families to best support these families’ work efforts; concerns about the ability of and opportunity for all families to select safe, high-quality child care; the gap between the supply and demand for infant and school-aged child care and child care during nonstandard work hours; and the impact of economic conditions, work participation requirements under federal welfare reform, and capped federal child care funds on state efforts to expand the supply of safe, high-quality child care. HHS officials also noted that this report and earlier GAO reports are important in identifying the critical role child care plays in the lives of working families. HHS’ written comments appear in appendix IV. State officials generally agreed with our report and some provided information on recent developments in their child care programs, which we noted in the report as appropriate. We emphasize that our findings present an early look at states’ child care programs and that states will continue to modify them as their welfare reform efforts progress. HHS and the states also provided technical comments, which we incorporated in the report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Chairmen and Ranking Minority Members of the House Committees on Ways and Means and Education and the Workforce, and the Chairmen and Ranking Minority Members of the Senate Committees on Finance and Labor and Human Resources. We will also make copies available to others upon request. If you have any questions concerning this report or need additional information, please call me on (202) 512-7215. Major contributors to this report are listed in appendix V. To meet our objectives, we focused our work on the efforts of seven states—California, Connecticut, Louisiana, Maryland, Oregon, Texas, and Wisconsin—to modify their child care subsidy programs under the new welfare reform law. We chose these states because they represent a diverse range of socioeconomic characteristics, geographic locations, and experiences with state welfare reform initiatives. According to U.S. Bureau of the Census and Department of Health and Human Services (HHS) estimates, the states ranged in population from about 3.2 million (Oregon) to about 31.9 million (California) in 1996; in median income for three-person families, from about $33,377 (Louisiana) to about $52,170 (Connecticut) in fiscal year 1997; and in overall poverty rates, from 8.5 percent (Wisconsin) to 19.7 percent (Louisiana) in 1995. Some states, such as Wisconsin, have had reform initiatives in place for several years that include elements similar to those in federal welfare reform legislation, such as time limits for welfare benefits and work participation requirements; others, such as Louisiana, have been operating more traditional cash assistance programs with welfare-to-work components and were beginning more extensive reform efforts in fiscal year 1997. 
We obtained information from the seven states through a combination of site visits, personal interviews, telephone conversations, and written correspondence involving officials from state and county child care, budget, regulatory, and welfare offices. We also reviewed program data and documents. In addition, we interviewed and obtained data from representatives of child care resource and referral agencies (CCR&Rs) and child advocacy organizations. We did not independently verify the data we obtained from these various sources. To obtain nationwide data on state child care subsidy programs under welfare reform, we reviewed the Child Care and Development Fund (CCDF) plans submitted to HHS by all 50 states and the District of Columbia. We also reviewed work conducted by a variety of researchers, experts, and other organizations related to federal and state welfare programs and child care subsidy programs.

[Appendix table: key characteristics of the seven states’ child care subsidy programs, including estimated federal child care payments and allocations for federal fiscal years 1996 and 1997 (in thousands); guarantees of and priorities for assistance to TANF, transitional, and other low-income families; family copayment requirements; time limits on subsidies; maximum monthly reimbursement rates to providers for center-based care, with examples such as Los Angeles, Hartford, Baltimore, and Portland; and the year of the market rate survey used to set provider rates. Table notes: TANF = Temporary Assistance for Needy Families; Oregon increased its monthly reimbursement rates in October 1997 and approved a 6% increase for 1997-98.]

In addition to the persons named above, David G. Artadi coauthored the report and contributed significantly to all data-gathering and analysis efforts.

Welfare Reform: Three States’ Approaches Show Promise of Increasing Work Participation (GAO/HEHS-97-80, May 30, 1997).
Welfare Reform: Implications of Increased Work Participation for Child Care (GAO/HEHS-97-75, May 29, 1997).
Head Start: Research Provides Little Information on Impact of Current Programs (GAO/HEHS-97-59, Apr. 15, 1997).
Early Childhood Programs: Multiple Programs and Overlapping Target Groups (GAO/HEHS-95-4FS, Oct. 31, 1995).
Welfare to Work: Child Care Assistance Limited; Welfare Reform May Expand Needs (GAO/HEHS-95-220, Sept. 21, 1995).
Early Childhood Programs: Many Poor Children and Strained Resources Challenge Head Start (GAO/HEHS-94-169BR, May 17, 1995).
Early Childhood Centers: Services to Prepare Children for School Often Limited (GAO/HEHS-95-21, Mar. 21, 1995).
Child Care: Child Care Subsidies Increase Likelihood That Low-Income Mothers Will Work (GAO/HEHS-95-20, Dec. 30, 1994).
Child Care: Promoting Quality in Family Child Care (GAO/HEHS-95-93, Dec. 9, 1994).
Child Care: Working Poor and Welfare Recipients Face Service Gaps (GAO/HEHS-94-87, May 13, 1994).
Infants and Toddlers: Dramatic Increases in Numbers Living in Poverty (GAO/HEHS-94-74, Apr. 7, 1994).
Child Care Quality: States’ Difficulties Enforcing Standards Confront Welfare Reform Plans (GAO/T-HEHS-94-99, Feb. 11, 1994).
Poor Preschool-aged Children: Numbers Increase but Most Not in Preschool (GAO/HRD-93-111BR, July 21, 1993).
Child Care: States Face Difficulties Enforcing Standards and Promoting Quality (GAO/HRD-93-13, Nov. 20, 1992).
Pursuant to a congressional request, GAO reviewed states' implementation of child care subsidy programs, focusing on: (1) how much federal and state funding is being spent on child care subsidy programs and how states are allocating these resources among welfare families, families making the transition from welfare to work, and working poor families; (2) how states are trying to increase the supply of child care to meet the projected demand under welfare reform; and (3) the extent to which states are changing standards for child care providers in response to welfare reform. GAO noted that: (1) the seven states it reviewed have used federal and state funding to increase overall expenditures on their fiscal year (FY) 1997 child care subsidy programs, with increases ranging from about 2 percent to 62 percent over FY 1996 expenditures; (2) six of the seven states also reported an increase in the number of children served under these programs, although detailed data on the extent of this expansion are not available; (3) all seven states expected to meet the FY 1997 child care needs of families required to work under welfare reform and those of families transitioning off welfare; (4) states vary, however, in the extent to which they will provide subsidies to nonwelfare, working poor families, and all seven states are unable to fund child care for all families meeting the federal eligibility criteria who might benefit from such assistance; (5) to allocate their limited resources, states are controlling access to their child care programs through various state-defined criteria or by the manner in which they distribute subsidies to families; (6) the seven states' ability to meet child care needs beyond FY 1997 is unknown and will depend partially on future state funding levels for child care as well as changes in demand for child care subsidies resulting from welfare reform's work participation requirements; (7) to meet the future demand for child care among welfare families required to work and to address existing difficulties with finding certain types of child care, states have initiated various efforts to expand the supply of providers; (8) the seven states report that the supply of child care providers will generally be sufficient to meet the needs of welfare parents required to work; (9) however, in the future, additional providers may be needed as states comply with work participation requirements and increasing numbers of welfare families become employed; (10) the seven states do not know whether their efforts to expand the supply of providers will be sufficient to meet the increased demand expected to result from welfare reform; (11) as state child care subsidy programs expand, some states are making incremental changes to strengthen their standards for child care providers; (12) some child care advocates and officials remain concerned that efforts to expand the supply of providers will result in larger numbers of children in care of unknown quality; and (13) the effect of welfare reform on states' efforts to protect children in child care still needs to be assessed.
The US-VISIT expenditure plan, including related program documentation and program officials’ statements, satisfies or partially satisfies some, but not all, of the legislative conditions. Specifically, the legislative conditions that DHS certify that an independent verification and validation agent is currently under contract for the program and that the DHS Investment Review Board, the Secretary of Homeland Security, and the Office of Management and Budget (OMB) review and approve the plan were satisfied. However, DHS only partially satisfied the legislative conditions that it (1) meet the capital planning and investment control review requirements established by OMB, including OMB Circular A-11, part 7; (2) comply with DHS’ enterprise architecture; and (3) comply with federal acquisition rules, requirements, guidelines, and systems acquisition management practices. In addition, DHS did not satisfy the legislative conditions that the plan include (1) a comprehensive US-VISIT strategic plan and (2) a complete schedule for biometric exit implementation. DHS has partially implemented our recommendations pertaining to US-VISIT that have been open for 4 years. These recommendations, along with their status, are summarized here. Recommendation: Develop and begin implementing a system security plan and perform a privacy impact analysis and use the results of this analysis in near-term and subsequent system acquisition decision making. DHS has partially implemented this recommendation. In December 2006, the program office developed a US-VISIT security strategy and has since begun implementing it. However, the scope of this strategy does not extend to all the systems that comprise US-VISIT, such as the Treasury Enforcement Communications System (TECS). We recently testified that TECS has neither the security controls and defensive perimeters in place to prevent an intrusion nor the capability to detect an intrusion should one occur. Until a more comprehensive security strategy is developed, the systems that comprise US-VISIT could place the program at increased risk. Recommendation: Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements management, project management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with Software Engineering Institute (SEI) guidance. DHS has partially implemented this recommendation. Since 2005, the program office has reported progress in implementing 113 practices associated with six SEI key process areas. However, the six areas of focus do not include all of the management controls that our recommendations cover, such as solicitation and transition to support. As long as the program office does not address all of the management controls that we have recommended, it will unnecessarily increase program risks. Recommendation: Ensure that expenditure plans fully disclose what system capabilities and benefits are to be delivered, by when, and at what cost, as well as how the program is being managed. DHS has partially implemented this recommendation. The fiscal year 2007 expenditure plan discloses planned system capabilities, estimated schedules and costs, and expected benefits. However, schedules, costs, and benefits are not always defined in sufficient detail to be measurable and to permit oversight. Finally, the plan does not fully disclose challenges or changes associated with program management. 
Without such information, the expenditure plan may not provide Congress with enough information to exercise effective oversight and to hold the department accountable. Recommendation: Ensure that the human capital and financial resources provided are sufficient to establish a fully functional and effective program office and associated management capability. DHS has partially implemented this recommendation. At one point in 2006, all of the program office’s 115 government positions were filled. However, 21 positions have since become vacant. Without adequate human capital, particularly in key positions and for extended periods, program risks will increase. Recommendation: Clarify the operational context within which US-VISIT must operate. DHS has partially implemented this recommendation. DHS has yet to define the operational context in which US-VISIT is to operate, such as having a departmentally approved strategic plan or a well-defined department enterprise architecture (EA). While the expenditure plan includes a departmentally approved US-VISIT strategic plan, it does not address key elements of relevant federal strategic planning guidance. Moreover, we recently reported that the version of the department’s EA that DHS has been using for US-VISIT alignment purposes was missing architecture content and was developed with limited stakeholder input. Finally, although program officials have met with related programs to coordinate their respective efforts, specific coordination efforts have not been assigned to any DHS entity. Until a well-defined operational context exists, the department will be challenged in its ability to define and implement US-VISIT and related border security and immigration management programs in a manner that promotes interoperability, minimizes duplication, and optimizes departmental capabilities and performance. Recommendation: Determine whether proposed US-VISIT increments will produce mission value commensurate with costs and risks and disclose to its executive bodies and Congress the results of these business cases and planned actions. DHS has partially implemented this recommendation. We recently reported that, while a business case was prepared for Increment 1B, the analysis performed met only four of the eight criteria in OMB guidance. Since then, the program office has developed business cases for two projects: Unique Identity and U.S. Travel Documents-ePassports (formerly Increment 2A), and we have ongoing work to address, among other things, these business cases. Further, the program office has yet to develop a business case for another project that it plans to begin implementing this year—biometric exit at air ports of entry (POE). Until the program office has reliable business cases for each US-VISIT project in which alternative solutions for meeting mission needs are evaluated on the basis of costs, benefits, and risks, it will not be able to adequately inform its executive bodies and Congress about its plans and will not provide the basis for prudent investment decision making. Recommendation: Develop and implement a human capital strategy that provides for staffing open positions with individuals who have the requisite core competencies (knowledge, skills, and abilities). DHS has partially implemented this recommendation. In February 2006, we reported that the program office issued a human capital plan and had begun implementing it. 
However, DHS stopped doing so during 2006 pending departmental approval of a DHS-wide human capital initiative and because all program office positions were filled. Yet, as noted earlier, the program office now reports that 21 government positions—including critical leadership positions—are vacant. Moreover, it has stated that it developed a new human capital plan, but we did not review this plan because it is still undergoing departmental review. Until the department approves the human capital plan and the program office begins to implement it, the program will continue to be at risk. Recommendation: Develop and implement a risk management plan and ensure that all high risks and their status are reported regularly to the appropriate executives. DHS has partially implemented this recommendation. US-VISIT has approved a risk management plan and has begun implementing it. However, the current risk management plan does not address when risks should be elevated beyond the level of the US-VISIT Program Director. According to program officials, elevation of US-VISIT risks is at the discretion of the Program Director, and no risks have been elevated to DHS executives since December 2005. Until the program office ensures that high risks are appropriately elevated, department executives will not have the information they need to make informed investment decisions. Recommendation: Define performance standards for US-VISIT that are measurable and reflect the limitations imposed on US-VISIT capabilities by relying on existing systems. DHS has partially implemented this recommendation. The program office has defined technical performance standards for several increments, but these standards do not contain sufficient information to determine whether they reflect the limitations imposed by relying on existing systems. As a result, the ability of these increments to meet performance requirements remains uncertain and the ability to identify and effectively address performance shortfalls is missing. While available data show that prime contract cost and schedule expectations are being met, aspects of the US-VISIT program continue to lack definition and justification. Each of our observations in this regard is summarized here. Earned value management (EVM) data on ongoing prime contract task orders show that cost and schedule baselines are being met. EVM is a program management tool for measuring progress by comparing the value of work accomplished with the amount of work expected to be accomplished. Data provided by the program office show that the cumulative cost and schedule variances for the overall prime contract and all 12 ongoing task orders are within an acceptable range of performance. DHS continues to propose a heavy investment in program management-related activities without adequate justification or full disclosure. Program management is an important and integral aspect of any system acquisition program and should be justified in relation to the size and significance of the acquisition activities being performed. In 2006, program management costs represented 135 percent of planned development costs. This means that for every dollar spent on new capabilities, $1.35 was spent on management. The fiscal year 2007 expenditure plan similarly proposed spending $1.25 on management-related activities for every dollar invested in new development. 
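As a rough illustration of the metrics discussed above, the following minimal sketch shows the standard earned value calculations behind cost and schedule variances, along with the management-to-development ratio cited for 2006. The dollar figures and names in the sketch are illustrative assumptions, not US-VISIT data.

```python
# Minimal sketch of standard earned value management (EVM) calculations.
# All figures are illustrative assumptions, not actual US-VISIT data.

def evm_metrics(planned_value, earned_value, actual_cost):
    """Return cost/schedule variances and performance indices for a task order."""
    cost_variance = earned_value - actual_cost        # CV > 0: under planned cost
    schedule_variance = earned_value - planned_value  # SV > 0: ahead of schedule
    cpi = earned_value / actual_cost                  # cost performance index
    spi = earned_value / planned_value                # schedule performance index
    return cost_variance, schedule_variance, cpi, spi

# Hypothetical task order: $10.0M of work planned to date, $9.8M of work
# actually accomplished (earned), at an actual cost of $9.9M.
cv, sv, cpi, spi = evm_metrics(10.0, 9.8, 9.9)
print(f"CV = {cv:+.1f}M, SV = {sv:+.1f}M, CPI = {cpi:.2f}, SPI = {spi:.2f}")

# Management-to-development ratio cited for 2006: $1.35 of program management
# spending for every $1.00 spent on new capabilities.
print(f"Management dollars per development dollar: {1.35 / 1.00:.2f}")
```

In general, variances near zero and indices near 1.0 indicate performance close to the baseline, which is the sense in which task orders can be described as within an acceptable range of performance.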
However, the plan does not explain the reasons for the sizable investment in management-related activities or otherwise justify it on the basis of measurable expected value. Without disclosing and justifying its proposed investment in program management-related efforts, DHS has not shown that such a large amount of funding for these activities represents the best use of resources. Lack of a well-defined and justified exit solution introduces the risk of repeating failed and costly past exit efforts. DHS has issued a high-level schedule for air exit, but information supporting that schedule is not yet available. In addition, there are no other exit program plans available that define what will be done, by what entities, and at what cost in order to define, acquire, deliver, deploy, and operate this capability. This includes developing plans describing expected system capabilities, identifying key stakeholder roles/responsibilities and buy-in, coordinating and aligning with related programs, and allocating funding to activities. Furthermore, DHS has not performed an analysis comparing the life cycle costs of the air exit solution to its expected benefits and risks. Since 2004, we have reported on a similar lack of definition and justification of prior US-VISIT exit efforts, even though prior expenditure plans have allocated funding of $250 million to completing these efforts. As of today, these prior efforts have not produced an operational exit solution. Without better definition and justification of its future exit efforts, the department runs the serious risk of repeating its past failures. US-VISIT’s prime contract cost and schedule metrics show that expectations are being met, according to available data, although the EVM system that the metrics are based on has yet to be independently certified. Notwithstanding this, such performance is a positive sign. However, most of the many management weaknesses raised in this report have been the subject of our prior US-VISIT reports and testimonies and, thus, are not new. Accordingly, we have already made a litany of recommendations to correct each weakness, as well as follow-on recommendations to increase DHS attention to and accountability for doing so. Despite this, recurring legislative conditions associated with US-VISIT expenditure plans continue to be less than fully satisfied, and recommendations that we made 4 years ago have still not been fully implemented. Exacerbating this situation is the fact that DHS did not satisfy two new legislative conditions associated with the fiscal year 2007 expenditure plan, and serious questions continue to exist about DHS’ justification for and readiness to invest current, and potentially future, fiscal year funding relative to an exit solution and program management-related activities. DHS has had ample opportunity to address these many issues, but it has not. As a result, there is no reason to expect that its newly launched exit endeavor, for example, will produce results different from past endeavors—namely, DHS will not have an operational exit solution despite expenditure plans allocating about a quarter of a billion dollars to various exit activities. Similarly, on the basis of past efforts, there is no reason to believe that the program’s disproportionate investment in management-related activities represents a prudent and warranted course of action. All told, this means that needed improvements in US-VISIT program management practices are long overdue. 
Both the legislative conditions and our open recommendations are aimed at accomplishing these improvements, and they need to be addressed quickly and completely. Thus far, they have not been, and the reasons that they have not are unclear. Because our outstanding US-VISIT recommendations already address all of the management weaknesses discussed in this report, we are reiterating our prior recommendations and recommending that the Secretary of DHS report to the department’s authorization and appropriations committees on its reasons for not fully addressing its expenditure plan legislative conditions and our prior recommendations. We received written comments on a draft of this report from DHS, which were signed by the Director, Departmental GAO/IG Liaison Office, and are reprinted in appendix II. In its comments, DHS stated that it agreed with the majority of our findings, adding that the department realizes, and our report supports the fact, that improvements to US-VISIT’s management controls, operational context, and human capital are needed. DHS also stated that the US-VISIT program office would aggressively engage with us to address our open recommendations, noting that it appreciates the guidance provided by our reports. In this regard, DHS’s comments described efforts completed, underway, and planned to address our recommendations, most of which were already reflected in the draft report. New information in DHS’s comments covered its intentions relative to the next US-VISIT expenditure plan and the next US-VISIT strategic plan, both of which are to be issued in fiscal year 2008. This new information is consistent with the intent of our open recommendations. New information also included the US-VISIT Director’s intention to communicate high-priority risks to the Under Secretary of the National Protection and Programs Directorate, which is also in line with our open recommendations. However, DHS also stated that it disagreed with the “partially complete” status that we assigned to one of our open recommendations. It also stated that our observation characterizing past US-VISIT exit efforts as failed and costly implicitly devalued the experience and empirical data that the department gained from these proof-of-concept efforts, and this observation did not recognize relevant information about the program’s use of biographic exit procedures. We do not agree with either of these comments, as discussed below. With respect to the “partially complete” status that our report assigns to the open recommendation for the program to develop and begin implementing a system security plan, and to perform a privacy impact analysis and use the results of this analysis in near-term and subsequent system acquisition decision making, DHS stated that it considers this recommendation satisfied. In this regard, the department describes a number of actions that the program has taken with respect to US-VISIT security and privacy. We do not take issue with the actions that DHS described, and would note that our draft report already recognizes them. Moreover, we too consider the privacy component of our recommendation satisfied. However, we do not agree with the department’s position relative to the scope of US-VISIT’s security strategy in that it does not address known vulnerabilities associated with a US-VISIT component system—TECS. 
As we state in our report, TECS is an integral component of US-VISIT and, according to federal security standards, a system security plan, or in US-VISIT's case the system security strategy, typically covers such component systems. Therefore, we believe that the US-VISIT security risk assessment and security strategy need to explicitly address such vulnerabilities, and thus we do not consider the entire recommendation to be fully satisfied.

With respect to our characterization of past US-VISIT exit efforts, the department stated that we incorrectly viewed these past efforts as "ends in themselves" and as "failed and costly" because they did not immediately conclude with operational systems. According to DHS, the program never intended for these efforts to be more than proof-of-concept learning experiences that would form the basis for more workable future system solutions. We do not agree with these comments. As we state in our report, the program first committed to full deployment of a biometric exit capability in 2003, and it has continued to make similar deployment commitments in subsequent years. At the same time, we have chronicled a pattern of inadequate analysis surrounding the expected costs, benefits, and risks of these exit efforts since 2004, and thus an absence of reliable information upon which to gauge their expected value and base informed exit-related investment decisions. Nevertheless, the program continued to invest each year in these biometric exit efforts, thus far having allocated about $250 million in funding to them. At no time, however, was any analysis produced showing that gaining "experiences and empirical data" justified such a sizable investment. Rather, commitments were repeatedly made in expenditure plans for deploying an operational exit solution. While we recognize the value and role of demonstration and pilot efforts as a means for learning and informing future development efforts, our point is that exit-related efforts have been inadequately defined and justified over the last 4 years, despite being allocated $250 million, and the fiscal year 2007 expenditure plan proposes more of the same.

With respect to not recognizing the program's use of biographic exit procedures in the above-described observation, the department is correct that we describe these procedures in other sections of our report but not as part of this observation. We do not include this information under this observation because its focus is on the 4 years and $250 million that have been devoted to biometric-based exit efforts, and the lack of definition and justification in the fiscal year 2007 expenditure plan for these biometric efforts going forward.

We are sending copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We are also sending copies to the Secretary of Homeland Security, the Secretary of State, and the Director of OMB. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at hiter@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who have made significant contributions to this report are listed in appendix III.
Among the goals of US-VISIT are to facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect the privacy of our visitors.

The US-VISIT expenditure plan is subject to legislative conditions, including that the plan meets the capital planning and investment control review requirements established by OMB, including OMB Circular A-11, part 7; complies with DHS's enterprise architecture; complies with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government; includes a certification by the DHS Chief Information Officer (CIO) that an independent verification and validation (IV&V) agent is currently under contract for the project; is reviewed and approved by the DHS Investment Review Board (IRB), the Secretary of Homeland Security, and OMB; is reviewed by GAO; includes a comprehensive US-VISIT strategic plan; and includes a complete schedule for biometric exit implementation.

On March 20, 2007, DHS submitted its fiscal year 2007 expenditure plan for $362.494 million to the House and Senate Appropriations Subcommittees on Homeland Security. Our objectives were to (1) determine whether the US-VISIT fiscal year 2007 expenditure plan satisfies the legislative conditions, (2) determine the status of our oldest open recommendations pertaining to US-VISIT, and (3) provide observations about the expenditure plan and management of the program. We conducted our work at US-VISIT offices in Arlington, Virginia, from March 2007 through June 2007 in accordance with generally accepted government auditing standards. Details of our scope and methodology are described in attachment 1.

[Summary table: each legislative condition is rated against the following scale—satisfies the condition; satisfies or provides for satisfying some, but not all, key aspects of the condition that we reviewed; or does not satisfy or provide for satisfying all key aspects of the condition we reviewed. Condition-by-condition results are discussed under objective 1 below.]

Summary of Status of Open Recommendations

1. Develop and begin implementing a system security plan and perform a privacy impact analysis and use the results of this analysis in near-term and subsequent system acquisition decision making.
2. Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements management, project management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with Software Engineering Institute (SEI) guidance.
3. Ensure that expenditure plans fully disclose what system capabilities and benefits are to be delivered, by when, and at what cost, as well as how the program is being managed.
4. Ensure that the human capital and financial resources are provided to establish a fully functional and effective program office and associated management capability.
5. Clarify the operational context within which US-VISIT must operate.
6. Determine whether proposed US-VISIT increments will produce mission value commensurate with costs and risks and disclose to its executive bodies and the Congress the results of these business cases and planned actions.
7. Develop and implement a human capital strategy that provides for staffing open positions with individuals who have the requisite core competencies (knowledge, skills, and abilities).
8. Develop and implement a risk management plan and ensure that all high risks and their status are reported regularly to the appropriate executives.
9. Define performance standards for US-VISIT that are measurable and reflect the limitations imposed by relying on existing systems.

DHS data show that the US-VISIT prime contract is being executed according to cost and schedule expectations, as defined and measured by a well-accepted progress measurement technique known as earned value management.

DHS continues to propose disproportionately heavy investment in US-VISIT program management-related activities without adequate justification or full disclosure, to the point of spending $1.25 on management for every dollar invested in new development. Without justifying and fully disclosing such a large investment in program management, questions persist as to whether this represents the best use of DHS resources.

DHS continues to propose spending tens of millions of dollars on exit projects that are not well defined, planned, or justified on the basis of costs, benefits, and risks. Without properly positioning itself to invest effectively and efficiently in an exit solution, DHS risks repeating its prior failed and costly exit efforts.

Because our outstanding US-VISIT recommendations already address all of the management weaknesses discussed in this briefing, we are reiterating our prior recommendations and recommending that DHS report to its congressional authorization and appropriations committees the reasons why it has not fully satisfied its US-VISIT expenditure plan legislative requirements and our prior recommendations. In comments on a draft of this briefing, DHS stated that the briefing was factually correct, that GAO's guidance provided value to the program, and that it would continue to address our recommendations.

The program's scope includes collecting, maintaining, and sharing biometric and other information on certain foreign nationals who enter and exit the United States; identifying foreign nationals who (1) have overstayed or violated the terms of their admission, (2) can receive, extend, or adjust their immigration status, or (3) should be apprehended or detained by law enforcement officials; detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics; and facilitating information sharing and coordination within the immigration and border management community.

DHS originally planned to deliver biometric entry and exit capability in four major increments. Increments 1 through 3 were to be interim, or temporary, solutions that focused on building interfaces among existing (legacy) systems, enhancing the capabilities of these systems, and deploying these systems to air, sea, and land ports of entry (POEs). Increment 4 was to be a series of incremental releases, or mission capability enhancements, that were to deliver long-term strategic capabilities for meeting program goals. In May 2004, DHS awarded an indefinite-delivery/indefinite-quantity prime contract to Accenture and its partners for delivering future US-VISIT capabilities.

Increment 1 was intended to establish entry and exit capabilities at air and sea POEs.
Increment 1 air and sea entry capabilities were deployed on January 5, 2004, at 115 airports and 14 seaports for individuals requiring nonimmigrant visas to enter the United States. These capabilities include collecting and matching biographic information, biometric data (two digital index finger scans), and a digital photograph for selected foreign nationals. In addition, several types of increment 1 air and sea exit devices for collecting biometric data were piloted at 12 airports and 2 seaports. This 3-year pilot focused on the technical feasibility of a biometric exit solution at air and sea POEs. The pilot ended in May 2007.

Increment 2 was originally to extend US-VISIT entry and exit capabilities to the 50 busiest land POEs by December 31, 2004. Subsequently, the increment was divided into three parts—2A, 2B, and 2C.

Increment 2A established entry capabilities at land, sea, and air POEs to biometrically authenticate machine-readable visas and other travel and entry documents issued by the Department of State (State) and DHS to foreign nationals. These capabilities were deployed to all POEs by October 23, 2005, except for e-Passports, which were deployed to 33 POEs by November 14, 2006. These 33 POEs account for 97 percent of all travelers entering with e-Passports.

Increment 2B extended the increment 1 entry solution to the 50 busiest land POEs and included redesigning the process for issuing a handwritten form I-94 to enable the electronic capture of biographic, biometric (unless the traveler is exempt), and related travel documentation for travelers arriving in secondary inspection. This capability was deployed to the 50 busiest land POEs as of December 29, 2004.

Increment 2C was a proof-of-concept demonstration of the feasibility of using passive radio frequency identification (RFID) technology to record travelers' entry and exit via a unique ID number tag embedded in the form I-94. It was originally deployed at five land POEs. The demonstration was terminated in November 2006.

Increment 3 was to extend increment 2B entry capabilities to 104 land POEs by December 31, 2005. It was essentially completed as of December 19, 2005.

Increment 4 – Unique Identity. All expenditure plans prior to fiscal year 2006 had described increment 4 as a yet-to-be-defined, strategic solution. The fiscal year 2006 plan described increment 4 as the combination of two projects: (1) transition to 10 fingerprints in the Automated Biometric Identification System (IDENT) and (2) interoperability between IDENT and the Federal Bureau of Investigation's Integrated Automated Fingerprint Identification System (IAFIS). The fiscal year 2007 expenditure plan combines the two projects, with a third called enumeration (developing a single identifier for each individual), into a single project referred to as Unique Identity.

Areas of expenditure/Projects (costs in thousands; see the project descriptions that follow):

Exit: Includes planning and implementation of the chosen deployment option for an exit screening program at air and sea ports.

U.S. travel documents and e-Passports: Includes development, testing, and deployment of public key directory (PKD) validation services for e-Passport readers.

Unique Identity: Includes implementing the 10-fingerprint scanners and the interim data sharing model (iDSM); related systems interoperability; associated facilities and engineering support; and systems architecture, engineering and integration, and design.
Data Integrity and Biometric Support Services: Includes providing qualified leads and actionable information to U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement, and establishing lookout records for visa denials and adverse actions by border officials.

Program management and operations: Includes the government salaries and benefits for 115 government program office positions necessary to manage and operate the program, including relocation costs, personnel security checks, and training.

Contractor services-program management: Includes the program office support contractors.

Operations and maintenance: Includes operations and maintenance of Increment 1, 2, and 3 systems, including technical, application, system, network, and infrastructure support costs.

Program management reserve: Includes funds allocated to accommodate the unknown timing and magnitude of risks.

US-VISIT has adopted its own methodology for managing its projects throughout their respective life cycles. This methodology is known as the US-VISIT Enterprise Life Cycle Methodology (ELCM). Within the ELCM is a component methodology for managing software-based system projects known as the US-VISIT Delivery Methodology (UDM). According to version 4.3 of UDM (April 2007), it applies to new development projects and to existing, operational projects; specifies the documentation and reviews that should take place within each of the methodology's six phases (plan, analyze, design, build, test, and deploy); and allows for tailoring to meet the needs and requirements of individual projects, in which specific activities, deliverables, and milestone reviews that are appropriate for the scope, risk, and context of the project can be set for each phase of the project.

The chart on the following page shows where US-VISIT projects are in terms of the life cycle methodology. [Chart: US-VISIT Project Status (New Development and Operational), including planning, development, and implementation of the Biometric Identification Systems Project, now referred to as Unique Identity (IDENT/IAFIS integration and IDENT 10-print).]

DHS recently changed its investment management process. Prior to 2006, DHS IT programs, including US-VISIT, were subject to key decision point reviews. According to DHS, this approach was adopted from the Department of Defense's investment management process, and while well suited for the acquisition of fighter jets, ships, and similar assets, it was not well suited for the acquisition of IT systems. Accordingly, DHS drafted an Investment Review Process guide that adopts an approach using milestone decision points (MDP) linking five life cycle phases: (1) project initiation, (2) concept and technology development, (3) capability development and demonstration, (4) production and deployment, and (5) operations and support. According to DHS, this guide provides more flexibility, allowing DHS to tailor the number of phases and milestone reviews based on risk and visibility. MDP reviews can be performed concurrently with an expenditure plan review. The draft guide was issued in March 2006; as of May 2007, it had not been approved. Under the draft guide, a program sends an investment review request to the Integrated Project Review Team (IPRT) prior to the initial MDP. The IPRT assigns the program to a portfolio and schedules the program for a Joint Requirements Council and/or IRB review.
According to the official from DHS's Program Analysis and Evaluation Directorate who is responsible for overseeing program adherence to the investment control process, it is being used for all DHS programs.

Objective 1: Legislative Conditions

The fiscal year 2007 US-VISIT expenditure plan, related program documentation, and program officials' statements satisfy (in part or in full) most, but not all, of the legislative conditions.

Condition 1. The plan, including related program documentation and program officials' statements, satisfies or partially satisfies all aspects of the capital planning and investment control review requirements established by OMB, including OMB Circular A-11, part 7. The table that follows provides examples of the results of our analysis, including areas in which the A-11 requirements have been and have not been fully satisfied. Given that the A-11 requirements are intended to minimize a program's exposure to risk, permit performance measurement and oversight, and promote accountability, any areas in which the program falls short of the requirements reduce the chances of delivering cost-effective capabilities and measurable results on time and within budget.

A-11 requirement: Provide a brief description of the investment and its status in the capital planning and investment control review, including major assumptions made about the investment.

The expenditure plan and fiscal year 2007 Exhibit 300 provide a description of US-VISIT but do not include its status in the DHS capital planning and investment control process. According to program officials, the program was re-evaluated under the MDP process defined in the draft DHS investment review process guide. On February 7, 2007, it passed its first MDP and is now undergoing its second MDP review. Also, the expenditure plan and related program documents identify a number of program assumptions. Examples of assumptions cited in the fiscal year 2007 Exhibit 300 submission include (1) existing facilities at land POEs will not support the proposed incorporation of biometric devices without investment in equipment and infrastructure, and (2) improved exit processes are needed to collect accurate data on departures.

A-11 requirement: Provide a summary of the investment's risk assessment, including how 19 OMB-identified risk elements are being addressed.

The US-VISIT enterprise risk assessment was completed in December 2005. It identified a number of risks, their likelihood of occurrence, their potential impact, and recommended controls to address each risk. The most recent version of the risk management plan was approved in February 2007. Under the processes defined in this plan, risks are to be monitored and reviewed by program management and stakeholders through integrated project teams. All identified risks are to be logged in the risk database and are to be individually reviewed by the Director. Both the Exhibit 300 and the risk management plan address the 19 OMB-identified risk elements.

A-11 requirement: Demonstrate that the investment is included in the agency's enterprise architecture and capital planning and investment control process. Illustrate the agency's capability to align the investment to the Federal Enterprise Architecture (FEA).

The plan does not describe US-VISIT relative to the DHS enterprise architecture (EA) or the capital planning and investment control process. Moreover, the last review of program compliance with the DHS EA was in August 2004, and since then US-VISIT and the DHS architecture have changed significantly.
With regard to the FEA, the fiscal year 2007 OMB Exhibit 300 budget submission contains tables that satisfy OMB's requirement for listing the various aspects of the FEA that the program supports. In February 2007, the program completed an MDP1 review, which program officials told us revalidated the program. The program has since submitted its MDP2 review package to the Enterprise Architecture Center of Excellence. US-VISIT's architecture alignment is further discussed under the legislative condition 2 section of this briefing.

A-11 requirement: Provide a description of an investment's security and privacy issues. Summarize the agency's ability to manage security at the system or application level. Demonstrate compliance with the certification and accreditation processes as well as the mitigation of IT security weaknesses.

As we previously reported, US-VISIT's 2004 security plan and privacy impact assessments generally satisfied OMB and National Institute of Standards and Technology (NIST) security guidance. Further, the expenditure plan states that all of the US-VISIT component systems have been certified and accredited and given authority to operate. Also, the program office developed a security strategy in December 2006 that was based on the 2005 risk assessment. However, this security strategy was limited to the systems under US-VISIT control and does not mention, for example, the Treasury Enforcement Communications System (TECS), which provides biographic information to US-VISIT and is owned by Customs and Border Protection. According to NIST Special Publication 800-18, a comprehensive security strategy should include all component systems. We have ongoing work to evaluate the quality of US-VISIT security documents and practices.

A-11 requirement: Provide a summary of the investment's status in accomplishing baseline cost and schedule goals through the use of an earned value management (EVM) system or operational analysis, depending on the life-cycle stage.

Condition 2. The plan, including related program documentation and program officials' statements, partially provides for satisfying the condition that it comply with DHS's EA. According to federal guidelines and best practices, investment compliance with an EA is essential for ensuring that an organization's investment in new and existing systems is defined, designed, and implemented in a way that promotes integration and interoperability and minimizes overlap and redundancy, thus optimizing enterprisewide efficiency and effectiveness. A compliance determination is not a one-time event that occurs when an investment begins but is, rather, a series of determinations that occurs throughout an investment's life cycle as changes to both the EA and the investment's architecture are made. The DHS Enterprise Architecture Board, supported by the Enterprise Architecture Center of Excellence, is responsible for ensuring that projects demonstrate adequate technical and strategic compliance with the department's EA. The DHS Enterprise Architecture Board has not conducted a detailed review of US-VISIT architecture compliance in more than 2 years.
In August 2004, the board reviewed US-VISIT's architectural alignment with some aspects of the DHS EA, and it recommended that US-VISIT be given conditional approval to proceed. However, we reported in February 2005 that this architectural compliance was limited because:

DHS's determination was based on version 1.0 of the EA, which was missing, in part or in whole, all the key elements expected in a well-defined architecture, such as a description of business processes, information flows among these processes, and security rules associated with these information flows.

DHS did not provide sufficient documentation to allow us to understand the methodology and criteria for architecture compliance or to verify the analysis justifying the conditional approval.

Moreover, the next architecture alignment review did not occur until more than 2 years later, in November 2006. This review was part of US-VISIT's MDP1 revalidation review, and it focused on compliance with 2 components of the DHS EA 2006. In February 2007, US-VISIT received MDP1 approval with the stipulation that the program undergo an MDP2 review within 60 days. This February 2007 MDP1 alignment determination does not fully satisfy the legislative condition for several reasons:

The review was based on US-VISIT documentation that was not current. In particular, the US-VISIT Mission Needs Statement did not reflect recent changes to the program, such as the IDENT/IAFIS interoperability and the expansion of IDENT to collect 10, rather than 2, prints.

The review assessed compliance with only general aspects of the DHS EA, such as the investment portfolio, the architecture principles, and the business model. It did not include US-VISIT's compliance with other relevant aspects of the EA, such as the data and information security components.

The review was based on DHS EA 2006. We reported in May 2007 that this version was missing important architectural content and did not address most of the comments made by DHS stakeholders. As a result, we concluded that it was not complete, consistent, understandable, or usable.

Program officials told us that they submitted documentation for a more comprehensive MDP2 alignment review to the Enterprise Architecture Center of Excellence in April 2007. They also stated that they have mitigated the risks of US-VISIT being misaligned with the DHS EA through other means, including: submitting the technical baseline of existing hardware and software to the EA Center of Excellence for inclusion in the DHS EA; submitting technology insertion requests for new equipment planned for US-VISIT, such as RFID technology, to the EA Center of Excellence for review and inclusion in the DHS EA; and relating US-VISIT capabilities to the business and services models of the FEA reference models.

Notwithstanding these steps, DHS has yet to demonstrate, through verifiable documentation and methodologically based analysis, that US-VISIT is aligned with a well-defined DHS EA. As a result, the program will remain at risk of being defined and implemented in a way that does not support optimized departmentwide operations, performance, and achievement of strategic goals and outcomes.

Condition 3. The plan, including related program documentation and program officials' statements, partially provides for satisfying the condition that it comply with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government.
Federal IT acquisition requirements, guidelines, and management practices provide an acquisition management framework that is based on the use of rigorous and disciplined processes for planning, managing, and controlling the acquisition of IT resources. Effective acquisition management processes are embodied in published best practices models, such as the Software Engineering Institute (SEI) Capability Maturity Models®. These models explicitly define, among other things, acquisition process management controls that are recognized hallmarks of successful organizations and that, if implemented effectively, can greatly increase the chances of acquiring software-intensive systems that provide promised capabilities on time and within budget.

We reported in September 2003 that the program office had not defined key acquisition management controls to support the acquisition of US-VISIT, and therefore its efforts to acquire, deploy, operate, and maintain system capabilities were at risk of not meeting system requirements and benefit expectations on time and within budget. Subsequently, the program adopted SEI Capability Maturity Model Integration (CMMI®) to guide its efforts to employ effective acquisition management practices and approved an acquisition management process improvement plan dated May 16, 2005. One of the goals of this plan was to achieve a CMMI® level 2 capability rating from SEI by October 2006.

In September 2005, DHS's initial assessment of 13 US-VISIT key acquisition process areas revealed a number of weaknesses. In light of this, US-VISIT updated its acquisition management process improvement plan, narrowing the scope of the process improvement activities to six of the CMMI process areas (project planning, project monitoring and control, requirements management, risk management, configuration management, and product and process quality assurance) and focusing on two US-VISIT projects: U.S. Travel Documents-ePassports (formerly Increment 2A) and Unique Identity. Under the updated plan, the goal for an external CMMI evaluation remained October 2006. Examples of the weaknesses identified include:

Insufficient definition of processes and preparation of supporting documents for areas such as systems development, budget and finance, facilities, and strategic planning (e.g., product work flow among organizational units was unclear and not documented, and roles, responsibilities, and assignments for performing work tasks and activities were not adequately defined and documented).

Lack of policies, process descriptions, and templates for requirements development and management.

Lack of definition of roles, responsibilities, work products, expectations, resources, and accountability of external stakeholder organizations.

The program has since revised its process improvement plan. Among other things, the revised plan delays the date for having an external CMMI evaluation from October 2006 to November 2007. At the same time, the program has continued to address the weaknesses discovered during earlier internal assessments. Based on its latest periodic assessment (March 2007), the program office reports that 83 percent of key practices are now either fully or largely implemented, up from 26 percent in August 2005 (see chart on next slide). In addition, the fiscal year 2007 expenditure plan reported progress in a seventh key process area not included in the program's CMMI improvement efforts: contract tracking and oversight.
In 2006, we reported that the program office had not effectively overseen US-VISIT-related contract work performed on its behalf by other DHS and non-DHS agencies, and that these agencies did not always establish and implement the full range of controls associated with effective management of contractor activities. Further, neither the program office nor the other agencies had implemented effective financial controls. Since this report was issued, the program office has instituted the use of oversight plans for new task order and contract awards and is developing a set of requirements for reimbursable contracts that address our recommendations to enhance the probability of successful performance and reduce risks.

Notwithstanding this reported progress, the program's acquisition management improvement efforts are focused on only seven acquisition management process areas. Other areas are also relevant to the program and need to be addressed. The status of the program office's efforts to implement our recommendations aimed at implementing the full range of acquisition management controls is discussed later in this briefing.

Condition 4. The plan satisfies the condition that it include a certification by the DHS CIO that an IV&V agent is currently under contract for the project. On February 21, 2007, the DHS Deputy CIO certified in writing that two independent verification and validation agents were under contract for US-VISIT and that these agents met the requirements and standards for an IV&V agent.

Condition 5. The plan, including related program documentation and program officials' statements, satisfies the requirement that it be reviewed and approved by the DHS Investment Review Board, the Secretary of Homeland Security, and OMB. The DHS Deputy Secretary, who is also the chair of the Investment Review Board, approved the fiscal year 2007 expenditure plan, and OMB approved the plan on March 20, 2007.

Condition 6. The plan satisfies the requirement that it be reviewed by GAO. Our review was completed on June 15, 2007.

Condition 7. The plan does not satisfy the condition that it include a comprehensive US-VISIT strategic plan. Strategic plans are the starting point and basic underpinning for results-oriented management. Such plans articulate the fundamental mission of an organization or program and lay out its long-term goals and objectives for implementing that mission, including the resources needed to reach these goals. Federal legislation and guidelines require that agencies' strategic plans include six key elements: (1) a comprehensive mission statement, (2) strategic goals and objectives, (3) strategies and the various resources needed to achieve the goals and objectives, (4) a description of the relationship between the strategic goals and objectives and annual performance goals, (5) an identification of key external factors that could significantly affect the achievement of strategic goals, and (6) a description of how program evaluations were used to develop or revise the goals and a schedule for future evaluations. As we have previously reported, strategic plans should also include a discussion of management challenges facing the program that may threaten its ability to meet long-term strategic goals, as well as efforts to coordinate among cross-cutting programs, activities, or functions.
While the US-VISIT program is not required to explicitly follow these guidelines, the guidelines nonetheless provide a framework for effectively developing strategic plans and the basis for program accountability. However, the US-VISIT strategic plan does not include any of these key elements associated with effective strategic plans. In summary, the plan describes eight desired program capabilities and provides an implementation strategy that describes how each of these capabilities will be delivered over a multi-year investment horizon through three categories of activities: Foundation, Transformation, and Globalization.

Foundation activities, which are described as modernization, enhancement, and expansion of capabilities and technologies, as well as leveraging current capabilities and technologies.

Transformation activities, which are described as the implementation of processes and technologies that cut across the particular functions and entities that make up the immigration and border management system.

Globalization activities, which are described as the coordination and sharing of information with foreign governments to improve the ability to detect and prevent potential threats from either entering the United States or remaining here.

However, the plan does not provide time frames for the completion of these broad investment categories. The plan also does not include strategic goals and objectives or strategies for achieving goals and objectives. As a result, it is not clear what program capabilities will be delivered when, and whether they are aligned with the program's goals and objectives. Further, the plan does not include a comprehensive mission statement, describe the relationship between strategic goals and annual performance goals, identify the external factors that could affect the program, or describe the program evaluations used to establish or revise the goals. In addition, the US-VISIT strategic plan does not address management challenges facing the program, such as those addressed in our past recommendations. And although the strategic plan identifies the ability to communicate with external stakeholders as a desired capability, the plan does not provide any evidence of such past communication or explain the relationship between US-VISIT and other organizations within the border and immigration management enterprise. For example, it does not describe the relationship between US-VISIT and DHS's Western Hemisphere Travel Initiative, even though both programs involve the entry of certain foreign individuals at U.S. POEs.

While the strategic plan is missing important content, other related program documentation includes some of this content. For example, the fiscal year 2007 expenditure plan and the US-VISIT Mission Needs Statement state the program's mission and goals. In addition, the US-VISIT Program Blueprint describes eight core capabilities, which are very similar to those described in the strategic plan, and maps those capabilities to four business outcomes. However, the Blueprint does not include strategic goals, so it is not clear whether the business outcomes are aligned with US-VISIT's goals. Further, the outcomes are not described in the strategic plan. The Program Blueprint also notes that responsibilities for immigration and border management are spread across multiple agencies and departments. However, it does not provide clear delineations of these organizations' respective tasks, services, or efforts.
Further, the strategic plan does not cite or describe any coordination efforts to address this situation. Additionally, the Blueprint identifies border and immigration management enterprise stakeholders and identifies, for each stakeholder, needs and priorities, challenges, how the business outcomes will benefit the stakeholder, and stakeholder constraints that will affect business outcomes. This means that while some of the content of a US-VISIT strategic plan is captured in a fragmented fashion across a range of documents, the full range of content needed to define an authoritative strategic direction, focus, and roadmap for the program that is approved by departmental leadership is missing. Without it, DHS reduces the chances that the US-VISIT program will achieve desired results and meet its goals and objectives.

Condition 8. The plan, including related program documentation and program officials' statements, does not satisfy the condition that it include a complete schedule for biometric exit implementation. The fiscal year 2007 expenditure plan addresses DHS's near-term deployment plans for biometric exit capabilities at air and sea POEs. Further, it notes the absence of near-term biometric options for land POEs and mentions only a possible near-term, interim option that is being considered. In addition, the expenditure plan addresses all three locations of US-VISIT technology (air, sea, and land). However, the expenditure plan's discussion of exit capabilities is conceptual and general and does not contain a schedule for the full implementation of US-VISIT exit capabilities at air, sea, and land POEs.

The plan states that DHS plans to incorporate air exit into the airline check-in process. However, the plan does not provide any details as to what capabilities will be acquired and deployed, when, and at what cost. Instead, it states that DHS plans to integrate US-VISIT's efforts with CBP's pre-departure Advance Passenger Information System and TSA's Secure Flight for purposes of partnering with the airline industry. Further, the plan does not include any schedule of air exit implementation activities but, rather, simply states that DHS plans to initiate efforts on its air exit solution at an unspecified time during the third quarter of fiscal year 2007 and will fully deploy the air exit solution by an unspecified time during calendar year 2008. On June 11, 2007, DHS provided us with a schedule for air exit, which the department characterized as high-level. For example, it does not include the underlying details supporting the timelines for such areas of activity as system design, system testing, and system development. However, program officials told us that greater detail existed to support the schedule but that, because this detail had not been approved by DHS, it could not be provided. The schedule provided indicates that the air exit solution will be fully deployed by June 2009, which is at least six months after the deployment date provided in the expenditure plan.

The plan states that DHS will initiate planning efforts on the sea exit deployment at an unspecified time during fiscal year 2007 and that sea exit will emulate the technology and operational plans used for the air exit solution. However, the plan does not provide any details about how, when, and at what cost the sea exit solution will be accomplished, or provide a completion date or any interim dates.
Consistent with our December 2006 report (GAO, Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry, GAO-07-248, Washington, D.C.: Dec. 6, 2006), the plan states that implementing a biometric exit solution at land POEs is significantly more complicated and costly than air or sea exit because it would require a costly expansion of existing exit capacity, including physical infrastructure, land acquisition, and staffing. Because of this, the plan concludes that land exit cannot be practically based on biometric validation in the short term. In lieu of biometric-based exit at land POEs in the near term, the plan states that DHS will initially seek to match entry and exit records using biographic information in instances where departure information is not collected from an individual who leaves the country, as in the case of an individual who does not submit his or her Form I-94 upon departure. However, the plan does not specify what this near-term focus entails or how, when, and at what cost it will be accomplished. Rather, it says that DHS has not yet determined a time frame or any cost estimates for the initiation of a land exit solution.

Recommendation 1: Develop and begin implementing a system security plan and perform a privacy impact analysis and use the results of this analysis in near-term and subsequent system acquisition decision making.

A system security plan and privacy impact assessment are important to understanding system requirements and ensuring that the proper safeguards are in place to protect system data, resources, and individuals' privacy. Both best practices and federal guidance advocate their development and use. The purpose of a system security plan is to define the steps that will be taken (i.e., the security controls that will be implemented) to cost-effectively address known security risks. We reported in 2005 that the program office had developed a US-VISIT system security plan that was generally consistent with federal practice. However, we also reported at that time that the plan was not based on a security risk assessment. In December 2005, the program office developed a US-VISIT risk assessment that addressed the risk elements required by OMB, including having an inventory of known risks, their probability of occurrence and impact, and recommended controls to address them. At that time, program officials told us that they intended to develop a US-VISIT security strategy that reflected the results of this risk assessment. In December 2006, the program office developed a US-VISIT security strategy and has since begun implementing it. For example, it has conducted security evaluations of commercial off-the-shelf software products before adding them to the program's technical baseline. However, the scope of this strategy does not extend to all the systems that make up US-VISIT. For example, the Treasury Enforcement Communications System (TECS), an integral component of US-VISIT, is not under the US-VISIT inventory of systems because it is owned by Customs and Border Protection. The fact that the US-VISIT security strategy's scope is limited to only those systems that the program office owns is not consistent with our recommendation. We have ongoing work to evaluate the quality of US-VISIT security documents and practices, including TECS's implementation of security controls.
The purpose of a privacy impact assessment is to (1) ensure that the handling of information conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. In February 2006, we reported that the program office had developed and periodically updated a privacy impact assessment. However, we also reported that system documentation only partially addressed privacy. Since then, program officials told us that they have taken steps to ensure that the impact assessment's results are used in deciding and documenting the content of US-VISIT projects. For example, they said that privacy office representatives are included in key project definition, design, and development meetings to ensure that privacy issues are addressed and that key system documentation now reflects privacy-based needs. Furthermore, US-VISIT privacy officials recently conducted an audit of system documentation to ensure that privacy is being addressed. They found only a single instance where privacy should have been addressed in system documentation but was not. Finally, our review of recently issued system documentation shows that privacy concerns are being addressed.

Recommendation 2: Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements management, project management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with Software Engineering Institute (SEI) guidance.

Effective acquisition management controls are important contributors to the success of programs like US-VISIT. SEI has defined a range of acquisition management controls as part of its capability maturity models, which, when properly implemented, have been shown to increase the chances of delivering promised system capabilities on time and within budget. In June 2003, we first reported that the program did not have key acquisition management controls in place, and we reiterated this point in September 2003. In May 2005, the program office developed a plan for satisfying SEI acquisition management guidance and began implementing it. Its 2005 assessment addressed 13 SEI key process areas, a number of which were consistent with the seven management controls that we recommended. In April 2006, the program office updated its plan to focus on six key process areas: acquisition project planning, requirements management, project monitoring and control, risk management, configuration management, and product and process quality assurance. Since 2005, the program office reports that it has made progress in implementing the 113 practices associated with these six key process areas, as previously discussed. However, the six areas of focus do not include all of the management controls that we recommended. For example, solicitation, contract tracking and oversight, and transition to support are not included. While the program office reports that it has also addressed contract tracking and oversight as part of responding to a later recommendation that we made (not one of the nine recommendations addressed in this briefing), it also reports that it has yet to address the other two management controls.
It is important for the program office to address all of the management controls that we recommended. If it does not, it will unnecessarily increase program risks.

Recommendation 3: Ensure that expenditure plans fully disclose what system capabilities and benefits are to be delivered, by when, and at what cost, as well as how the program is being managed.

The fiscal year 2007 expenditure plan discloses planned system capabilities, estimated schedules and costs, and expected benefits, but meaningful information about schedules, costs, and benefits is missing. Further, while the plan does provide information on some acquisition activities, it does not adequately describe how the program is being managed in a number of areas and does not disclose the management challenges that it continues to face. Without such information, the expenditure plan does not provide Congress with enough information to exercise effective oversight and hold the department accountable.

The fiscal year 2007 expenditure plan provides time commitments for some capabilities; however, these are not specific. For example, the plan states the following:

Deployment of the 10-print pilot to 10 air locations to begin in late 2007.
Initial Operating Capability functionality targeted for September 2008.
Air exit solution deployment to begin in the third quarter of 2007 and continue through 2008.
Work to begin in fiscal year 2007 on a sea exit deployment that will emulate the technology and operational plans adopted for the commercial aviation environment.

Moreover, no schedule commitments are made for the development and deployment of PKD validation capabilities.

The fiscal year 2007 expenditure plan identifies each project's funding. In some cases, this information is provided with meaningful detail that allows for an understanding of how the funds will be used. For example, Unique Identity shows the following activities and costs:

Acquisition and Procurement ($21.2 million): purchase and initial deployment of 10-print capture devices and upgrades in network capabilities (bandwidth and technology refreshes) at 119 airports, 9 seaports, and 155 land ports.
Update DHS Border and Process Technology ($2.0 million): update device-to-client biometric interfaces and further 10-print prototype testing and evaluation.

However, in other cases, costs are not described at a level that would permit such understanding. For example, Contractor Services (Project Assigned) ($12.1 million) is described as contractor services and support for project-related resource planning and management (including the areas of configuration, acquisition, and risk), as well as project performance metrics and reporting in the areas of cost, schedule, scope, and quality management. This exact wording is also used for this category in two other projects with different costs.

In addition, unlike prior expenditure plans, carryover funds from prior years that are planned for use in 2007 are not allocated to 2007 activities. For example, for the Exit project, a total of $7.3 million in fiscal year 2007 funds, plus fiscal year 2006 carryover funds of $20 million, is mentioned as being allocated to begin the process of deploying DHS's integrated air exit strategy and initial planning for sea exit. However, only the $7.3 million is allocated among the activities listed. No information is presented regarding the allocation of the $20 million in carryover funds to these activities or any others.

The fiscal year 2007 expenditure plan cites benefits associated with the projects. However, the benefits are broadly stated.
For example, the plan describes exit benefits as "Safer and more secure travel" and Unique Identity benefits as "Facilitation of efficient, yet secure, trade and travel."

The 2007 expenditure plan describes a range of key acquisition management activities and control areas. However, the plan does not fully disclose challenges that the program faces in managing acquisition activities, nor does it discuss key areas in which change is occurring, such as capital planning and investment controls and human capital management.

Recommendation 4: Ensure that the human capital and financial resources are provided to establish a fully functional and effective program office and associated management capability.

DHS established the US-VISIT program office in July 2003 and determined the office's staffing needs to be 115 government and 117 contractor personnel. In September 2003, we reported that the program office lacked adequate human capital and financial resources. In August 2004, the program office, in conjunction with OPM, developed a draft human capital plan. Agency officials stated that, at one point in 2006, all of the 115 government positions were filled. In addition, the program has received about $1.4 billion in funding, and we recently reported that it has devoted an increasing proportion of its annual appropriation to program office and related management activities. Since then, however, 21 of the government positions have become vacant. According to program officials, they have taken interim steps to address this void in leadership by temporarily assigning other staff to cover these positions. They added that they plan to fill all the positions through aggressive recruitment and that they do not consider the vacancies to present a risk to the program. However, without adequate human capital, particularly in key positions and for extended periods, program risks will invariably increase.

Recommendation 5: Clarify the operational context within which US-VISIT must operate.

As we have previously reported, all programs exist within a larger operational (and technological) context or frame of reference that is captured in such strategically focused instruments as strategic plans and an EA. Additionally, having a strategic plan and an EA is a recognized best practice and is provided for in federal guidance. In 2003, we reported that DHS had yet to define the operational context in which US-VISIT is to operate, such as a well-defined departmental EA or a departmentally approved strategic plan. In the absence of this operational context, we stated that program officials could make assumptions and decisions that, if they proved inconsistent with subsequent departmental policy decisions, would require US-VISIT rework to make it interoperable with related programs and systems, such as the FBI's 10-print biometric identity system known as IAFIS. Moreover, we stated that US-VISIT could be defined and implemented in a way that made it duplicative of other programs and systems, such as the Secure Border Initiative (SBI) or the Western Hemisphere Travel Initiative. Since then, we have continued to report on the absence of this context. Most recently, we reported in February 2006 that this operational context was still a work in progress.
Specifically, we found that although a strategic plan had been drafted that program officials said showed how US-VISIT was aligned with DHS's organizational mission and defined an overall vision for immigration and border management across multiple departments and external stakeholders with common objectives, strategies, processes, and infrastructures, this plan had been awaiting departmental approval at that time for more than 11 months.

The Western Hemisphere Travel Initiative (WHTI) is to implement the provisions of the Intelligence Reform and Terrorism Prevention Act of 2004 requiring citizens of the United States, Canada, Bermuda, and Mexico to have a designated document for entry or re-entry into the United States that establishes the bearer's identity and citizenship.

US-VISIT continues to lack a well-defined operational context. As discussed earlier in this briefing, the fiscal year 2007 expenditure plan includes an appendix titled "Comprehensive Strategic Plan for US-VISIT," which the Program Director told us is the department's officially approved US-VISIT strategic plan. However, as we discussed in the legislative conditions section of the briefing, key elements of relevant federal guidance for a strategic plan are not addressed in this plan. For example, no specific outcome-related goals for major functions and operations of US-VISIT or specific objectives to meet those goals are provided, nor does the plan address external factors that could affect achievement of program goals. Finally, this strategic plan does not address the explicit relationships between US-VISIT and either the SBI or WHTI programs.

We recently reported that DHS's EA has evolved beyond prior versions. However, the DHS EA 2006 was not complete for several reasons. For example, it was missing architecture content, such as a transition plan and evidence of a gap analysis between the "as is" and "to be" architectures, and it was developed with limited stakeholder input: support contractors and organizational stakeholders provided a range of comments on the completeness, internal consistency, and understandability of a draft of the EA, but the majority of comments were not addressed. Because the EA was not complete, internally consistent, or understandable, we concluded that its usefulness was limited, in turn limiting DHS's ability to guide and constrain IT investments in a way that promotes interoperability and reduces overlap and duplication.

Program officials told us that they have met with related programs to coordinate their respective efforts. They stated that DHS's Office of Screening Coordination and Operations (SCO) has been trying to coordinate and unify the departmental components' initiatives by bringing border management stakeholders together. However, specific coordination efforts have not been assigned to the SCO or any other DHS entity. The absence of a well-defined operational context within which to define and pursue US-VISIT has been long-standing. Until this context exists, the department will be challenged in its ability to define and implement US-VISIT and related border security and immigration management programs in a manner that promotes interoperability, minimizes duplication, and optimizes departmental capabilities and performance.

Recommendation 6: Determine whether proposed US-VISIT increments will produce mission value commensurate with costs and risks and disclose to its executive bodies and the Congress the results of these business cases and planned actions.
The decision to invest in any system capability should be based on reliable analysis of return on investment. Moreover, according to relevant guidance, incremental investments in major systems should be individually supported by such analyses of benefits, costs, and risks. Without such analyses, an organization cannot adequately know whether a proposed investment is a prudent and justified use of limited resources. In June and September 2003, and in February 2005, we reported that proposed investments in the then entry/exit system, US-VISIT Increment 1, and US-VISIT Increment 2B, respectively, were not justified by reliable business cases. Further, in February 2006 we reported that while a business case was prepared for Increment 1B, the analysis performed met only four of the eight criteria in OMB guidance. For example, it did not include a complete uncertainty analysis for the alternatives evaluated. More recently, the program office has developed business cases for two projects: Unique Identity and U.S. Travel Documents-ePassports (formerly Increment 2A). However, the program office has not developed a business case for another project that it plans to begin implementing this year: biometric exit at air POEs. As discussed later in the observations section of this briefing, the program office has defined very little about its proposed solution for meeting its exit needs at air POEs, including an analysis of alternative solutions to meeting this need on the basis of their relative costs, benefits, and risks. Until the program office has reliable business cases for each US-VISIT project in which alternative solutions for meeting mission needs are evaluated on the basis of costs, benefits, and risks, it will not be able to adequately inform its executive bodies and the Congress about its plans and will not provide the basis for prudent investment decision making. Recommendation 7: Develop and implement a human capital strategy that provides for staffing open positions with individuals who have the requisite core competencies (knowledge, skills, and abilities). Strategic management of human capital involves proactive efforts to understand an entity’s future workforce needs, existing workforce capabilities, and the gap between the two, and to chart a course of action defining how this gap will be continuously addressed. Such an approach to human capital management is both a best practice and a provision of federal guidance. In September 2003, we reported that US-VISIT did not have a human capital strategy. In February 2006, we reported that the program office had issued a human capital plan and begun implementing it. However, it stopped doing so during 2006, pending departmental approval of a DHS-wide human capital initiative, known as MAXHR, and because all program office positions were filled at the time. However, as noted earlier, the program office now reports 21 vacant government positions, including critical leadership positions. According to program officials, US-VISIT recently developed a new human capital plan as part of its Organizational Improvement Initiative, and this plan is now being reviewed by the department. Because its approval is pending, we were not provided a copy. Recommendation 8: Develop and implement a risk management plan and ensure that all high risks and their status are reported regularly to the appropriate executives.
In September 2003, we reported that US-VISIT was a risky undertaking due to several factors, including its large scope and complexity and various program weaknesses. We concluded that these risks, if not effectively managed, would likely cause program cost, schedule, and performance problems. Since then, US-VISIT approved a risk management plan and began to put into place a risk management process that included, among other things, subprocesses for identifying, analyzing, managing, and monitoring risk. It also defined and began implementing a governance structure to oversee and manage the process, and it maintains a risk database that is available to program management and staff. In February 2006, we reported that the risk management process detailed in the risk management plan was not being consistently applied across the program. In addition, we reported that thresholds for elevating risks to department executives were not being applied and that risk elevation was being left to the discretion of the Program Director. Since then, the program has provided training to its employees to ensure that they understood how to apply the risk management process. However, program officials told us that they have eliminated the thresholds for elevating risks beyond the US-VISIT Program Office. Further, no risks have been elevated to department executives since December 2005, and no specific guidance on when risks should be elevated beyond the US-VISIT Program Director is provided in the current risk management plan. Until the program office ensures that high risks are appropriately elevated, department executives will not have the information they need to make informed investment decisions. Recommendation 9: Define performance standards for US-VISIT that are measurable and reflect the limitations imposed on US-VISIT capabilities by relying on existing systems. The operational performance of US-VISIT depends largely on the performance of the existing systems that have been integrated to form it. This means that, for example, the availability of US-VISIT is constrained by the downtime of those existing systems. In February 2006, we reported that the program office had defined technical performance standards for several increments (e.g., Increments 1, 2B, and 2C), but these standards did not contain sufficient information to determine whether or not they reflected the limitations imposed by reliance on existing systems. Since then, program officials told us that they have not updated the performance standards for Increments 1-3 to reflect limitations imposed by relying on existing systems. As a result, the ability of these increments to meet performance requirements remains uncertain. Recently, the program office has developed requirements-related documentation on Unique Identity elements, including the iDSM. While this documentation specifies a requirement that the model be able to exchange information with external systems, and refers to this as a system constraint, it does not assess the quantitative impact that this reliance would impose on the system. Determining such impacts requires assessing factors such as the response time and throughput impacts of US-VISIT feeder systems on US-VISIT. Until the program defines performance standards that reflect the limitations of the existing systems upon which US-VISIT relies, the program lacks the ability to identify and effectively address performance shortfalls.
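To illustrate why performance standards need to account for reliance on existing systems, consider availability: if US-VISIT depends on several feeder systems in series, its achievable availability cannot exceed the product of theirs. The sketch below is a minimal illustration of that point; the feeder names and availability figures are invented and are not drawn from program documentation.

```python
# Illustrative only: the availability of a system that depends on several
# feeder systems in series. Feeder names and availability figures are invented.
from functools import reduce

feeder_availability = {
    "feeder_a": 0.995,
    "feeder_b": 0.990,
    "feeder_c": 0.985,
}

# If every feeder must be up for the dependent system to work, the upper bound
# on the dependent system's availability is the product of the feeder values.
composite = reduce(lambda acc, value: acc * value, feeder_availability.values(), 1.0)
print(f"Best-case composite availability: {composite:.2%}")  # about 97%
```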
Observation 1: Earned value management data on ongoing prime contract task orders show that cost and schedule baselines are being met. Earned value management (EVM) is a program management tool for measuring progress by comparing, during a given period of time, the value of work accomplished with the amount of work expected to be accomplished. This comparison permits performance to be evaluated on the basis of calculated variances from the planned (baselined) cost and schedule. EVM is both an industry-accepted practice and an OMB requirement. The program office requires its prime contractor to use EVM, and the data provided by the program office show that the cumulative cost and schedule variances for the overall prime contract and all 12 ongoing task orders are within an acceptable range of performance. Our analysis of baseline and actual performance data, using generally accepted earned value analysis techniques, shows that as of February 2007, the prime contractor had an overall positive cost variance for all task orders combined (i.e., was under budget) of about $17.1 million (about 7 percent of the $238.9 million worth of work to be completed) and a negative schedule variance for all task orders combined (i.e., had a schedule slip) of only about $1.3 million worth of work (less than 1 percent of the work scheduled for the period). The six-month (September 2006-February 2007) trend in cost and schedule variances for the prime contract is shown on the next two pages. Our analysis of these data for two specific task orders showed similar results. Task order 4: Program Level Engineering. This task order includes the development and maintenance of the US-VISIT target architecture, related standards, engineering plans, and guidance, as well as performance modeling and technology assessments. As of February 2007, it showed a positive cost variance (i.e., was under budget) of about $4.1 million (about 9.6 percent of the $42.7 million worth of work to be completed) and a negative schedule variance (i.e., had a schedule slip) of about $230,000 worth of work (less than 1 percent of the work scheduled for the period). Task order 7: IT Solutions Delivery. This task order contains several Unique Identity project subtasks, including (1) operation and maintenance of US-VISIT’s IDENT biometric identification system, (2) development and maintenance of the iDSM, (3) IDENT expansion to 10 prints, and (4) development and testing of enumeration functionality for the U.S. Citizenship and Immigration Services. As of February 2007, it showed a positive cost variance (i.e., was under budget) of about $747,000 (less than 2 percent of the $44.5 million worth of work to be completed) and a negative schedule variance (i.e., had a schedule slip) of about $384,000 worth of work (less than 1 percent of the work scheduled for the period). All of the above-cited variances are within the expected range of 10 percent.
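For reference, the cost and schedule variances discussed above follow the standard earned value calculations: cost variance is earned value minus actual cost, and schedule variance is earned value minus planned value. The sketch below reruns those calculations; the earned value and actual cost inputs are back-calculated to roughly reproduce the cited February 2007 totals and are illustrative rather than the contractor's reported figures.

```python
# Standard earned value variance arithmetic (figures in millions of dollars).
# The earned value and actual cost inputs are inferred for illustration so that
# the results roughly match the February 2007 totals cited above.

planned_value = 238.9   # budgeted cost of work scheduled for the period
earned_value = 237.6    # budgeted cost of work actually performed
actual_cost = 220.5     # actual cost of the work performed

cost_variance = earned_value - actual_cost        # positive means under budget
schedule_variance = earned_value - planned_value  # negative means behind schedule

print(f"Cost variance:     {cost_variance:+.1f} million "
      f"({cost_variance / planned_value:+.1%} of planned work)")
print(f"Schedule variance: {schedule_variance:+.1f} million "
      f"({schedule_variance / planned_value:+.1%} of planned work)")
```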
Observation 2: DHS continues to propose a heavy investment in program management-related activities without adequate justification or full disclosure. Program management is an important and integral aspect of any system acquisition program. Our recommendations to DHS aimed at strengthening US-VISIT program management are grounded in our research, OMB requirements, and recognized best practices relative to the importance of strong program management capabilities. The importance of this area, however, does not in and of itself justify the level of investment in such activities. Rather, investment in program management-related activities, like investment in any program capability, should be based on full disclosure of the scope, nature, size, and value of the program, and such investments should be justified in relation to the size and significance of the acquisition activities being performed. Earlier this year, we reported that the program’s investment in program management had risen significantly over the past 4 years, particularly in relation to the program’s declining level of new system development. The fiscal year 2007 expenditure plan proposes a level of investment in program management similar to that for 2006. At the same time, no explanation or justification of such a relatively large investment in program management-related funding has been provided. Specifically, the fiscal year 2003 expenditure plan provided $30 million for program management and operations, while the fiscal year 2006 plan provided $126 million for program management-related functions. At the same time, funds provided for new development fell from $325 million in 2003 to $93 million in 2006. Restated, program management costs represented about 9 percent of planned development costs in 2003 but 135 percent of planned development costs in 2006. This means that in 2006, for every dollar spent on new capabilities, $1.35 was spent on management. According to program officials, the fiscal year 2006 plan did not properly categorize proposed program management-related funding according to its intended use. They added that future expenditure plans would provide greater clarity into funds used for management versus development. The fiscal year 2007 expenditure plan proposes investing a comparable percentage of funding on management-related activities vis-à-vis new development. Specifically, our analysis shows that, for every dollar invested in new development, $1.25 is to be spent on management-related activities at either the program or project level. Charts showing this trend in management-related funding in relation to new development funding are on the following two pages. The fiscal year 2007 expenditure plan does not explain the reasons for the sizable investment in management-related activities or otherwise justify it on the basis of measurable expected value. Without disclosure and justification of its proposed investment in program management-related efforts, it is unclear that such a large amount of funding for these activities represents the best use of resources. Observation 3: Lack of a well-defined and justified exit solution introduces the risk of repeating failed and costly past exit efforts. The decision to invest in a system or system component should be based on a clear definition of what capabilities will be delivered to what stakeholders, according to what schedule, and at what cost. Moreover, it should be economically justified via reliable analysis showing that execution of the plan will produce mission value commensurate with expected costs and risks. The sea exit solution is to emulate the technology and operational plans adopted for air exit.
However, while US-VISIT has developed a high-level schedule for air exit, information supporting that schedule was not provided to GAO, and no other exit program plans are available that define what will be done, by what entities, and at what cost to define, acquire, deliver, deploy, and operate this capability, including plans describing expected system capabilities, identifying key stakeholder (e.g., airline) roles, responsibilities, and buy-in, coordinating and aligning with related programs, and allocating funding to activities. In addition, the exit schedule provided by the program office indicates that the air exit solution is to be fully implemented by June 2009, which is at least 6 months after the full deployment date provided in the expenditure plan. Further, available documentation (e.g., the expenditure plan) does not define what key terms, such as “full implementation” and “integrated,” mean; does not specify what the $20 million in fiscal year 2006 carryover funding will be spent on, and allocates the $7.3 million in fiscal year 2007 funding only to such broad categories of activities as project management, contractor services, and planning and design; and does not describe what has been done and what is planned to engage commercial airlines, even though the recently provided air exit schedule states that the department plans to issue a proposed federal regulation requiring airlines to participate in this effort by the end of calendar year 2007. Moreover, no analysis comparing the life cycle costs of the air exit solution to its expected benefits and risks is available. In particular, neither the 2007 expenditure plan nor any other program documentation describes measurable outcomes (benefits and results) that will result from an air exit solution. According to the expenditure plan, significant air exit planning and testing has been conducted over the past 3 years, and the air exit solution is based in part on these efforts. However, during this time we have continued to report on fundamental limitations in the definition and justification of those efforts. For example: In September 2003, we reported that DHS had not economically justified the initial US-VISIT increment (which was to include an exit capability at air and sea POEs) on the basis of benefits, costs, and risks. As a result, we recommended that DHS determine whether proposed incremental capabilities will produce value commensurate with program costs and risks. In May 2004, we reported that an exit capability (including biometric capture) was not deployed to the 80 air and 14 sea POEs as part of Increment 1 deployment in December 2003, as originally intended. Instead, a pilot exit capability was deployed to only one air and one sea POE on January 5, 2004. At that time, program officials told us that the capability was being piloted at only two locations because they had decided to evaluate other exit alternatives and planned to select an alternative for full deployment by December 31, 2004. In February 2005, we reported that DHS had not adequately planned for evaluating the air and sea exit alternatives because the scope and timeline of the pilot evaluations were compressed. We recommended that the program office reassess its plans for deploying an exit capability to ensure that the scope of the pilot provided an adequate evaluation of alternatives.
In February 2006, we reported that DHS had analyzed the costs, benefits, and risks for its air and sea exit capability, but the analyses did not demonstrate that the program was producing or would produce mission value commensurate with expected costs and risks, and the costs upon which the analyses were based were not reliable. We also raised questions about the adequacy of the program’s air exit pilot evaluation, noting that the results showed an average compliance of only 24 percent across the three alternatives. We concluded that until exit alternatives were adequately evaluated, the program office would not be in a position to select the best solution. We further noted that without an effective exit capability, the benefits and the mission value of US-VISIT would be greatly diminished. We did not make a recommendation to address this because we had already addressed the situation through a prior recommendation. In December 2006, we reported that US-VISIT officials had concluded that a biometric US-VISIT land exit capability could not be implemented without incurring a major impact on land POE facilities. We also reported that the land exit pilots had surfaced several performance problems, such as RFID devices not reading a majority of travelers’ tags during testing and multiple RFID devices installed on poles or structures over roads reading information from the same traveler tag. We recommended that DHS report to Congress information on the costs, benefits, and feasibility of deploying biometric and nonbiometric exit capabilities at land POEs. In February 2007, we reported that DHS had not adequately defined and justified its past investment in exit pilots and demonstration projects. We noted that the program had devoted considerable time and resources to exit but still did not have either an operational exit capability or a viable exit solution to deploy. Further, exit-related program documentation did not adequately define what work was to be done or what these efforts would accomplish, did not describe measurable outcomes from the pilot or demonstration efforts, and did not describe the related cost, schedule, and capability commitments that would be met. We recommended that planned expenditures be limited for exit pilots and demonstration projects until such investments are economically justified and until each investment has a well-defined evaluation plan. Notwithstanding these long-standing limitations in planning for and justifying its exit efforts, and notwithstanding that funding for exit-related efforts in US-VISIT expenditure plans for fiscal years 2003 through 2006 totals about $250 million, no operational exit capability exists. Unless the department better plans and justifies its new exit efforts, it runs the serious risk of repeating this past failure. US-VISIT’s prime contract cost and schedule metrics show that expectations are being met, according to available data, although the earned value management system on which the metrics are based has yet to be independently certified. Notwithstanding this caveat, such performance is a positive sign. However, the vast majority of the many management weaknesses raised in this briefing have been the subject of our prior US-VISIT reports and testimonies, and thus are not new. Accordingly, we have already made numerous recommendations to correct each weakness, as well as follow-on recommendations to increase DHS attention to and accountability for doing so.
Despite this, recurring legislative conditions associated with US-VISIT expenditure plans continue to be less than fully satisfied, and recommendations that we made 4 years ago are still not fully implemented. Exacerbating this situation is the fact that DHS did not satisfy two new legislative conditions associated with the fiscal year 2007 expenditure plan, and serious questions continue to exist about DHS’s justification for and readiness to invest current, and potentially future, fiscal year funding relative to an exit solution and program management-related activities. To accomplish our first objective, we reviewed documentation to determine whether an independent verification and validation agent was currently under contract; reviewed documentation to determine whether the expenditure plan received the required certification and approvals; reviewed US-VISIT’s strategic plan submission and compared it against federal legislation and guidelines, and against GAO strategic planning criteria, to determine whether US-VISIT’s strategic plan met best practices; and reviewed US-VISIT’s exit submission to determine the extent to which it described the exit capabilities to be deployed and included a schedule for deploying these capabilities. To accomplish our second objective, we reviewed relevant systems acquisition documentation, including the program’s process improvement plan, risk management plan, and configuration management plan; the program’s security plan, privacy impact assessment, and related documentation; and the program’s most recent draft human capital strategy and related documentation. We also reviewed the fiscal year 2007 plan to determine whether it disclosed key aspects of how the acquisition is being managed, including management areas that our prior reports on US-VISIT identified as important but missing (e.g., governance structure, organizational structure, human capital, systems configuration, and system capacity), and whether it fully disclosed system capabilities and related benefits as well as cost and schedule information. To accomplish our third objective, we reviewed the fiscal year 2007 plan and other available program documentation related to each of the following areas. In doing so, we examined completed and planned actions and steps, including program officials’ stated commitments to perform them. For earned value, we reported data provided by the contractor to US-VISIT that US-VISIT verifies. To assess its reliability, we reviewed relevant documentation and interviewed the system owner for the earned value data. More specifically, we addressed US-VISIT efforts to track and manage cost and schedule commitments by applying established earned value analysis techniques to baseline and actual performance data from cost performance reports; define and justify program management costs by reviewing and analyzing data on costs provided as part of the expenditure plan; and define and implement an exit strategy for air, sea, and land by reviewing and analyzing information provided as part of the expenditure plan. Additionally, in February 2007, we reported that the system that US-VISIT uses to manage its finances (U.S. Immigration and Customs Enforcement’s Federal Financial Management System (FFMS)) has reliability issues. In light of these issues, the US-VISIT Budget Office tracks program obligations and expenditures separately using a spreadsheet and compares this spreadsheet to the information in FFMS. Based on a review of this spreadsheet, there is reasonable assurance that the US-VISIT budget numbers being reported by FFMS are accurate.
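As a rough illustration of the comparison described above, the sketch below reconciles amounts recorded in two sources and flags differences beyond a tolerance. The account names and figures are invented; this is not the Budget Office's actual spreadsheet or FFMS data.

```python
# Illustrative reconciliation of obligation amounts recorded in two sources.
# Account names and amounts (in millions) are invented for this sketch.

budget_office_tracking = {"account_a": 5.0, "account_b": 12.0, "account_c": 8.5}
system_of_record = {"account_a": 5.0, "account_b": 11.6, "account_c": 8.5}

tolerance = 0.1  # acceptable rounding difference, in millions

for account in sorted(set(budget_office_tracking) | set(system_of_record)):
    tracked = budget_office_tracking.get(account, 0.0)
    reported = system_of_record.get(account, 0.0)
    status = "OK" if abs(tracked - reported) <= tolerance else "DISCREPANCY"
    print(f"{account}: tracked {tracked:.1f} vs reported {reported:.1f} -> {status}")
```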
For DHS-provided data that our reporting commitments did not permit us to substantiate, we have made appropriate attribution indicating the data’s source. To assess the reliability of US-VISIT’s electronic document repository, we reviewed relevant documentation and talked with an agency official about data quality control procedures. We determined the data were sufficiently reliable for the purposes of this report. We conducted our work at US-VISIT program offices in Arlington, Virginia, from March 2007 through June 2007, in accordance with generally accepted government auditing standards.
Homeland Security: DHS Enterprise Architecture Continues to Evolve But Improvements Needed. GAO-07-564. Washington, D.C.: May 9, 2007.
Homeland Security: US-VISIT Program Faces Operational, Technological, and Management Challenges. GAO-07-632T. Washington, D.C.: March 20, 2007.
Homeland Security: US-VISIT Has Not Fully Met Expectations and Longstanding Program Management Challenges Need to Be Addressed. GAO-07-499T. Washington, D.C.: February 16, 2007.
Homeland Security: Planned Expenditures for U.S. Visitor and Immigrant Status Program Need to Be Adequately Defined and Justified. GAO-07-278. Washington, D.C.: February 14, 2007.
Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry. GAO-07-378T. Washington, D.C.: January 31, 2007.
Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry. GAO-07-248. Washington, D.C.: December 6, 2006.
Homeland Security: Contract Management and Oversight for Visitor and Immigrant Status Program Need to Be Strengthened. GAO-06-404. Washington, D.C.: June 9, 2006.
Homeland Security: Progress Continues, but Challenges Remain on Department’s Management of Information Technology. GAO-06-598T. Washington, D.C.: March 29, 2006.
Homeland Security: Recommendations to Improve Management of Key Border Security Program Need to Be Implemented. GAO-06-296. Washington, D.C.: February 14, 2006.
Homeland Security: Visitor and Immigrant Status Program Operating, but Management Improvements Are Still Needed. GAO-06-318T. Washington, D.C.: January 25, 2006.
Information Security: Department of Homeland Security Needs to Fully Implement Its Security Program. GAO-05-700. Washington, D.C.: June 17, 2005.
Information Technology: Customs Automated Commercial Environment Program Progressing, but Need for Management Improvements Continues. GAO-05-267. Washington, D.C.: March 14, 2005.
Homeland Security: Some Progress Made, but Many Challenges Remain on U.S. Visitor and Immigrant Status Indicator Technology Program. GAO-05-202. Washington, D.C.: February 23, 2005.
Border Security: State Department Rollout of Biometric Visas on Schedule, but Guidance Is Lagging. GAO-04-1001. Washington, D.C.: September 9, 2004.
Border Security: Joint, Coordinated Actions by State and DHS Needed to Guide Biometric Visas and Related Programs. GAO-04-1080T. Washington, D.C.: September 9, 2004.
Homeland Security: First Phase of Visitor and Immigration Status Program Operating, but Improvements Needed. GAO-04-586. Washington, D.C.: May 11, 2004.
Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-04-569T. Washington, D.C.: March 18, 2004.
Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-03-1083. Washington, D.C.: September 19, 2003.
Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning. GAO-03-563. Washington, D.C.: June 9, 2003.
The US-VISIT program consists of nine organizations and uses contractor support services in several areas. The roles and responsibilities of the nine organizations include the following:
Chief Strategist – responsible for developing and maintaining the strategic vision and related documentation, transition plan, and business case.
Budget and Financial Management – responsible for establishing the program’s cost estimates; analysis; and expenditure management policies, processes, and procedures that are required to implement and support the program by ensuring proper fiscal planning and execution of the budget and expenditures.
Mission Operations Management – responsible for developing business and operational requirements based on strategic direction provided by the Chief Strategist.
Outreach Management – responsible for enhancing awareness of US-VISIT requirements among foreign nationals, key domestic audiences, and internal stakeholders by coordinating outreach to media, third parties, key influencers, Members of Congress, and the traveling public.
Information Technology Management – responsible for developing technical requirements based on strategic direction provided by the Chief Strategist and business requirements developed by Mission Operations Management.
Implementation Management – responsible for developing accurate, measurable schedules and cost estimates for the delivery of mission systems and capabilities.
Acquisition and Program Management – responsible for establishing and managing the execution of program acquisition and management policies, plans, processes, and procedures.
Administration and Training – responsible for developing and administering a human capital plan that includes recruiting, hiring, training, and retaining a diverse workforce with the competencies necessary to accomplish the mission.
Facilities and Engineering Management – responsible for establishing facilities and environmental policies, procedures, processes, and guidance required to implement and support the program office.
The program uses contractor support services in the following six subject matter areas:
Facilities and Infrastructure – provides the infrastructure and facilities support necessary for current and anticipated future staff for task orders awarded under the prime contract.
Program-Level Management – defines the activities required to support the prime contractor’s program management office, including quality management, task order control, acquisition support, and integrated planning and scheduling.
Program-Level Engineering – assures integration across incremental development of US-VISIT systems and maintains interoperability and performance goals.
Data Management Support – analyzes data for errors and omissions, corrects data, reports changes to the appropriate system of record owners, and provides reports.
Data Management and Governance – provides support in the implementation of data management architecture and transition and sequencing plans; conducts an assessment of the current data governance structure; and provides a recommendation for the future data governance structure, including a data governance plan.
Mission Operations Data Integrity Improvements – determines possible ways to automate some of the data feeds from legacy systems, making the data more reliable.
Attachment 4: Detailed Description of Increments and Component Systems
ADIS also provides the ability to run queries on foreign nationals who have entry information but no corresponding exit information. ADIS receives status information on foreign nationals from the Computer Linked Application Information Management System and the Student and Exchange Visitor Information System. The exit process includes the carriers’ electronic submission of departure manifest data to APIS. This biographic information is passed to ADIS, where it is matched against entry information. The entry process begins when a traveler arrives for inspection. Travelers subject to US-VISIT are processed at secondary inspection, rather than at primary inspection. Inspectors’ workstations use a single screen, which eliminates the need to switch between the TECS and IDENT screens. Biographic information is obtained when the machine-readable zone of the travel document is swiped. If visa information about the traveler exists in the Datashare database, it is used to populate the form. Fields that cannot be populated electronically are manually entered. A copy of the completed form is printed and given to the traveler for use upon exit. No electronic exit information is captured. ADIS stores noncitizen traveler arrival and departure data received from air and sea carriers; arrival data captured by CBP officers at air and sea POEs; Form I-94 issuance data captured by CBP officers at Increment 2B land POEs; and status update information provided by the Student and Exchange Visitor Information System (SEVIS) and the Computer Linked Application Information Management System (CLAIMS 3) (described below). ADIS provides record matching, query, and reporting functions. The passenger processing component of the Treasury Enforcement Communications System (TECS) includes two systems: the Advance Passenger Information System (APIS), which captures arrival and departure manifest information provided by air and sea carriers, and the Interagency Border Inspection System (IBIS), which maintains lookout data and interfaces with other agencies’ databases. CBP officers use these data as part of the admission process. The results of the admission decision are recorded in TECS and ADIS. IDENT stores biometric data on foreign visitors, including Federal Bureau of Investigation information on all known and suspected terrorists, selected wanted persons (foreign-born, unknown place of birth, previously arrested by DHS), and previous criminal histories for high-risk countries; DHS Immigration and Customs Enforcement information on deported felons and sexual registrants; and DHS information on previous criminal histories and previous IDENT enrollments. US-VISIT also exchanges biographic information with other DHS systems, including SEVIS and CLAIMS 3: SEVIS contains information on foreign students, and CLAIMS 3 contains information on foreign nationals who request benefits, such as change of status or extension of stay. Some of the systems involved in US-VISIT, such as IDENT and AIDMS, are managed by the program office, while others are managed by other organizational entities within DHS. For example, TECS is managed by CBP, SEVIS is managed by Immigration and Customs Enforcement, CLAIMS 3 is under United States Citizenship and Immigration Services, and ADIS is owned by US-VISIT but managed by CBP.
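The record matching function described in this attachment, pairing entry records with corresponding exit records and flagging travelers with no recorded departure, can be sketched in a few lines. The record layout and sample data below are hypothetical and do not reflect ADIS's actual schema or matching rules.

```python
# Hypothetical sketch of matching entry records to exit records and flagging
# travelers with no recorded departure; not ADIS's actual schema or rules.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class CrossingRecord:
    traveler_id: str      # stand-in for whatever identifier links the records
    crossing_date: date

entries = [
    CrossingRecord("A123", date(2007, 1, 10)),
    CrossingRecord("B456", date(2007, 2, 2)),
]
exits = [
    CrossingRecord("A123", date(2007, 1, 24)),
]

exited_ids = {record.traveler_id for record in exits}
no_recorded_exit = [entry for entry in entries if entry.traveler_id not in exited_ids]

for record in no_recorded_exit:
    print(f"Entry on {record.crossing_date} for {record.traveler_id} has no matching exit record")
```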
The Department of Homeland Security (DHS) has established a program known as U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) to collect, maintain, and share information, including biometric identifiers, on certain foreign nationals who travel to the United States. By congressional mandate, DHS is to develop and submit an expenditure plan for US-VISIT that satisfies certain conditions, including review by GAO. GAO reviewed the plan to (1) determine whether the plan satisfied these conditions, (2) follow up on certain recommendations related to the program, and (3) provide any other observations. To address the mandate, GAO assessed plans and related documentation against federal guidelines and industry standards and interviewed the appropriate DHS officials. The US-VISIT expenditure plan, including related program documentation and program officials' statements, satisfies or partially satisfies some but not all of the legislative conditions required by the Department of Homeland Security Appropriations Act, 2007. For example, the department satisfied the condition that it provide certification that an independent verification and validation agent is currently under contract for the program and partially satisfied the condition that US-VISIT comply with DHS's enterprise architecture. However, the department did not satisfy the conditions that the plan include a comprehensive US-VISIT strategic plan and a complete schedule for biometric exit implementation. DHS partially implemented GAO's oldest open recommendations pertaining to US-VISIT. For example, while the department partially completed the recommendation that it develop and begin implementing a US-VISIT system security plan, the scope of the plan does not extend to all the systems that make up US-VISIT. In addition, while the expenditure plan provides some information on US-VISIT's cost, schedule, and benefits associated with planned capabilities, the information provided is not sufficiently defined and detailed to address GAO's recommendation and provide a reasonable basis for measuring progress and holding the department accountable for results. GAO identified several additional observations. On the positive side, DHS data show that the US-VISIT prime contract is being executed according to cost and schedule expectations. However, DHS continues to propose disproportionately heavy investment in US-VISIT program management-related activities without adequate justification or full disclosure. Further, DHS continues to propose spending tens of millions of dollars on US-VISIT exit projects that are not well-defined, planned, or justified on the basis of costs, benefits, and risks. Overall, the US-VISIT fiscal year 2007 expenditure plan and other available program documentation do not provide a sufficient basis for effective program oversight and accountability. Both the legislative conditions and GAO's open recommendations are aimed at establishing such a basis, and thus they need to be addressed quickly and completely. However, despite ample opportunity, DHS has not addressed them, and the reasons why are unclear.
Until these recommendations are addressed, GAO does not believe that the program's disproportionate investment in management-related activities represents a prudent and warranted course of action, or that the newly launched exit endeavor can be expected to produce results different from past results: namely, no operational exit solution despite expenditure plans allocating about a quarter of a billion dollars to various exit activities.
Demand for GAO’s analysis and advice remains strong across the Congress. During the past 3 years, GAO has received requests for work or congressional mandates from all of the standing committees of the House and the Senate and over 80 percent of their subcommittees. In fiscal year 2007, GAO received over 1,200 requests for studies. This demand reflects both the high quality of work that the Congress has come to expect from GAO and the difficult challenges facing the Congress, for which it believes objective information and professional advice from GAO are instrumental. Not only has demand for our work continued to be strong, but it is also steadily increasing. The total number of requests in fiscal year 2007 was up 14 percent from the preceding year. This trend has accelerated in fiscal year 2008: requests rose 26 percent in the first quarter and are up 20 percent at the mid-point of this fiscal year from comparable periods in 2007. As a harbinger of future congressional demand, potential mandates for GAO work included in proposed legislation as of February 2008 totaled over 600, an 86 percent increase over a similar period in the 109th Congress. The following examples illustrate this demand:
Over 160 new mandates for GAO reviews were embedded in law, including the Consolidated Appropriations Act of 2008, the Defense Appropriations Act of 2008, and 2008 legislation implementing the 9/11 Commission recommendations;
New recurring responsibilities were given to GAO under the Honest Leadership and Open Government Act of 2007 to report annually on lobbyists’ compliance with registration and reporting requirements; and
Expanded bid protest provisions applied to GAO that (1) allow federal employees to file protests concerning competitive sourcing decisions (A-76), (2) establish exclusive bid protest jurisdiction at GAO over issuance of task and delivery orders valued at over $10 million, and (3) provide GAO bid protest jurisdiction over contracts awarded by the Transportation Security Administration.
Further evidence of GAO’s help in providing important advice to the Congress is found in the increased number of GAO appearances at hearings on topics of national significance and keen interest (see table 1). In fiscal year 2007 GAO testified at 276 hearings, 36 more than in fiscal year 2006. The fiscal year 2007 figure was an all-time high for GAO on a per capita basis and among the highest levels of demand for GAO testimony in the last 25 years. This increased tempo of GAO appearances at congressional hearings has continued, with GAO already having appeared at 140 hearings this fiscal year, as of April 4th. Our FTE level in fiscal year 2008 is 3,100, the lowest level ever for GAO. We are proud of the results we deliver to the Congress and our nation at this level, but with a slightly less than 5 percent increase in our FTEs, to 3,251, we could better meet increased congressional requests for GAO assistance. While this increase would not bring GAO back to the 3,275 FTE level of 10 years ago, it would allow us to respond to the increased workload facing the Congress. GAO staff are stretched in striving to meet the Congress’s increasing needs. People are operating at a pace that cannot be sustained over the long run. I am greatly concerned that if we try to provide more services with the existing level of resources, the high quality of our work could be diminished in the future. But I will not allow this to occur. That outcome is in neither the Congress’s nor GAO’s interest.
One consequence of our demand-versus-supply situation is the growing list of congressional requests that we are not able to promptly staff. While we continue to work with congressional committees to identify their areas of highest priority, we remain unable to staff important requests. This limits our ability to provide timely advice to congressional committees dealing with certain issues that they have slated for oversight, including:
Safety concerns, such as incorporating behavior-based security programs into TSA’s aviation passenger screening process, updating our 2006 study of FDA’s post-market drug safety system, and reviewing state investigations of nursing home complaints;
Operational improvements, such as the effectiveness of Border Security checkpoints to identify illegal aliens, technical and programmatic challenges in DOD’s space radar programs, oversight of federally funded highway and transit projects, and the impact of the 2005 Bankruptcy Abuse Prevention and Consumer Protection Act; and
Opportunities to increase revenues or stop wasteful spending, including reducing potential overstatements of charitable deductions and curbing potential overpayments and contractor abuses in food assistance programs.
Our fiscal year 2009 budget request seeks to better position us to maintain our high level of support for the Congress and better meet increasing requests for help. This request would help replenish our staffing levels at a time when almost 20 percent of all GAO staff will be eligible for retirement. Accordingly, our fiscal year 2009 budget request seeks funds to ensure that we have the increased staff capacity to effectively support the Congress’s agenda, cover pay and uncontrollable inflationary cost increases, and undertake critical investments, such as technology improvements. GAO is requesting budget authority of $545.5 million to support a staff level of 3,251 FTEs needed to serve the Congress. This is a fiscally prudent request of 7.5 percent over our fiscal year 2008 funding level, as illustrated in table 2. Our request includes about $538.1 million in direct appropriations and authority to use about $7.4 million in offsetting collections. This request also reflects a reduction of about $6 million in nonrecurring fiscal year 2008 costs. Our request includes funds needed to:
increase our staffing level by less than 5 percent to help us provide more timely responses to congressional requests for studies;
enhance employee recruitment, retention, and development programs, which increase our competitiveness for a talented workforce;
recognize dedicated contributions of our hardworking staff through awards and recognition programs;
address critical human capital components, such as knowledge capacity building, succession planning, and staff skills and competencies;
pursue critical structural and infrastructure maintenance and restore program funding levels to regain our lost purchasing power; and
undertake critical initiatives to increase our productivity.
Key elements of our proposed budget increase are outlined below.
Pay and inflationary cost increases: We are requesting funds to cover anticipated pay and inflationary cost increases resulting primarily from annual across-the-board and performance-based increases and annualization of prior fiscal year costs. These costs also include uncontrollable, inflationary increases imposed by vendors as the cost of doing business.
GAO generally loses about 10 percent of its workforce annually to retirements and attrition.
This annual loss places GAO under continual pressure to replace staff capacity and renew institutional memory. In fiscal year 2007, we were able to replace only about half of our staff losses. In fiscal year 2008, we plan to replace only departing staff. Our proposed fiscal year 2009 staffing level of 3,251 FTEs would restore our staff capacity through a modest FTE increase, which would allow us to initiate congressional requests in a more timely manner and begin reducing the backlog of pending requests.
Critical technology and infrastructure improvements: We are requesting funds to undertake critical investments that would allow us to implement technology improvements, as well as streamline and reengineer work processes to enhance the productivity and effectiveness of our staff; make essential investments that have been deferred year after year but cannot continue to be delayed; and implement responses to changing federal conditions.
Human capital initiatives and additional legislative authorities: GAO is working with the appropriate authorization and oversight committees to make reforms that are designed to benefit our employees and to provide a means to continue to attract, retain, and reward a top-flight workforce, as well as help us improve our operations and increase administrative efficiencies. Among the requested provisions, GAO supports the adoption of a “floor guarantee” for future annual pay adjustments similar to the agreement governing 2008 pay adjustments reached with the GAO Employees Organization, IFPTE. The floor guarantee reasonably balances our commitment to performance-based pay with an appropriate degree of predictability and equity for all GAO employees. At the invitation of the House federal workforce subcommittee, we also have engaged in fruitful discussions about a reasonable and practical approach should the Congress decide to include a legislative provision to compensate GAO employees who did not receive the full base pay increases of 2.6 percent in 2006 and 2.4 percent in 2007. We appreciate their willingness to provide us with the necessary legal authorities to address this issue and look forward to working together with you and our oversight committee to obtain necessary funding to cover these payments. The budget authority to cover the future impact of these payments is not reflected in this budget request. As you know, on September 19, 2007, our Band I and Band II Analysts, Auditors, Specialists, and Investigators voted to be represented by the GAO Employees Organization, IFPTE, for the purpose of bargaining with GAO management on various terms and conditions of employment. GAO management is committed to working constructively with employee union representatives to forge a positive labor-management relationship. Since September, GAO management has taken a variety of steps to ensure it is following applicable labor relations laws and has the resources in place to work effectively and productively in this new union environment. Our efforts have involved delivering specialized labor-management relations training; establishing a new Workforce Relations Center to provide employee and labor relations advice and services; hiring a Workforce Relations Center director, who also serves as our chief negotiator in collective bargaining deliberations; and postponing work on several initiatives regarding our current performance and pay programs.
In addition, we routinely notify union representatives of meetings that may qualify as formal discussions, so that a representative of the IFPTE can attend the meeting. We also regularly provide the IFPTE with information about projects involving changes to terms and conditions of employment over which the union has the right to bargain. We are pleased that GAO and the IFPTE reached a prompt agreement on 2008 pay adjustments. The agreement was overwhelmingly ratified by bargaining unit members on February 14, 2008, and we have applied the agreed-upon approach to the 2008 adjustments to all GAO staff, with the exception of the SES and Senior Level staff, regardless of whether they are represented by the union. In fiscal year 2007, we addressed many difficult issues confronting the nation, including the conflict in Iraq, domestic disaster relief and recovery, national security, and criteria for assessing lead in drinking water. For example, GAO has continued its oversight on issues directly related to the Iraq war and reconstruction, issuing 20 products in fiscal year 2007 alone—including 11 testimonies to congressional committees. These products covered timely issues such as the status of Iraqi government actions, the accountability of U.S.-funded equipment, and various contracting and security challenges. GAO’s work spans the security, political, economic, and reconstruction prongs of the U.S. national strategy in Iraq. Highlights of the outcomes of GAO work are outlined below. See appendix II for a detailed summary of GAO’s annual measures and targets. Additional information on our performance results can be found in Performance and Accountability Highlights Fiscal Year 2007 at www.gao.gov. GAO’s work in fiscal year 2007 generated $45.9 billion in financial benefits. These financial benefits, which resulted primarily from actions agencies and the Congress took in response to our recommendations, included about $21.1 billion resulting from changes to laws or regulations, $16.3 billion resulting from improvements to core business processes, and $8.5 billion resulting from agency actions based on our recommendations to improve public services. Many of the benefits that result from our work cannot be measured in dollar terms. During fiscal year 2007, we recorded a total of 1,354 other improvements in government resulting from GAO work. For example, in 646 instances federal agencies improved services to the public, in 634 other cases agencies improved core business processes or governmentwide reforms were advanced, and in 74 instances information we provided to the Congress resulted in statutory or regulatory changes. These actions spanned the full spectrum of national issues, from strengthened screening procedures for all VA health care practitioners to improved information security at the Securities and Exchange Commission. See table 4 for additional examples. In January 2007, we also issued our High-Risk Series: An Update, which identifies federal areas and programs at risk of fraud, waste, abuse, and mismanagement and those in need of broad-based transformations. Issued to coincide with the start of each new Congress, our high-risk list focuses on major government programs and operations that need urgent attention. Overall, this program has served to help resolve a range of serious weaknesses that involve substantial resources and provide critical services to the public. GAO added the 2010 Census as a high-risk area in March 2008. 
GAO’s achievements are of great service to the Congress and American taxpayers. With your support, we will be able to continue to provide the high level of performance that has come to be expected of GAO. GAO exists to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. (A graphic here presented GAO’s strategic plan framework, organized around four goals: provide timely, quality service to the Congress and the federal government to address current and emerging challenges to the well-being and financial security of the American people; provide timely, quality service to respond to changing security threats and the challenges of global interdependence; help transform the federal government’s role and how it does business to meet 21st century challenges; and maximize the value of GAO by being a model federal agency and a world-class professional services organization.) Our employee feedback survey asks staff how often the following occurred in the last 12 months: (1) my job made good use of my skills, (2) GAO provided me with opportunities to do challenging work, and (3) in general, I was utilized effectively.
The budget authority GAO is requesting for fiscal year 2009, $545.5 million, represents a prudent increase of 7.5 percent to support the Congress as it confronts a growing array of difficult challenges. GAO will continue to reward the confidence the Congress places in us by providing a strong return on this investment. In fiscal year 2007, for example, in addition to delivering hundreds of reports and briefings to aid congressional oversight and decision making, our work yielded: financial benefits, such as increased collection of delinquent taxes and civil fines, totaling $45.9 billion, a return of $94 for every dollar invested in GAO; over 1,300 other improvements in government operations spanning the full spectrum of national issues, ranging from helping the Congress create a center to better locate children after disasters, to strengthening computer security over sensitive government records and assets, to encouraging more transparency over nursing home fire safety, to strengthening screening procedures for VA health care practitioners; and expert testimony at 276 congressional hearings to help the Congress address a variety of issues of broad national concern, such as the conflict in Iraq and efforts to ensure drug and food safety.
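The return figure cited above is simple arithmetic: dividing the $45.9 billion in financial benefits by a return of $94 per dollar implies a fiscal year 2007 funding level of roughly $490 million. The sketch below reruns that arithmetic and checks that the three benefit categories reported elsewhere in this statement sum to the $45.9 billion total; it uses only figures stated here.

```python
# Rerunning the arithmetic behind the figures cited in this statement.
financial_benefits_billions = 45.9   # total financial benefits, fiscal year 2007
return_per_dollar = 94               # reported return per dollar invested in GAO

implied_budget_millions = financial_benefits_billions * 1000 / return_per_dollar
print(f"Implied fiscal year 2007 funding level: about ${implied_budget_millions:.0f} million")

# The three reported benefit categories should sum to the reported total.
categories_billions = {"laws or regulations": 21.1,
                       "core business processes": 16.3,
                       "public services": 8.5}
print(f"Benefit categories sum to ${sum(categories_billions.values()):.1f} billion "
      f"of the ${financial_benefits_billions} billion reported")
```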
As you know, Mr. Chairman, the decennial census is a constitutionally mandated enterprise critical to our nation. Census data are used to apportion seats and redraw congressional districts, and to help allocate over $400 billion in federal aid to state and local governments each year. We added the 2010 Census to our list of high-risk areas in March 2008, because improvements were needed in the Bureau’s management of IT systems, the reliability of handheld computers (HHC) that were designed in part to collect data for address canvassing, and the quality of the Bureau’s cost estimates. Compounding the risk was that the Bureau canceled a full dress rehearsal of the census that was scheduled in 2008, in part because of performance problems with the HHCs during the address canvassing portion of the dress rehearsal, which included freeze-ups and unreliable data transmissions. In response to our findings and recommendations, the Bureau has strengthened its risk management efforts, including the development of a high-risk improvement plan that described the Bureau’s strategy for managing risk and key actions to address our concerns. Overall, since March 2008, the Bureau has made commendable progress in getting the census back on track, but still faces a number of challenges moving forward. One of the Bureau’s long-standing challenges has been building an accurate address file, especially locating unconventional and hidden housing units, such as converted basements and attics. For example, as shown in figure 1, what appears to be a single-family house could contain an apartment, as suggested by its two doorbells. The Bureau has trained address listers to look for extra mailboxes, utility meters, and other signs of hidden housing units, and has developed training guides for 2010 to help enumerators locate hidden housing. Nonetheless, decisions on what is a habitable dwelling are often difficult to make—what is habitable to one worker may seem uninhabitable to another. If the address lister thought the house in figure 1 was a single-family home, but a second family was living in the basement, the second family is at greater risk of being missed by the census. Conversely, if the lister thought a second family could be residing in the home, when in fact it was a single-family house, two questionnaires would be mailed to the home and costly nonresponse follow-up visits could ensue in an effort to obtain a response from a phantom housing unit. Under the Local Update of Census Addresses (LUCA) program, the Bureau partners with state, local, and tribal governments, tapping into their knowledge of local populations and housing conditions in order to secure a more complete count. Between November 2007 and March 2008, over 8,000 state, local, and tribal governments provided approximately 42 million addresses for potential addition, deletion, or other actions. Of those submissions, approximately 36 million were processed as potential address additions to the master address file (MAF)—or what the Bureau considers “adds.” According to Bureau officials, one reason LUCA is important is that local government officials may be better positioned than the Bureau to identify unconventional and hidden housing units due to their knowledge of particular neighborhoods, or because of their access to administrative records in their jurisdictions. For example, local governments may have alternate sources of address information (such as utility bills, tax records, information from housing or zoning officials, or 911 emergency systems).
In addition, according to Bureau officials, providing local governments with opportunities to actively participate in the development of the MAF can enhance local governments’ understanding of the census and encourage them to support subsequent operations. The preliminary results of address canvassing show that the Bureau added relatively few of the address updates submitted for inclusion in the MAF through LUCA. Of approximately 36 million addresses submitted, about 27.7 million were already in the MAF. Around 8.3 million updates were not in the MAF and needed to be field-verified during address canvassing. Of these, about 5.5 million were not added to the MAF because they did not exist, were a duplicate address, or were nonresidential. Address canvassing confirmed the existence of around 2.4 million addresses submitted by LUCA participants that were not already in the MAF (or about 7 percent of the 36 million proposed additions). Bureau officials have indicated that they began shipping detailed feedback, which includes information on which addresses were accepted, to eligible LUCA participants on October 8, 2009. On November 1, 2009, the Office of Management and Budget is scheduled to open the LUCA appeals office that will enable LUCA participants who disagree with the Bureau’s feedback to challenge the Bureau’s decisions. This appeals process allows governments to provide evidence of the existence of addresses that the Bureau missed. If the government’s appeal is sustained, then the Bureau will include those addresses in later census operations and enumerate them if they are located in the field. The LUCA program is labor-intensive for both localities and the Bureau because it involves data reviews, on-site verification, quality control procedures, and other activities, but it produced marginal returns. While these were unique additions to the MAF that may not have been identified in any other MAF-building operation, they were costly additions nonetheless. As a result, as the Bureau prepares for the 2020 Census, it will be important for it to explore options that help improve the efficiency of LUCA, especially by reducing the number of duplicate and nonexistent addresses submitted by localities. The Bureau conducted address canvassing from March to July 2009. During that time, about 135,000 address listers went door to door across the country, comparing the housing units they saw on the ground to what was listed in the database of their HHCs. Depending on what they observed, listers could add, delete, or update the location of housing units. Although the projected length of the field operation ranged from 9 to 14 weeks, most early opening local census offices completed the effort in less than 10 weeks. Moreover, the few areas that did not finish early were delayed by unusual circumstances such as access issues created by flooding. The testing and improvements the Bureau made to the reliability of the HHCs prior to the start of address canvassing, including a final field test that was added to the Bureau’s preparations in December 2008, played a key role in the pace of the operation; but other factors, once address canvassing was launched, were important as well, including (1) the prompt resolution of problems with the HHCs as they occurred and (2) lower-than-expected employee turnover. With respect to the prompt resolution of problems, the December 2008 field test indicated that the more significant problems affecting the HHCs had been resolved.
However, various glitches continued to affect the HHCs in the first month of address canvassing. For example, we were informed by listers or crew leaders in 14 early opening local census offices that they had encountered problems with transmissions, freeze-ups, and other issues. Moreover, in 10 early opening local census offices we visited, listers said they had problems using the Global Positioning System function on their HHCs to precisely locate housing units. When such problems occurred, listers called their crew leaders and/or the Bureau’s help desk to resolve the problems. When the issues were more systemic in nature, such as software errors, the Bureau was able to fix them quickly using software patches. Moreover, to obtain an early warning of trouble, the Bureau monitored key indicators of the performance of the HHCs, such as the number of successful and failed HHC transmissions. This approach proved useful when Bureau quality control field staff were alerted to the existence of a software problem after they noticed that the devices were taking a long time to close out completed assignment areas. The Bureau also took steps to address procedural issues. For example, in the course of our field observations, we noticed that in several locations listers were not always adhering to training for identifying hidden housing units. Specifically, listers were instructed to knock on every door and ask, “Are there any additional places in this building where people live or could live?” However, we found that listers did not always ask this question. On April 28, 2009, we discussed this issue with senior Bureau officials. The Bureau, in turn, transmitted a message to its field staff emphasizing the importance of following training and querying residents if possible. Lower-than-expected attrition rates and listers’ availability to work more hours than expected also contributed to the Bureau’s ability to complete the Address Canvassing operation ahead of schedule. For example, the Bureau had planned for 25 percent of new hires to quit before, during, or soon after training; however, the national average was 16 percent. Bureau officials said that not having to replace listers with inexperienced staff accelerated the pace of the operation. Additionally, the Bureau assumed that employees would be available 18.5 hours a week. Instead, they averaged 22.3 hours a week. (A simple illustration of the combined effect appears below.) The Bureau’s address list at the start of address canvassing consisted of 141.8 million housing units. Listers added around 17 million addresses and marked about 21 million for deletion because, for example, the address did not exist. All told, listers identified about 4.5 million duplicate addresses, 1.2 million nonresidential addresses, and about 690,000 addresses that were uninhabitable structures. Importantly, these preliminary results represent actions taken during the production phase of address canvassing and do not reflect actual changes made to the Bureau’s master address list, because the actions are first subject to a quality control check and then processed by the Bureau’s Geography Division. The preliminary analysis of addresses flagged for add and delete shows that the results of the operation (prior to quality control) were generally consistent with the results of address canvassing for the 2008 dress rehearsal. Table 1 compares the add and delete actions for the two operations.
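Returning to the staffing figures above: one way to see why lower attrition and longer workweeks accelerated the operation is to compare planned and actual labor capacity. The sketch below is a simplified, illustrative calculation using only the planning assumptions and actual averages cited in this testimony; it is not the Bureau's staffing model.

```python
# Illustrative comparison of planned vs. actual lister capacity, using figures cited above.
# This is a simplified model, not the Bureau's staffing methodology.
planned_attrition = 0.25    # the Bureau planned for 25 percent of new hires to quit
actual_attrition = 0.16     # the national average attrition actually observed
planned_hours = 18.5        # assumed weekly availability per lister
actual_hours = 22.3         # observed weekly average

# Relative weekly labor capacity per trainee, actual vs. plan (holding hires constant).
capacity_ratio = ((1 - actual_attrition) * actual_hours) / ((1 - planned_attrition) * planned_hours)
print(f"Actual weekly capacity per trainee was roughly {capacity_ratio:.0%} of plan")  # about 135%
# Roughly a third more productive hours than planned, which helps explain why most
# early opening offices finished address canvassing in under 10 weeks.
```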
According to the Bureau’s preliminary analysis, the estimated cost for address canvassing field operations was $444 million, or $88 million (25 percent) more than its initial budget of $356 million. As shown in table 2, according to the Bureau, the cost overruns resulted from several factors. One such factor was that the address canvassing cost estimate was not comprehensive, which resulted in a cost increase of $41 million. The Bureau underestimated the initial address canvassing workload in its fiscal year 2009 budget by 11 million addresses. Further, the additional 11 million addresses increased the Bureau’s quality control workload, where the Bureau verifies certain actions taken to correct the address list. Specifically, the Bureau did not fully anticipate the impact these additional addresses would have on the quality control workload, and therefore did not revise its cost estimate accordingly. Moreover, under the Bureau’s procedures, addresses that failed quality control would need to be recanvassed, but the Bureau’s cost model did not account for the extra cost of recanvassing addresses. As a result, the Bureau underestimated its quality control workload by 26 million addresses, which resulted in $34 million in additional costs, according to the Bureau. Bringing aboard more staff than was needed also contributed to the cost overruns. For example, according to the Bureau’s preliminary analysis, training additional staff accounted for about $7 million in additional costs. Bureau officials attributed the additional training cost to inviting additional candidates to initial training based on past experience with anticipated no-show and dropout rates, even though (1) the Bureau’s staffing plans already accounted for the possibility of high turnover and (2) the additional employees were not included in the cost estimate or budget. The largest census field operation will be next summer’s nonresponse follow-up, when the Bureau is to go door to door in an effort to collect data from households that did not mail back their census questionnaire. Based on the expected mail response rate, the Bureau estimates that over 570,000 enumerators will need to be hired for that operation. To better manage the risk of staffing difficulties while simultaneously controlling costs, several potential lessons learned for 2010 can be drawn from the Bureau’s experience during address canvassing. For example, we found that the staffing authorization and guidance provided to some local census managers were unclear and did not specify that there was already a cushion in the hiring plans for local census offices to account for potential turnover. Also, basing the number of people invited to initial training on factors likely to affect worker hiring and retention, such as the local unemployment rate, could help the Bureau better manage costs. According to Bureau officials, the Bureau is reviewing the results from address canvassing to determine whether it needs to revisit the staffing strategy for nonresponse follow-up and has already made some changes. For example, in recruiting candidates, when a local census office reaches 90 percent of its qualified applicant goal, it is to stop blanket recruiting and instead focus its efforts on areas that need more help, such as tribal lands.
However, in hiring candidates, the officials pointed out that they are cautious not to underestimate resource needs for nonresponse follow-up based on address canvassing results because nonresponse follow-up poses different operational challenges than address canvassing did. For example, for nonresponse follow-up, the Bureau needs to hire enumerators who can work in the evenings when people are more likely to be at home and who can effectively deal with reluctant respondents, whereas with address canvassing, there was less interaction with households and the operation could be completed during the day. Problems with accurately estimating the cost of address canvassing are indicative of long-standing weaknesses in the Bureau’s ability to develop credible and accurate cost estimates for the 2010 Census. Accurate cost estimates are essential to a successful census because they help ensure that the Bureau has adequate funds and that Congress, the administration, and the Bureau itself can have reliable information on which to base decisions. However, in our past work, we noted that the Bureau’s estimate lacked detailed documentation on data sources and significant assumptions, and was not comprehensive because it did not include all costs. Following best practices from our Cost Estimating and Assessment Guide, such as defining necessary resources and tasks, could have helped the Bureau recognize the need to update address canvassing workload and other operational assumptions, resulting in a more reliable cost estimate. To better screen its hundreds of thousands of temporary census workers, the Bureau plans to fingerprint its temporary workforce for the first time in the 2010 Census. In past censuses, temporary workers were subject to a name background check that was completed at the time of recruitment. The Federal Bureau of Investigation (FBI) will provide the results of a name background check when temporary workers are first recruited. At the end of the workers’ first day of training, Bureau employees who have received around 2 hours of fingerprinting instruction are to capture two sets of fingerprints on ink fingerprint cards from each temporary worker. The cards are then sent to the Bureau’s National Processing Center in Jeffersonville, Indiana, to be scanned and electronically submitted to the FBI. If the results show a criminal record that makes an employee unsuitable for employment, the Bureau is to either terminate the person immediately or place the individual in nonworking status until the matter is resolved. If the first set of prints is unclassifiable, the National Processing Center is to send the FBI the second set of prints. Fingerprinting during address canvassing was problematic. Of the over 162,000 employees hired for the operation, 22 percent—or approximately 35,700 workers—had unclassifiable prints that the FBI could not process. The FBI determined that the unclassifiable prints were generally the result of errors that occurred when the prints were first made. Factors affecting the quality of the prints included difficulty in first learning how to effectively capture the prints and the adequacy of the Bureau’s training. Further, the workspace and environment for taking fingerprints were unpredictable, and factors such as the height of the workspace on which the prints were taken could affect the legibility of the prints.
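The screening sequence just described can be summarized as a simple decision flow. The sketch below is an illustrative rendering of that flow as described in this testimony, not Bureau software or official procedures; the function name and statuses are hypothetical.

```python
# Illustrative decision flow for the temporary-worker screening process described above.
# Simplified sketch only; not the Bureau's actual system.
def screen_worker(name_check_clear: bool,
                  first_card_classifiable: bool,
                  second_card_classifiable: bool,
                  record_would_disqualify: bool) -> str:
    """Return the employment outcome for one temporary worker."""
    # A name background check is run when the worker is first recruited.
    if not name_check_clear:
        return "not hired (name check)"
    # Two ink fingerprint cards are captured at the end of the first day of training
    # and sent to the National Processing Center for scanning and FBI submission.
    if first_card_classifiable or second_card_classifiable:
        # FBI fingerprint results govern when at least one card can be read.
        if record_would_disqualify:
            return "terminated or placed in nonworking status"
        return "cleared to work"
    # If neither card can be read, the decision rests on the name check alone,
    # so a record that only fingerprints would reveal goes undetected.
    return "cleared to work (name check only)"

print(screen_worker(True, True, True, False))    # -> cleared to work
print(screen_worker(True, False, False, True))   # -> cleared to work (name check only):
                                                 #    the gap discussed in the next paragraph
```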
Consistent with FBI guidance, the Bureau relied on the results of the name background check for the nearly 36,000 employees with unclassifiable prints. Of the prints that could be processed, fingerprint results identified approximately 1,800 temporary workers (1.1 percent of total hires) with criminal records that the name check alone failed to identify. Of the 1,800 workers with criminal records, approximately 750 (42 percent) were terminated or were further reviewed because the Bureau determined their criminal records—which included crimes such as rape, manslaughter, and child abuse—disqualified them from census employment. Projecting these percentages to the 35,700 temporary employees with unclassifiable prints suggests that more than 200 temporary census employees might have had criminal records that would have made them ineligible for census employment. Importantly, this is a projection, and the number of individuals with criminal backgrounds that were hired for address canvassing, if any, is not known. Applying these same percentages to the approximately 600,000 people the Bureau plans to fingerprint for nonresponse follow-up, approximately 785 employees with unclassifiable prints could have disqualifying criminal records but still end up working for the Bureau unless the problems with fingerprinting are addressed. Aside from public safety concerns, there are cost issues as well. The FBI charged the Bureau $17.25 per person for each background check, whether or not the fingerprints were classifiable. The Bureau has taken steps to improve image quality for fingerprints captured in future operations by refining instruction manuals and providing remediation training on proper procedures. In addition, the Bureau is considering activating a feature on the National Processing Center’s scanners that can check the legibility of the image and thus prevent poor-quality prints from reaching the FBI. These are steps in the right direction. As a further contingency, it might also be important for the Bureau to develop a policy for refingerprinting employees in cases where both cards cannot be read. The scale of the destruction in those areas affected by Hurricanes Katrina, Rita, and Ike made address canvassing in parts of Mississippi, Louisiana, and Texas especially challenging (see fig. 2). Hurricane Katrina alone destroyed or made uninhabitable an estimated 300,000 homes. Recognizing the difficulties associated with address canvassing in these areas because of shifting and hidden populations and changes to the housing stock, the Bureau, partly in response to recommendations made in our June 2007 report, developed supplemental training materials for natural disaster areas to help listers identify addresses where people are, or may be, living when census questionnaires are distributed. For example, the materials noted the various situations listers might encounter, such as people living in trailers, homes marked for demolition, converted buses and recreational vehicles, and nonresidential space such as storage areas above restaurants. The training material also described the clues that could alert listers to the presence of nontraditional places where people are living and provided a script they should follow when interviewing residents on the possible presence of hidden housing units. Additional steps taken by the city of New Orleans also helped the Bureau overcome the challenge of canvassing neighborhoods devastated by Hurricane Katrina.
As depicted in figure 3 below, city officials replaced the street signs even in abandoned neighborhoods. This assisted listers in locating the blocks they were assigned to canvass and expedited the canvassing process in these deserted blocks. To further ensure a quality count in the hurricane-affected areas, the Bureau plans to hand-deliver an estimated 1.2 million questionnaires (and simultaneously update the address list) to housing units in much of southeast Louisiana and south Mississippi that appear inhabitable, even if they do not appear on the address list updated by listers during address canvassing. Finally, the Bureau stated that it must count people where they are living on Census Day and emphasized that if a housing unit gets rebuilt and people move back before Census Day, then that is where those people will be counted. However, if they are living someplace else, then they will be counted where they are living on Census Day. To help ensure group quarters are accurately included in the census, the Bureau is conducting an operation called Group Quarters Validation, an effort that is to run during September and October 2009 and has a workload of around 2 million addresses in the United States and Puerto Rico. During this operation, census workers are to visit each group quarter and interview its manager or administrator using a short questionnaire. The goal is to determine the status of the address as a group quarter, housing unit, transitory location, nonresidential, vacant, or delete. If the dwelling is in fact a group quarter, the census worker must then determine what category it fits under (e.g., boarding school, correctional facility, health care facility, military quarters, or residence hall or dormitory) and confirm its correct geographic location. The actual enumeration of group quarters is scheduled to begin April 1, 2010. According to the 2005-2007 American Community Survey 3-year estimates, more than 8.1 million people, or approximately 2.7 percent of the population, live in group quarter facilities. Group quarters with the largest populations include college and university housing (2.3 million), adult correctional facilities (2.1 million), and nursing facilities (1.8 million). The Bureau drew from a number of sources to build its list of group quarters addresses, including data from the 2000 Census, LUCA submissions, Internet-based research, and group quarters located during address canvassing. During the 2000 Census, the Bureau did not always accurately enumerate group quarters. For example, in our prior work, we found that the population count of Morehead, Kentucky, increased by more than 1,600 when it was later found that a large number of students from Morehead State University’s dormitories had been erroneously excluded from the city’s population because the Bureau incorrectly identified the dormitories as being outside city limits and in an unincorporated area of Rowan County. Similarly, North Carolina’s population count was reduced by 2,828 people, largely because the Bureau had to delete duplicate data on almost 2,700 students in 26 dormitories at the University of North Carolina at Chapel Hill. Precision is critical because, in some cases, small differences in population totals could potentially impact apportionment and/or redistricting decisions. The Bureau developed and tested new group quarters procedures in 2004 and 2006 that were designed to address the difficulties the Bureau had in trying to identify and count this population during the 2000 Census.
For example, the Bureau integrated its housing unit and group quarters address lists in an effort to reduce the potential for duplicate counting, because group quarters would sometimes appear on both address lists. Moreover, the Bureau has refined its definition of the various types of group quarters to make it easier to accurately categorize them. The operation began on September 28, as planned, in all 151 early opening local census offices and was 95 percent complete as of October 16, 2009. We have begun observations and will report our findings at a later date. With the cost of enumerating each housing unit continuing to grow, it will be important for the Bureau to determine which of its multiple MAF-building operations provide the best return on investment in terms of contributing to accuracy and coverage. According to the Bureau, it is planning to launch over 70 evaluations and assessments of critical 2010 Census operations and processes, many of which are focused on improving the quality of the MAF. For example, the Bureau plans to study options for targeted address canvassing as an alternative to canvassing every block in the country. The Bureau considered two major criteria for determining which studies to include in its evaluation program—the possibility of significant cost savings in 2020 and/or significant quality gains in 2020. As the Bureau makes plans for the 2020 Census, these and other studies could prove useful in helping the Bureau streamline and consolidate operations, with an eye toward controlling costs and improving accuracy. Automation and IT systems will play a critical role in the ability of MAF/TIGER to extract address lists and maps and to provide other geographic support services. In our prior work, however, we have called on the Bureau to strengthen its testing of the MAF/TIGER system. In March 2009, for example, we reported and testified that while the MAF/TIGER program had partially completed testing activities, test plans and schedules were incomplete and the program’s ability to track progress was unclear. Specifically, while the Bureau had partially completed testing for certain MAF/TIGER products (e.g., database extracts) related to address canvassing, subsequent test plans and schedules did not cover all of the remaining products needed to support the 2010 Census. Further, Bureau officials stated that although they were estimating the number of products needed, the exact number would not be known until the requirements for all of the 2010 Census operations were determined. Thus, without knowing the total number of products and when the products would be needed, the Bureau risked not being able to effectively measure the progress of MAF/TIGER testing activities. This in turn increased the risk that there may not be sufficient time and resources to adequately test the system and that the system may not perform as intended. At that time we recommended that the MAF/TIGER program establish the number of products required and establish testing plans and schedules for 2010 operations. In response to our recommendations, the Bureau has taken several steps to improve its MAF/TIGER testing activities, but substantial work remains to be completed. For example, the MAF/TIGER program has established the number of products and when the products are needed for key operations.
Furthermore, the program finalized five of eight test plans for 2010 operations; of these, testing activities for one plan (address canvassing) have been completed, three are under way, and one has not yet started. Lastly, the program’s test metrics for MAF/TIGER have recently been revised; however, only two of five finalized test plans include detailed metrics. While these activities demonstrate progress made in testing the MAF/TIGER system, the lack of finalized test plans and metrics still presents a risk that there may not be sufficient time and resources to adequately test the system and that the system may not perform as intended. Given the importance of MAF/TIGER to establishing where to count U.S. residents, it is critical that the Bureau ensure this system is thoroughly tested. Bureau officials have repeatedly stated that the limited amount of time remaining will make completing all testing activities challenging. The Bureau recognizes the critical importance of an accurate address list and maps, and continues to put forth tremendous effort to help ensure MAF/TIGER is complete and accurate. That said, the nation’s housing inventory is large, complex, and diverse, with people residing in a range of different circumstances, both conventional and unconventional. The operations we included in this review generally have proceeded, or are proceeding, as planned. Nevertheless, accurately locating each and every dwelling in the nation is an inherently challenging endeavor, and the overall quality of the Bureau’s address list will not be known until the Bureau completes various assessments later in the census. Moreover, while the Bureau has improved its management of MAF/TIGER IT systems, we continue to be concerned about the lack of finalized test plans, incomplete metrics to gauge progress, and an aggressive testing and implementation schedule going forward. Given the importance of MAF/TIGER to an accurate census, it is critical that the Bureau ensure this system is thoroughly tested. On October 15, 2009, we provided the Bureau with a statement of facts for our ongoing audit work pertaining to this testimony, and on October 16, 2009, the Bureau forwarded written comments. The Bureau made some suggestions where additional context or clarification was needed and, where appropriate, we made those changes. Mr. Chairman and members of this Subcommittee, this concludes my statement. I would be happy to respond to any questions that you might have at this time. If you have any questions on matters discussed in this statement, please contact Robert N. Goldenkoff at (202) 512-2757 or by e-mail at goldenkoffr@gao.gov. Other key contributors to this testimony include Assistant Director Signora May, Peter Beck, Steven Berke, Virginia Chanley, Benjamin Crawford, Jeffrey DeMarco, Dewi Djunaidy, Vijay D’Souza, Elizabeth Fan, Amy Higgins, Richard Hung, Kirsten Lauber, Andrea Levine, Naomi Mosser, Catharine Myrick, Lisa Pearson, David Reed, Jessica Thomsen, Jonathan Ticehurst, Kate Wulff, and Timothy Wexler. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The decennial census is a constitutionally mandated activity that produces data used to apportion congressional seats, redraw congressional districts, and help allocate billions of dollars in federal assistance. A complete and accurate master address file (MAF) and precise maps--the U.S. Census Bureau's (Bureau) mapping system is called Topologically Integrated Geographic Encoding and Referencing (TIGER)--are the building blocks of a successful census. If the Bureau's address list and maps are inaccurate, people can be missed, counted more than once, or included in the wrong location. This testimony discusses the Bureau's readiness for the 2010 Census and covers: (1) the Bureau's progress in building an accurate address list; and (2) an update of the Bureau's information technology (IT) system used to extract information from its MAF/TIGER database. Our review included observations at 20 early opening local census offices in hard-to-count areas. The testimony is based on previously issued and ongoing work. The Bureau has taken, and continues to take, measures to build an accurate MAF and to update its maps. From an operational perspective, the Local Update of Census Addresses (LUCA) and address canvassing generally proceeded as planned, and GAO did not observe any significant flaws or operational setbacks. Group quarters validation got under way in late September as planned. A group quarters is a place where people live or stay that is normally owned or managed by an entity or organization providing housing and/or services for the residents (such as a boarding school, correctional facility, health care facility, military quarters, residence hall, or dormitory). LUCA made use of local knowledge to enhance MAF accuracy. Between November 2007 and March 2008, over 8,000 state, local, and tribal governments participated in the program. However, LUCA submissions generated a relatively small percentage of additions to the MAF. For example, of approximately 36 million possible additions to the MAF that localities submitted, 2.4 million (7 percent) were confirmed as new additions to the MAF. The other submissions were already in the MAF, duplicates, nonexistent, or nonresidential. Address canvassing (an operation where temporary workers go door to door to verify and update address data) finished ahead of schedule, but was over budget. Based on initial Bureau data, the preliminary figure on the actual cost of address canvassing is $88 million higher than the original estimate of $356 million, an overrun of 25 percent. The testing and improvements the Bureau made to the reliability of the handheld computers prior to the start of address canvassing played a key role in the pace of the operation, but other factors were important as well, including the prompt resolution of technical problems and lower-than-expected employee turnover. The Bureau's address list at the start of address canvassing consisted of 141.8 million housing units. Listers added around 17 million addresses and marked about 21 million for deletion. All told, listers identified about 4.5 million duplicate addresses, 1.2 million nonresidential addresses, and about 690,000 addresses that were uninhabitable structures. The overall quality of the address file will not be known until later in the census when the Bureau completes various assessments. While the Bureau has made some improvements to its management of MAF/TIGER
IT, such as finalizing five of eight test plans, GAO continues to be concerned about the lack of finalized test plans, incomplete metrics to gauge progress, and an aggressive testing and implementation schedule going forward. Given the importance of MAF/TIGER to an accurate census, it is critical that the Bureau ensure this system is thoroughly tested.
Since 1980, the Bureau has used statistical methods to generate detailed estimates of census undercounts and overcounts, including those of particular ethnic, racial, and other groups. To carry out the 2000 Census’s Accuracy and Coverage Evaluation program (A.C.E.), the Bureau conducted a separate and independent sample survey that, when matched to the census data, was to enable the Bureau to use statistical estimates of net coverage errors to adjust final census tabulations according to the measured undercounts, if necessary. The Bureau obligated about $207 million to its coverage evaluation program from fiscal years 1996 through 2001, which was about 3 percent of the $6.5 billion total estimated cost of the 2000 Census. While the A.C.E. sample survey of people was conducted several weeks after Census Day, April 1, the “as of” date on which the total population is to be counted, many of its processes were the same as those of the 2000 Census. For the census, the Bureau tried to count everybody in the nation, regardless of their dwelling, as well as certain kinds of dwellings (including single-family homes, apartments, and mobile homes), along with demographic information on the inhabitants. For A.C.E., the Bureau surveyed about 314,000 housing units in a representative sample of “clusters”—geographic areas each with about 30 housing units. The sample comprised roughly 12,000 of the approximately 3 million “clusters” nationwide. As illustrated in figure 1, the Bureau used two similar processes (one for the decennial census and one for A.C.E. sample areas) to develop address lists, collect response data, and tabulate and disseminate data. For the census, the Bureau mailed out forms for mail-back to most of the housing units in the country; hand-delivered mail-back forms to most of the rest of the country; and then carried out a number of follow-up operations designed to count nonrespondents and improve data quality. A.C.E. collected response data through interviewing from April 24 through September 11, 2000. After the census and A.C.E. data collection operations were completed, the Bureau attempted to match each person counted on the A.C.E. list to the list of persons counted by the 2000 Census in the A.C.E. sample areas to determine exactly which persons had been missed or counted more than once by either A.C.E. or the census. The results of the matching process, along with data on the racial/ethnic and other characteristics of persons compared, were to provide the basis for A.C.E. to estimate the extent of coverage error in the census as a whole and for population subgroups and enable the Bureau to adjust the final decennial census tabulations accordingly. The matching process needed to be as precise and complete as possible, since A.C.E. collected data on only a sample of the nation’s population, and small percentages of matching errors could significantly affect the estimates of under- and overcounts generalized to the entire nation. Since the 2000 Census, we have issued four other reports on A.C.E., addressing its cost and implementation as part of our ongoing series on the results of the 2000 Census, as well as a report on the lessons learned for planning a more cost-effective census in 2010. (See the Related GAO Products section at the end of this report for the assessments issued to date.) These reports concluded, among other things, that while (1) the address list the Bureau used for the A.C.E.
program appeared much more accurate than the preliminary lists developed for the 2000 Census and (2) quality assurance procedures were used in the matching process, certain implementation problems had the potential to affect subsequent matching results and thus estimates of total census coverage error. In the end, the Bureau decided not to use A.C.E.’s matching process results to adjust the 2000 Census. In March 2001, a committee of senior career Bureau officials recommended against using A.C.E. estimates of census coverage error to adjust final census tabulations for purposes of redistricting Congress. In October 2001, the committee also recommended against adjusting census data used for allocating federal funds and other purposes, largely because Bureau research indicated that A.C.E. did not account for at least 3 million erroneously counted persons (mostly duplicates) in the census, raising questions about the reliability of coverage error estimates. In March 2003, after considerable additional research, the Bureau published revised coverage error estimates and again decided not to adjust official census data, this time for the purpose of estimating the population between the 2000 and 2010 censuses. In light of its 2000 experience, the Bureau officially announced in January 2004 that while it plans to fully evaluate the accuracy of the 2010 census, it will not develop plans for using these coverage error estimates to adjust the 2010 Decennial Census. Bureau officials have told us that there is insufficient time to carry out the necessary evaluations of the coverage estimates between census field data collection and the Bureau’s legally mandated deadline (within 12 months of Census Day) for releasing redistricting data to the states. Furthermore, the Bureau does not believe adjustment is possible. In responding to an earlier GAO report recommending that the Bureau “determine the feasibility” of adjusting the 2010 Census, the Bureau wrote that the 2000 Census and A.C.E. was “a definitive test of this approach” which “provided more than ample evidence that this goal cannot be achieved.” However, in March, the National Academy of Sciences (NAS) published a report that recommended that the Bureau and the administration request and Congress provide funding for an improved coverage evaluation program that could be used as a basis for adjusting the census, if warranted. The Academy agrees with the Bureau that 12 months is insufficient time for evaluation and possible adjustment; in the same publication, NAS recommended Congress consider extending the statutory deadline of 12 months for providing data for redistricting purposes, a suggestion which, if appropriate, could make adjustment possible. To identify the factors that may have contributed to A.C.E. missing coverage errors in the census, we reviewed evaluations of A.C.E. and the Bureau’s subsequent revisions to its estimation methodology, as well as changes made to the design from its 1990 attempts to estimate coverage. We interviewed Bureau officials responsible for A.C.E. decision making to obtain further context and clarification. We did not attempt to identify all factors contributing to the success or failure of A.C.E. in estimating coverage error. Since our focus was on the process and decisions that led to the results rather than on determining the underlying numbers themselves, we did not audit the Bureau’s research, the underlying data, or its conclusions.
We relied on the Bureau’s own quality assurance processes to ensure the validity and accuracy of its technical reporting, and thus we did not independently test or verify individual Bureau evaluations of their methodologies. To identify the extent of the census errors not accounted for by A.C.E., we reviewed the descriptive coverage error estimates and the limitations and context of these data as described in the reports published by the Bureau in March 2001, October 2001, and March 2003. On August 9, 2004, we requested comments on the draft of this report from the Secretary of Commerce. On September 10, 2004, the Under Secretary for Economic Affairs, Department of Commerce, forwarded written comments from the department (see app. I), which we address in the “Agency Comments and Our Evaluation” section at the end of this report. The following Bureau decisions concerning the design of the census and the A.C.E. program created difficulties and blind spots for the coverage evaluation, possibly preventing A.C.E. from reliably measuring coverage error: (1) using residence rules that were unable to capture the complexity of American society, (2) excluding the group quarters population from the A.C.E. sample survey, (3) making various decisions that led to an increase in the number of “imputed” records in the census, (4) removing 2.4 million suspected duplicate persons from the census but not the A.C.E. sample, and (5) reducing the sample area wherein A.C.E. searched for duplicates during matching. However, the Bureau has not accounted for how these design decisions have affected coverage error estimates, which has prevented it from pinpointing what went wrong with A.C.E., and this in turn could hamper its efforts to craft a more successful coverage measurement program for the next national head count. Bureau officials attribute A.C.E.’s inaccuracy primarily to the fact that it used residence rules that do not fully capture the complexity of American society. According to senior Bureau officials, increasingly complicated social factors, such as extended families and population mobility, presented challenges for A.C.E., making it difficult to determine exactly where certain individuals should have been counted. Specifically, in developing A.C.E. methodology, Bureau officials assumed that each person in its sample could be definitively recorded at one known residence that the Bureau could determine via a set of rules. However, individuals’ residency situations are often complicated: Bureau officials cite the example of children in custody disputes whose separated parents both may have strong incentives to claim the children as members of their household, despite census residence rules that attempt to resolve which parent should report the child(ren). In such situations, in which the residence rules are not understood, are not followed, or do not otherwise provide resolution, the Bureau has difficulty determining the correct location to count the children. Bureau officials cite similar difficulties counting college students living away from home, as well as people who live at multiple locations throughout the year, such as seasonal workers or retirees. The A.C.E. design also assumed that follow-up interviews would clarify and improve residence data for people for whom vague, incomplete, or ambiguous data were provided and whose cases remained unresolved. However, the Bureau found it could not always rely on individuals to provide more accurate or complete information.
In fact, in our earlier reporting on A.C.E. matching, we described several situations wherein conflicting information had been provided to the Bureau during follow-up interviews with individuals, and Bureau staff had to decide which information to use. More recently, the Associate Director for Decennial Census told us that returning to an A.C.E. household to try to resolve conflicting data sometimes yielded new or more information but not necessarily better information or information that would resolve the conflict. The Bureau plans to review and revise its census residence rules for 2010, which may clarify some of this confusion. While the Bureau emphasizes residence rules as the primary cause of A.C.E. failure, our research indicates some of the Bureau’s other design decisions created blind spots that also undermined the program’s ability to accurately estimate census error. For example, the Bureau decided to leave people living in group quarters—such as dormitories and nursing homes—out of the A.C.E. sample survey, which effectively meant they were left out of the scope of A.C.E. coverage evaluation (see fig. 2). As a result, the matching results could not provide coverage error information for the national group quarters population of 7.8 million. In addition, the Bureau did not design A.C.E. matching to search for duplicate records within the subset of this population counted by the census, though later Bureau research estimated that if it had, it would have measured over 600,000 additional duplicates there. In response to our draft report, the Department wrote that coverage evaluation was designed to measure some of these duplicates because, during its follow-up interviews at households during A.C.E. matching, the Bureau included questions intended to identify college students living away at college. While coverage evaluation in 1990 included some group quarters, such as college dormitories and nursing homes, within its sample, the Bureau reported that the high mobility of these people made it more difficult to count them; thus, the 1990 estimates of coverage for this population were weak. The Bureau decided not to gather these data during 2000 A.C.E. data collection based in part on the difficulty of collecting and matching this information in the past, and in part as a result of a separate design decision to change the way it treated information for people who moved during the period between the census and the coverage evaluation interviews. By excluding group quarters from the coverage evaluation sample, the Bureau collected less information on a population that included some duplication, information that might have enabled it to better detect and account for such duplication. In addition, by developing coverage error estimates that were not applicable to the group quarters population, the Bureau made the task of assessing the quality of the census as a whole more difficult. Figure 2 also shows that another blind spot emerged as the Bureau increased the number of “imputed” records in the final census, records that could not be included in the A.C.E. sample survey. The Bureau estimates a certain number of individuals—called imputations—that it has reason to believe exist, despite the fact that it has no personal information on them, and adds records to the census (along with certain characteristics such as age and race/ethnicity) to account for them.
For example, when the Bureau believes a household is occupied but does not have any information on the number of people living there, it will impute the number of people as well as their personal characteristics. The number of imputed records increased from about 2 million in 1990 to about 5.8 million in 2000. Changes in census and coverage evaluation design from 1990 likely contributed to this increase. Specifically, the Bureau reduced the number of persons who could have their personal information recorded on the standard census form in 2000. In addition, the Bureau changed the way coverage evaluation accounted for people who moved between Census Day and the day of the coverage evaluation interview. These design changes resulted in less complete information on people and likely contributed to an increase in imputations. (These and other changes are explained in more detail in app. II.) Because imputed records are simply added to the census totals and do not have names attached to them, it was impossible for A.C.E. to either count imputed individuals using the A.C.E. sample survey or incorporate them into the matching process. In any case, since the true number and characteristics of these persons are unknown, matching these records via A.C.E. would not have provided meaningful information on coverage error. A.C.E. was designed to measure the net census coverage error, in essence the net effect of people counted more than once minus people missed, and included measurement of the effects of imputation on coverage error. The Bureau generalizes its estimates of coverage error to cover imputations and also maintains that its imputation methods do not introduce any statistical bias in population counts. But the formulas used by the Bureau to calculate its estimates of coverage error account for imputations by subtracting them from the census count being evaluated, not by measuring the error in them. Moreover, the Bureau did not attempt to determine the accuracy of the individual imputations; that is, although the Bureau imputed persons it had reason to believe existed, it does not know whether it over- or underestimated such persons. As more imputations are included in the census total, the generalization of coverage error estimates to that total population becomes less reliable. Similarly, the Bureau created an additional coverage error blind spot by including in the census 2.4 million records that it previously suspected were duplicates and thus were not included in the coverage evaluation. Prior to A.C.E. matching, the Bureau removed about 6 million persons from the census population, identifying them as likely duplicates. Then, after completing additional research on these possible duplicates, the Bureau decided to reinstate the records for 2.4 million of these persons it no longer suspected were duplicates. However, it did so after A.C.E. had completed matching and evaluating the census records from which the 2.4 million persons had been removed and for which coverage error estimation had begun (see fig. 3). The Bureau documented in a memorandum that the exclusion of the records from A.C.E. processing was not statistically expected to affect A.C.E. results. However, later Bureau research concluded that over 1 million of these 2.4 million records were likely duplicates, none of which could have been detected by A.C.E. While the Bureau maintains that the reinstatement of the over 1 million likely duplicates did not affect the A.C.E.
estimate in a statistically significant way, this suggests that the resulting A.C.E.-based estimate of national population itself is blind to the presence in the census of the over 1 million duplicates the Bureau reintroduced. For 2010, Bureau officials have chartered a planning group responsible for, among other things, proposing improvements to reduce duplication in the census, which may address some of the problem. In addition to excluding populations from the scope of evaluation, the Bureau further curtailed its ability to measure coverage error by reducing A.C.E.’s search area to only one geographic ring around selected A.C.E. sample areas during the matching process. For the 1990 Census, the Bureau’s coverage evaluation program always searched at least one surrounding ring and an even larger ring in rural areas. However, in 1999, before a panel of the National Academy of Sciences, Bureau officials announced and defended the decision not to expand the search area except in targeted instances, saying that Bureau research indicated that the additional matches found in 1990 from the expanded search areas did not justify the additional effort. In its comments on our draft report, the Department writes that more important than the size of the search area is maintaining “balance”—i.e., search areas must be used consistently both to identify people who have been missed and to identify people who have been counted in error (including duplicates). The Department also justified the decision to reduce the search area in 2000 from 1990 in part by stating, “in an expected value sense, the reduced search area would have affected” the extra missed people and the extra miscounted people equally, or been balanced. However, later research discovered large numbers of the missed duplicates in the actual census by matching A.C.E. persons to census persons nationwide—far beyond the areas searched during A.C.E. matching. A 2001 Bureau report presenting the results of computer rematching of the A.C.E. sample concluded, “Our analysis found an additional 1.2 million duplicate enumerations in units that were out-of-scope for the A.C.E. but would have been in-scope for [1990 coverage evaluation].” In other words, if the Bureau had continued its 1990 practice of searching in housing units in larger geographic areas in 2000, the A.C.E. process might have identified more duplicates and yielded better results. The Bureau research cited above appears to question the decision to reduce the search areas. In fact, after the 2000 Census was completed, again before the National Academy of Sciences, a Bureau official suggested that any coverage evaluation methods for 2010 should conduct a more thorough search, perhaps expanding the search area to two or more geographic rings everywhere. This review has identified only some of the decisions that could have created problems in A.C.E. estimates. Because the Bureau has not attempted to account for how all of its design decisions relating to A.C.E. and the census affected the outcome of the program, the full range of reasons that A.C.E. estimates were not reliable remains obscure. Bureau research has documented, and this report describes, the magnitude of the direct effects of most of these design decisions in terms of the size of the census population affected. The Bureau’s final reporting on the revised A.C.E. estimates mentions many design changes, but it does not discuss them together or comprehensively, nor does it explain how the changes might have affected the estimates of coverage error.
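For readers unfamiliar with how a coverage survey translates matching results into an estimate of net error, the following is a simplified, illustrative formulation of the dual-system (capture-recapture) logic underlying the preceding discussion. The notation is ours, not the Bureau's; the Bureau's actual A.C.E. estimator is more elaborate, being computed within post-strata and correcting for erroneous enumerations, which this sketch omits.

```latex
% Simplified dual-system (capture-recapture) sketch; notation is illustrative only.
\[
  \hat{N} = (C - I)\,\frac{P}{M},
  \qquad
  \text{net coverage error rate} = \frac{\hat{N} - C}{\hat{N}},
\]
% where C is the census count being evaluated, I is the number of imputed (and otherwise
% unmatchable) records subtracted from that count, P is the number of persons found by the
% independent coverage survey in the sample areas, and M is the number of those persons
% matched to census records. A positive rate indicates a net undercount; a negative rate,
% a net overcount. Records excluded from the census count or the survey sample (such as
% imputations and reinstated duplicates) are invisible to this calculation, which is the
% "blind spot" problem described above.
```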
Without clear documentation of how significant changes in the design of the census and A.C.E. might have affected the measurements of census accuracy, it is not apparent how problems that have arisen as a result of the Bureau’s own decisions can be distinguished from problems that are less under the Bureau’s control, i.e., difficulties inherent to conducting coverage evaluation. Thus the Bureau’s plans to measure the coverage error for the 2010 Census are not based on a full understanding of the relationship between the separate decisions it makes about how to conduct A.C.E. and the census, on the one hand, and the resulting performance of its coverage measurement program, on the other. This in turn could hamper the Bureau’s efforts to craft a more successful coverage measurement program for the next national head count. While the Bureau produced a vast body of research regarding the census and A.C.E., including multiple reassessments and revisions of earlier work, the revised estimates are not reliable. The initial A.C.E. estimates of coverage error suggested that while historical patterns of differences in undercounts between demographic groups persisted, the Bureau had succeeded in 2000 in reducing the population undercounts of most minorities, and the subsequent revised estimates showed even greater success in reducing population undercounts in the census. However, the large number of limitations described in the Bureau’s documentation of the methodology used to generate the revised estimates of coverage error suggests that these estimates are less reliable than reported and may not describe the true rate of coverage error. The Bureau, however, has not made the full impact of these methodological limitations on the data clear. Moreover, the final revised estimates of coverage error for the count of housing units and the count of people, which the Bureau expected to be similar if the estimates were reliable, differed, further raising questions about the reliability of the revised estimates. The Bureau undertook an extensive review of A.C.E.’s results over a 3-year period. In doing so, the Bureau revised its estimates of A.C.E. coverage error twice—first in October 2001 and again in March 2003. These revisions suggest that the original A.C.E. estimates were unreliable. Figure 4 illustrates how each of the revised A.C.E. estimates of coverage error reduced the undercount for most of the three major race/origin groups from the initial A.C.E. estimates. Note that the final revised estimate indicates that the total population was actually overcounted by one-half of 1 percent. The differences in the revised estimates presumably provide a measure of the error in the original A.C.E. estimates. (The estimated net population undercounts—and their standard errors—for these groups are provided in app. III.) However, the revised estimates of coverage error may not be reliable enough themselves to provide an adequate basis for such a comparison to measure error in the original A.C.E. estimates. First, the number of Bureau-documented limitations with respect to the methodologies used in generating A.C.E.’s revised estimates raises questions about the accuracy of the revised estimates. Within voluminous technical documentation of its process, the Bureau identified several methodological decisions that, had they been made differently, might have led to appreciably different results.
Thus the methods the Bureau chose may have affected the estimates of census coverage error themselves and/or the measures of uncertainty associated with the estimates, limiting the general reliability of the revisions. The limitations in the methodologies for the revised estimates included the following:
- Errors in the demographic data used to revise estimates may have contributed to additional error in the estimates.
- Errors stemming from the Bureau’s choice of model to resolve uncertain match cases were accounted for in the initial March 2001 A.C.E. estimates but were not accounted for in the revised estimates in March 2003.
- Alternative possible adjustments for known inefficiencies in computer matching algorithms would directly affect revised estimates.
- The Bureau’s evaluations of the quality of clerical matching were used to revise the initial A.C.E. coverage estimates, leaving the Bureau with less reliable means to measure the uncertainty in the revised estimates.
For the earlier revision of coverage error estimates, the Bureau provided the range of impacts that could result from some different methodological decisions, enabling more informed judgments regarding the reliability of the data. For example, in support of its October 2001 decision not to adjust census data, the Bureau showed that different assumptions about how to treat coverage evaluation cases that the Bureau could not resolve could result in variations in the census count of about 6 million people. The Bureau had also previously reported the range of impacts on the estimates resulting from different assumptions and associated calculations to account for the inefficiency of computer matching. It found that different assumptions could result in estimates of census error differing by about 3.5 million people. However, with the final revision of the A.C.E. coverage error estimates, the Bureau did not clearly provide the ranges of impact resulting from different methodological decisions. While the Bureau did discuss major limitations and indicated their uncertain impact on the revised estimates of coverage error, the Bureau’s primary document for reporting the latest estimates of coverage error did not report the possible quantitative impacts of all these limitations—either separately or together—on the estimates. Thus readers of the reported estimates do not have the information needed to accurately judge the overall reliability of the estimates, namely, the extent of the possible ranges of the estimates had different methodological decisions been made. Sampling errors were reported alongside estimates of census error, but these do not adequately convey the extent of uncertainty associated with either the reported quantitative estimates themselves or the conclusions to be drawn from them. For example, the Bureau decided to make no adjustment to account for the limitation of computer matching efficiency when calculating its latest revision of estimates of coverage error, unlike the adjustment it made when calculating its earlier revised estimates. When justifying the adjustment made in its earlier revised estimates, the Bureau demonstrated that the choice of adjustment mattered to the calculated results. But the potential significance of such a different assumption for the calculated results was not reflected in the Bureau’s primary presentation of its estimates and their errors. 
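To see where such assumptions enter the calculation, it helps to recall the simplified, textbook form of the dual-system (capture-recapture) estimator that underlies this kind of coverage measurement. The notation below is ours and omits the post-stratification, weighting, and correct-enumeration adjustments the Bureau actually applies; it is a sketch of the general logic, not the Bureau's estimation formula.

```latex
% Simplified textbook form of dual-system estimation; the A.C.E. computation is
% post-stratified and weighted, so this is only a sketch of the general logic.
\[
\widehat{N} \;=\; \frac{N_{\mathrm{census}} \times N_{\mathrm{survey}}}{M},
\qquad
\text{net undercount rate} \;=\; \frac{\widehat{N} - C}{\widehat{N}} \times 100,
\]
% where, for a given set of sample areas or population group:
%   N_census  = valid census enumerations found there,
%   N_survey  = people found by the independent coverage survey,
%   M         = people matched to both sources, and
%   C         = the corresponding official census count.
```

Because unresolved cases and matching inefficiencies change the matched count M (and therefore the estimate of the true population), different assumptions about how to treat them can move the resulting undercount estimates by millions of people, which is why the ranges of impact discussed above matter to readers.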
The absence of clear documentation on the possible significant impacts of such assumptions could lead readers of the Bureau’s reporting to believe erroneously that all assumptions have been accounted for in the published statistics, or that the estimates of coverage error are more reliable than they are. According to Bureau reporting, when it examined the validity of the revised coverage error estimates, the Bureau expected to see, across demographic groups, similar patterns between the coverage error for the count of the population and the count of housing. That is, if a population was overcounted or undercounted, then the count of housing units for that population was expected to be overcounted or undercounted as well. The Bureau developed estimates of coverage error in the count of housing units from A.C.E. data. But the comparisons of non-Hispanic blacks and Hispanics to non-Hispanic whites in figure 5 show that the relative housing undercounts are the opposite of what the Bureau expected. For example, the estimated population undercount for non-Hispanic blacks is almost 3 percent greater than that of the majority group—non-Hispanic white or other—but the estimated housing unit undercount for non-Hispanic blacks is about 0.8 percent less than that of the majority group. In addition, while the Bureau estimated that the non-Hispanic white majority group had a net population overcount of over 1 percent, the Bureau estimated the majority group as having its housing units undercounted by about one-third of a percent. (The estimated net housing unit undercounts—and their standard errors—for these groups are provided in app. III.) Bureau officials told us that the problems A.C.E. and the census experienced with identifying coverage error in the population do not seem likely to have affected housing counts. However, when estimating coverage error for housing units for specific populations (e.g., by gender or race/ethnicity), errors in the population count can affect the reliability of housing tabulations. This is because when the Bureau tabulates housing data by characteristics like gender or race, it uses the personal characteristics of the person recorded as the head of the household living in each housing unit. So if there are problems with the Bureau’s count of population for demographic groups, for example by gender or race, they will affect the count of housing units for demographic groups. While the unexpected patterns in population and housing unit coverage error may be reconcilable, Bureau officials acknowledge that problems with the estimates of population coverage error may also adversely affect the reliability of other measures of housing count accuracy they rely upon, such as vacancy rates. Bureau officials have indicated the need to review this carefully for 2010. While the multiple reassessments and revisions of earlier work did not result in reliable estimates, these efforts were not without value, according to the Bureau. Bureau officials stated that the revision process and results helped the Bureau focus for 2010 on detecting duplicates, revising residence rules, and improving the quality of enumeration data collected from sources outside the household, such as neighbors, as well as providing invaluable insights for its program of updating census population estimates throughout the decade. The volume of the Bureau’s research and its accessibility over the Internet may have made this the most transparent coverage evaluation exercise of a Decennial Census. 
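The householder-based tabulation described above can be made concrete with a small sketch. The records, field names, and code below are invented for illustration and are not the Bureau's tabulation system; the point is simply that an error in a single person record (here, the householder's recorded race) shifts the housing tabulation for a group even though every housing unit is still counted.

```python
# Purely illustrative: invented records and fields, not the Bureau's tabulation system.
# Housing units are tabulated by the characteristics of each unit's householder, so an
# error in a person record changes the housing tabulation by group even when the unit
# itself is counted correctly.

from collections import Counter

housing_units = [
    {"unit_id": 1, "householder_race": "group A"},
    {"unit_id": 2, "householder_race": "group B"},
    {"unit_id": 3, "householder_race": "group B"},
]

def housing_counts_by_race(units):
    """Tally housing units by the race recorded for each unit's householder."""
    return Counter(u["householder_race"] for u in units)

print(housing_counts_by_race(housing_units))   # Counter({'group B': 2, 'group A': 1})

# Introduce a person-record error: unit 3's householder is misrecorded as group A.
housing_units[2]["householder_race"] = "group A"
print(housing_counts_by_race(housing_units))   # Counter({'group A': 2, 'group B': 1})
# The tabulation by group changes, although all three housing units are still counted.
```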
However, as the Bureau has closed the book on Census 2000 and turned toward 2010, the reliability of the Bureau’s coverage estimates remains unknown. The Bureau made extensive efforts to evaluate the census and its coverage error estimates resulting from A.C.E., but these efforts have not been sufficient to provide reliable revised estimates of coverage error. So while much is known about the operational performance of the 2000 Census, one of the key performance measures for the 2000 Census remains unknown. Moreover, neither Congress nor the public knows why the coverage evaluation program did not work as intended, because the Bureau has not provided a clear accounting of how census and A.C.E. design decisions and/or limitations in the A.C.E. revision methodology discussed in this report accounted for the apparent weaknesses—or strengths—of A.C.E. Without such an accounting, the causes of problems and whether they can be addressed will remain obscure. And as the Bureau makes plans for coverage evaluation for the 2010 Census, whether that program approximates A.C.E.’s design or not, the Bureau will be missing valuable data that could help officials make better decisions about how to improve coverage evaluation. Finally, this lack of information calls into question the Bureau’s claim (made in response to a prior GAO recommendation that the Bureau determine the feasibility of adjustment) that it has already established that using coverage evaluation for adjustment purposes is not feasible. Without clearly demonstrating what went wrong with its most recent coverage evaluation and why, the Bureau has not shown that coverage evaluation for the purpose of adjustment is not feasible. In fact, this report mentions two census improvements—to residence rules and to efforts to identify and reduce duplicates—that the Bureau is already considering that could make A.C.E. estimates more reliable, and perhaps even make adjustment feasible. Furthermore, although the Bureau reports that its experience with revising A.C.E. estimates has provided lessons, it remains unclear how the Bureau will use its published coverage error estimates to make decisions leading to a more reliable measure of coverage error in 2010, or how the unreliable estimates can be of value to policymakers or the public. As the Bureau plans for its coverage evaluation of the next national head count in 2010, we recommend that the Secretary of Commerce direct that the Bureau take the following three actions to ensure that coverage evaluation results the Bureau disseminates are as useful as possible to Congress and other census stakeholders: To avoid creating any unnecessary blind spots in the 2010 coverage evaluation, the Bureau should take into account how any significant future design decisions relating to the census (for example, residence rules, efforts to detect and reduce duplicates, or other procedures) or A.C.E. (for example, scope of coverage and changes in search areas, if applicable), or their interactions, could affect the accuracy of the program. Furthermore, in the future, the Bureau should clearly report in its evaluation of A.C.E. how any significant changes in the design of the census and/or A.C.E. might have affected the accuracy of the coverage error estimates. In addition, GAO recommends that in the future the Bureau plan not only to identify but also to report, where feasible, the potential range of impact of any significant methodological limitation on published census coverage error estimates. 
When the impact on accuracy is not readily quantifiable, the Bureau should include clear statements disclosing how it could potentially affect how people interpret the accuracy of the census or A.C.E. The Under Secretary for Economic Affairs at the Department of Commerce provided us written comments from the Department on a draft of this report on September 10, 2004 (see appendix I). The Department concurred with our recommendations, but took exception to some of our analyses and conclusions and provided additional related context and technical information. In several places, we have revised the final report to reflect the additional information and provided further clarification on our analyses. The Department was concerned that our draft report implied that A.C.E. was inaccurate because it should have measured gross coverage error components, and that this was misleading because the Bureau designed A.C.E. to measure net coverage errors. While we have previously testified that the Bureau should measure gross error components, and the Department in its response states that this is now a Bureau goal for 2010, we clarified our report to reflect the fact that the Bureau designed A.C.E. to measure net coverage error. Further, although the Department agreed with our finding that the Bureau used residence rules that were unable to capture the complexity of American society, thus creating difficulty for coverage evaluation, the Department disagreed with our characterization of the role four other census and A.C.E. design decisions played in affecting coverage evaluation. Specifically, the Bureau does not believe that any of the following four design decisions contributed significantly to the inaccuracy of the A.C.E. results: 1. The Treatment of the Group Quarters Population—The Department commented that we correctly noted that group quarters residents who would have been within the scope of A.C.E. under the 1990 coverage evaluation design were excluded from the A.C.E. universe, and that a large number of these people were counted more than once in 2000. The Department maintains that the Bureau designed A.C.E. to measure such duplicates. We believe this is misleading. As the Department noted, during its follow-up at housing units the Bureau included questions intended to identify the possible duplication of college students living away at college, and we have now included this in our final report. But as we stated in our draft report, A.C.E. did not provide coverage error information for the national group quarters population. Moreover, during A.C.E. the Bureau did not search for duplicate people within the group quarters population counted by the census, as it did within housing units counted by the census. In fact, later Bureau research estimated that if it had done so, the Bureau would have identified over 600,000 additional duplicates there. As such, our finding that this may have contributed to the unreliability of coverage error estimates still stands. 2. The Treatment of Census Imputations—The Department stated that A.C.E. was designed to include the effects of imputations on its measurement of coverage error and that there was no basis for our draft report’s statement that as more imputations were included in the census, coverage error estimates became less reliable. 
While we agree that the Bureau’s estimates of coverage error accounted for the number of imputations, as we report, and as the Department’s response reiterated, no attempt was made to determine the accuracy of the imputations included in the census. Thus any errors in either the number or demographic characteristics of the population imputed by the Bureau were not known within the coverage error processes or estimation. As a result, in generalizing the coverage error estimates to the imputed segment of the population, the Bureau assumed that the imputed population had coverage error identical to the population for which coverage error was actually measured. Furthermore, the larger the imputed segment of the population became the more this assumption had to be relied upon. Since the real people underlying any imputations are not observed by the census, the assumption is, in its strictest sense, untestable, thus we maintain that increasing the number of imputations included in the census may have made generalizing the coverage error estimates to the total census population less reliable. 3. The Treatment of Duplicate Enumerations in the Reinstated Housing Units—The Department wrote that our draft report incorrectly characterized the effects of reinstating duplicates into the census. The Department indicated that A.C.E., having been designed to measure net coverage error, treated the over 1 million likely duplicates “exactly correctly” and that including them in the census had no effect on the mathematical estimates of coverage error produced by A.C.E. We reported that, according to Bureau research, introducing the additional duplicates into the census appeared to have no impact on the A.C.E. estimates. But we view this fact as evidence of a limitation, or blind spot, in the Bureau’s coverage evaluation. The fact that 2.4 million records, containing disproportionately over 1 million duplicate people could be added to the census without affecting the A.C.E. estimates demonstrates a practical limitation of those coverage error estimates. We maintain that the resultant measure of coverage error cannot be reliably generalized to the entire population count of which those 1 million duplicates are a part. 4. Size of the Search Area—The Department wrote that a search area like that used in 1990 would have done little to better measure the number of students and people with vacation homes who may have been duplicated in 2000. It described our conclusion regarding the reduction in search area from 1990 as not supported by the relative magnitudes of these situations. And finally, the Department offered additional support for the decision to reduce the search area by describing the reduced search area as balanced, or “in an expected value sense” [emphasis added] affecting the number of extra missed people and the extra miscounted people equally. In our final report we added a statement about the Department’s concern over the importance of balance in its use of search areas. But we disagree that our conclusion is unsupported, since in our draft report we explicitly cited Bureau research that found an additional 1.2 million duplicate enumerations in units that were out-of-scope for 2000 A.C.E. but that would have been in-scope for 1990’s coverage evaluation. In addition, the Department offered several other comments. 
Regarding our finding that the Bureau has not produced reliable revised estimates of coverage error for the 2000 Census, and, specifically, that the full impact of the Bureau’s methodological limitations on the revised estimates has not been made clear, the Department wrote that the Census Bureau feels that further evaluations would not be a wise use of resources. We concur, which is why our recommendations look forward to the Bureau’s preparation for 2010. The Department commented that it did not see how we could draw conclusions about the reliability of the Bureau’s coverage evaluation estimates if we did not audit the underlying research, data, or conclusions. We maintain that the objectives and scope of our review did not require such an audit. As we described, and at times cited, throughout our draft report, we used the results of the Bureau’s own assessment of the 2000 Census and its coverage evaluation. That information was sufficient to draw conclusions about the reliability of the A.C.E. estimates. As a result, there was no need to verify individual Bureau evaluations and methodologies. The Department expressed concern that our draft report implied that the unexpected differences in patterns of coverage error between the housing and the population count were irreconcilable. That was not our intent, and we have clarified that in the report. The Department expressed concern over the report’s characterization of the 1990 coverage error estimates for group quarters as weak in part due to the high mobility of this population. However, the 1990 group quarters estimates are described as “weak” in a Bureau memorandum proposing that group quarters be excluded from the 2000 coverage evaluation. The memorandum also explains how the mobility within the group quarters population contributes to the resulting estimates. We have not revised the characterization of the group quarters coverage error estimates or the causal link due to the mobility of that population, but we have revised our text to state more clearly that the 1990 estimates being discussed are those for group quarters. As agreed with your offices, unless you release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time we will send copies to other interested congressional committees, the Secretary of Commerce, and the Director of the U.S. Census Bureau. Copies will be made available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me on (202) 512-6806 or by e-mail at daltonp@gao.gov or Robert Goldenkoff, Assistant Director, at (202) 512-2757 or goldenkoffr@gao.gov. Key contributors to this report were Ty Mitchell, Amy Rosewarne, and Elan Rohde. The Bureau made various design decisions that resulted in an increase in the number of “imputations”—or people guessed to exist—included in the census population that could not be included within the A.C.E. sample survey. The Bureau believes certain numbers of people exist despite the fact that the census records no personal information on them; thus it projects, via computer-executed algorithms, numbers and characteristics of people and includes them in the census. Such records are simply added to the census totals, and do not have names attached to them. Thus it was impossible for A.C.E. to either count imputed individuals using the A.C.E. 
sample survey or incorporate them into the matching process. Since the true number and the characteristics of these persons are unknown, matching nameless records via A.C.E. would not have provided any meaningful information on coverage evaluation. The number of people the Bureau imputed grew rapidly, from about 2 million in 1990 to more than 5 million in 2000. One of the reasons for the large increase in imputations may be the Bureau’s decision to eliminate from its 2000 field follow-up the field edits—the last-minute follow-up operation to collect additional information from mail-back forms that had too little information on them to continue processing. While acknowledging that this decision may have increased imputations for 2000, a senior Bureau official justified the decision by describing the field edits in 1990 as providing at times a “clerical imputation” that introduced a subjective source of error, which computer-based imputation in 2000 lacked. The Bureau also reduced the number of household members for whom personal information could be provided on standard census forms, and this also contributed to the increase in imputations. Households reporting a household size greater than 6 in 2000—the maximum number of members for whom personal characteristics could be provided on the form—were to be contacted automatically by the Bureau to collect the additional information. Yet not all large households could be reached for the additional information, and the personal characteristics of the remaining household members needed to be imputed. Again, A.C.E. would have been unable to match people counted by its sample survey to imputations, so imputed people were excluded from A.C.E. calculations of coverage errors. An A.C.E. design choice by the Bureau that likely increased the amount of data imputed within the A.C.E. sample survey was how the Bureau decided to account for people who moved between Census Day and the day of the A.C.E. interview. Departing from how movers were handled in 1990, and partly to accommodate the initial design for the 2000 Census, which relied on sampling nonrespondents to the census, the Bureau in 2000 relied on the counts of the people moving into A.C.E. sample areas to estimate the number of matched people who had actually lived in the A.C.E. areas on Census Day but had moved out. This decision resulted in the Bureau having less complete information about the Census Day residents in A.C.E. sample areas who had moved out, and likely increased the number of imputations that were later required, making it more difficult to match these moving persons to the census. A Bureau official also cited this decision as partial justification for not including group quarters in A.C.E. search areas. The extent to which imputation affected the accuracy of the census is unknown. In an interim report on the 2000 Census, the National Academy of Sciences discussed the possibility that a subset of about 1.2 million of these imputations were duplicates. That report stated that, for example, “it is possible that some of these cases—perhaps a large proportion—were erroneous or duplicates,” and described another subset of about 2.3 million that could include duplicates. However, this Academy report did not include any data to suggest the extent of duplicates within these groups, and it may similarly have been possible that the number of persons in this group was underestimated. 
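The kind of characteristic imputation described above, filling in missing information for members of large households, can be illustrated with a simplified sketch of hot-deck-style imputation, a common survey technique in which a record with missing data borrows characteristics from a nearby donor record. The data, field names, and code below are invented for illustration; this is a sketch of the general idea, not the Bureau's actual imputation procedure.

```python
# Simplified illustration of hot-deck-style imputation, a common survey technique in
# which a record with missing information borrows characteristics from a nearby
# "donor" record. Toy data and logic; not the Bureau's actual procedure.

import random

random.seed(0)  # reproducible toy example

# Reported households on the same block; the last one returned a form with a
# household count but no person-level characteristics.
households = [
    {"block": 12, "size": 2, "ages": [34, 36]},
    {"block": 12, "size": 4, "ages": [41, 40, 11, 9]},
    {"block": 12, "size": 3, "ages": None},   # characteristics missing -> impute
]

def impute_ages(hh, candidates):
    """Fill missing ages by borrowing from a randomly chosen donor on the same block."""
    donors = [c for c in candidates
              if c["ages"] is not None and c["block"] == hh["block"]]
    donor = random.choice(donors)
    # Borrow the donor's ages, repeated or truncated to the reported household size.
    borrowed = (donor["ages"] * hh["size"])[: hh["size"]]
    return {**hh, "ages": borrowed, "imputed": True}

completed = [h if h["ages"] is not None else impute_ages(h, households)
             for h in households]
print(completed[-1])
# e.g. {'block': 12, 'size': 3, 'ages': [34, 36, 34], 'imputed': True}
```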
The Bureau maintains that the imputations were necessary to account for the people its field operations led it to believe had been missed, and that its imputation methods do not introduce statistical bias. As shown in Table 1, the initial A.C.E. results suggested that the differential population undercounts of non-Hispanic blacks and Hispanics—the difference between their undercount estimate and that of the majority group—persisted from the Bureau’s estimates from its 1990 coverage evaluation. Yet they also demonstrated that the Bureau had apparently succeeded in reducing the magnitude of those differences since its evaluation of the 1990 Census. Subsequent revised results published in October 2001 for three race/origin groups indicated that differential undercounts were generally lower than the initial A.C.E. estimates, but that only the undercount estimate for Hispanics was still statistically different from zero. Finally, the latest revised estimates of undercount, reported in March 2003, indicated that of these three major race/origin groups, only the non-Hispanic black and non-Hispanic white percentage undercounts were significantly different from zero, in addition to the national net overcount. Unlike the estimates of census population accuracy, which were revised twice after the initial estimates, the census housing count accuracy estimates have not been revised and are based on the initial A.C.E. data. A subset of those results, including those provided here, was also published in October 2001. This glossary is provided for reader convenience, not to provide authoritative or complete definitions.
Accuracy and Coverage Evaluation (A.C.E.): The Bureau’s program intended to measure coverage error (see below) for the 2000 Decennial Census. The program was to enable the Bureau to more accurately estimate the rate of coverage error via a sample survey of select areas nationwide, and if warranted, to use the results of this survey to adjust census estimates of the population for nonapportionment purposes.
Adjustment: The use of statistical information to adjust official census data.
Housing count: A tally of certain kinds of dwellings, including single-family homes, apartments, and mobile homes, along with demographic information on the inhabitants.
Population count: The headcount of everybody in the nation, regardless of their dwelling.
Differential undercount: The extent that minority groups are over- or undercounted in comparison to other groups in the census.
Coverage evaluation: Statistical studies to evaluate the level and sources of coverage error in censuses and surveys.
Duplicate enumeration: When the census erroneously counts a person more than once.
Residence rules: The rules the Bureau uses to determine where people should be counted.
2010 Census: Cost and Design Issues Need to Be Addressed Soon. GAO-04-37. (Washington, D.C.: January 15, 2004).
2000 Census: Coverage Measurement Programs’ Results, Costs, and Lessons Learned. GAO-03-287. (Washington, D.C.: January 29, 2003).
2000 Census: Complete Costs of Coverage Evaluation Programs Are Not Available. GAO-03-41. (Washington, D.C.: October 31, 2002).
2000 Census: Coverage Evaluation Matching Implemented as Planned, but Census Bureau Should Evaluate Lessons Learned. GAO-02-297. (Washington, D.C.: March 14, 2002).
2000 Census: Coverage Evaluation Interviewing Overcame Challenges, but Further Research Needed. GAO-02-26. (Washington, D.C.: December 31, 2001). 
Evaluations of past censuses show that certain groups were undercounted compared to other groups, a problem known as "coverage error." To address this, the Census Bureau included in its 2000 Census design the Accuracy and Coverage Evaluation Program (A.C.E.) to (1) measure coverage error and (2) use the results to adjust the census, if warranted. However, the Bureau found the A.C.E. results inaccurate and decided not to adjust or plan for adjustment in 2010. Congress asked GAO to determine (1) factors contributing to A.C.E.'s reported failure to accurately estimate census coverage error, and (2) the reliability of the revised coverage error estimates the Bureau subsequently produced. To do this, GAO examined three sets of Bureau research published in March 2001, October 2001, and March 2003 and interviewed Bureau officials. According to senior Bureau officials, increasingly complicated social factors, such as extended families and population mobility, presented challenges for A.C.E., making it difficult to determine exactly where certain individuals should have been counted, thus contributing to the inaccuracy of the coverage error estimates. For example, parents in custody disputes may both have an incentive to claim their child as a resident, but the Bureau used rules for determining where people should be counted--residence rules--that did not account for many of these kinds of circumstances. Other design decisions concerning both A.C.E. and the census also may have created "blind spots" that contributed to the inaccuracy of the estimates. The Bureau has not accounted for the effects of these or other key design decisions on the coverage error estimates, which could hamper the Bureau's efforts to craft a program that better measures coverage error for the next national census. Despite having twice revised A.C.E.'s original coverage error estimates, the Bureau has no reliable estimates of the extent of coverage error for the 2000 Census. While both revisions suggested that the original estimates were inaccurate, in the course of thoroughly reviewing the revisions, the Bureau documented (1) extensive limitations in the revision methodology and (2) an unexpected pattern between the revised estimates and other A.C.E. data, both of which indicated that the revised coverage error estimates may be questionable themselves. Furthermore, when the Bureau published the revised estimates, it did not clearly quantify the impact of these limitations for readers, thus preventing readers from accurately judging the overall reliability of the estimates. It is therefore unclear how A.C.E. information will be useful to the public or policymakers, or how the Bureau can use it to make better decisions in the future.
The abuse of anabolic steroids differs from the abuse of other illicit substances. When users initially begin to abuse anabolic steroids, they typically are not driven by a desire to achieve an immediate euphoria like that which accompanies most abused drugs such as cocaine, heroin, and marijuana. The abuse of anabolic steroids is typically driven by the desire of users to improve their athletic performance and appearance— characteristics that are important to many teenagers. Anabolic steroids can increase strength and boost confidence, leading users to overlook the potential serious and long-term damage to their health that these substances can cause. In addition, the methods and patterns of use for anabolic steroids differ from those of other drugs. Anabolic steroids are most often taken orally or injected, typically in cycles of weeks or months (referred to as “cycling”), rather than continuously. Cycling involves taking multiple doses of anabolic steroids over a specific period of time, stopping for a period, and starting again. In addition, users often combine several different types of anabolic steroids to maximize their effectiveness (referred to as “stacking”). While anabolic steroids can enhance certain types of performance or appearance, when used inappropriately they can cause a host of severe, long-term, and in some cases, irreversible health consequences. The abuse of anabolic steroids can lead to heart attacks, strokes, liver tumors, and kidney failure. In addition, because anabolic steroids are often injected, users who share needles or use nonsterile injection techniques are at risk for contracting dangerous infections, such as HIV/AIDS and hepatitis B and C. There are also numerous side effects that are gender-specific, including reduced sperm count, infertility, baldness, and development of breasts among men; and growth of facial hair, male-pattern baldness, changes in or cessation of the menstrual cycle, and deepened voice among women. There is also concern that teenagers who abuse anabolic steroids may face the additional risk of halted growth resulting from premature skeletal maturation and accelerated puberty changes. The abuse of anabolic steroids may also lead to aggressive behavior and other psychological side effects. Many users report feeling good about themselves while on anabolic steroids, but for some users extreme mood swings also can occur, including manic-like symptoms leading to violence. Some users also may experience depression when the drugs are stopped, which may contribute to dependence on anabolic steroids. Users may also suffer from paranoia, jealousy, extreme irritability, delusions, and impaired judgment stemming from feelings of invincibility. Two national surveys showed increasing prevalence in teenage abuse of steroids throughout the 1990s until about 2002 and a decline since then (see fig. 1). One of these two national surveys, the Monitoring the Future (MTF) survey, is an annual survey conducted by the University of Michigan and supported by NIDA funding. The MTF survey measures drug use and attitudes among students in grades 8, 10, and 12, and asks several questions about the use of and attitudes towards anabolic steroids, such as perceived risk, disapproval, and availability of anabolic steroids. The survey’s questions are designed to assess respondents’ use of steroids in the last 30 days, the past year, and over the course of the respondent’s lifetime. Questions about steroid use were added to the study beginning in 1989. 
The most recent results from this survey showed that in 2006, 2.7 percent of 12th graders said they had used anabolic steroids without a prescription at least once. The second national survey, the Youth Risk Behavior Survey (YRBS), is a biennial survey conducted since 1991 by CDC. The YRBS is part of a surveillance system consisting of national, state, and local surveys of students in grades 9 through 12. These surveys collect information about a wide variety of risk behaviors, including sexual activity and alcohol and drug use. The most recent available national YRBS survey—conducted in 2005—asked one question related to lifetime steroid use without a prescription, which showed that 3.3 percent of 12th graders had used steroids at least once. The MTF and YRBS surveys indicate a low abuse rate for anabolic steroids among teenagers when compared with the abuse rates for other drugs. However, the reported easy availability of steroids and the potential for serious health effects make anabolic steroid abuse a health concern for teenagers, particularly among males. In general, the reported rates of anabolic steroid abuse are higher for males than for females (see fig. 2). Data from the 2006 MTF survey showed that 1.7 percent of teenage males reported abusing anabolic steroids in the past year, as compared with 0.6 percent of females. Data from the 2005 YRBS survey showed that 4.8 percent of high school males reported abusing steroids in their lifetime, as compared with 3.2 percent of females. There are two categories of federally funded efforts that address teenage abuse of anabolic steroids. Efforts are either designed to focus on preventing the abuse of anabolic steroids among teenagers or are broader and designed to prevent substance abuse in general—which can include abuse of anabolic steroids among teenagers. Two programs that received federal research funding for their development and testing, ATLAS and ATHENA, are designed to focus on preventing or reducing teen abuse of anabolic steroids. In addition, there are various research efforts and education and outreach activities that focus on this issue. Two federal grant programs—ONDCP’s Drug-Free Communities Support program and Education’s School-Based Student Drug Testing program—are designed to support state and local efforts to prevent substance abuse in general and may include anabolic steroid abuse among teenagers as part of the programs’ substance abuse prevention efforts. See appendix I for a list of the federally funded efforts discussed below. There are various federally funded efforts—programs, research, and educational activities—that address teenage abuse of anabolic steroids. Some of these efforts are designed to focus on preventing or reducing anabolic steroid abuse among teenagers. As part of our review we identified two programs, the ATLAS and ATHENA programs, which received federal research funding during their development and testing and are designed to focus on preventing the abuse of anabolic steroids among male and female high school athletes, respectively. ATLAS is a student-led curriculum designed to prevent male high school athletes from abusing anabolic steroids and other performance-enhancing substances. The program’s intervention strategy relies on peer pressure and providing information on healthy alternatives for increasing muscle strength and size. 
The ATLAS curriculum is typically delivered during a sport team’s season in a series of 45-minute sessions scheduled at the coaches’ discretion and integrated into the usual team practice activities. The athletes meet as a team in groups of six or eight students with one student functioning as the assigned group leader. Coaches, group leaders, and student athletes all work from manuals and workbooks, which provide brief, interactive activities that focus on drugs used in sports, sport supplements, strength training, sport nutrition, and decision making. The ATHENA program is designed to prevent the abuse of body-shaping substances such as diet pills and anabolic steroids, although abuse of the latter is less common in females than in males. Like ATLAS, the ATHENA curriculum is integrated into a sport team’s usual practice activities and uses workbooks and student group leaders. The ATHENA curriculum takes into account that female athletes are less likely than males to abuse anabolic steroids but are more likely to have problems with eating disorders and to use drugs such as diet pills and tobacco. As a result, ATHENA’s curriculum gives more attention than ATLAS’s to addressing these behaviors. The ATLAS and ATHENA curricula were developed and tested with funding provided by NIDA. From fiscal years 1993 through 2001, NIDA provided more than $3.4 million to fund the research that developed and tested the effectiveness of the ATLAS curriculum. Similarly, from fiscal years 1999 through 2003 NIDA provided $4.7 million in research funding to develop and test the effectiveness of the ATHENA curriculum. While ATLAS and ATHENA were developed and tested with federal funding, the programs are implemented at the local level. Schools in at least 25 states have chosen to implement the programs with local and private funds, and the National Football League and Sports Illustrated magazine together have supported the programs in more than 70 schools nationwide. In addition to the ATLAS and ATHENA programs, there are various federally funded research efforts that focus on preventing or reducing anabolic steroid abuse among teenagers. NIDA has funded several research projects examining the factors that influence teenagers to abuse anabolic steroids and the effectiveness of interventions used to prevent teenage steroid abuse. From fiscal years 2000 through 2006, NIDA awarded nearly $10.1 million in grants to support an average of four research projects each year related to anabolic steroid abuse with a specific focus on adolescents. In fiscal year 2006, for example, NIDA awarded a total of nearly $638,000 to three research projects that examined risk factors for anabolic steroid abuse among teenagers or the effects of steroid abuse in this population. Like NIDA, the United States Anti-Doping Agency (USADA)—an independent, nonprofit corporation funded primarily by ONDCP—supports research related to the abuse of anabolic steroids and other performance-enhancing drugs by athletes, including teenage athletes. In fiscal year 2006, USADA spent $1.8 million for research, and an ONDCP official estimated that about one-third of that research funding was directed to anabolic steroids and another performance-enhancing drug, human growth hormone. In addition to research, there are various education and outreach activities that focus on preventing anabolic steroid abuse among teenagers. Many of these efforts have been supported by NIDA. 
Since 2000, NIDA has provided nearly $500,000 in funding for a variety of education and outreach efforts in support of this goal. For example, in April 2000, in response to an upward trend in steroid abuse among students, NIDA launched a multimedia educational initiative intended to prevent anabolic steroid abuse among teenagers. Working with several national partners, including the National Collegiate Athletic Association, the American College of Sports Medicine, and the American Academy of Pediatrics, the initiative produced a Web site, a research report on steroid abuse, and postcard-sized messages about steroids for placement in gyms, movie theaters, shopping malls, bookstores, and restaurants in selected areas. By 2007, NIDA funding for this particular initiative totaled about $124,000. In addition to NIDA, other federal agencies and organizations have supported educational and outreach activities that focus on preventing anabolic steroid abuse among teenagers, as the following examples illustrate. ONDCP has funded six informational briefings since 2001 to encourage journalists, entertainment writers, and producers to accurately cover anabolic steroids and drug abuse among teenage athletes. ONDCP also has Web sites for teens and parents with information about anabolic steroids and links to NIDA resources. Since 2003, USADA has produced written publications and annual reports on anabolic steroid abuse and has distributed those publications through high schools and state high school associations. In addition, some USADA public service announcements intended to air during televised sports events and movie trailers have targeted anabolic steroid abuse. In fiscal years 2007 and 2008, SAMHSA expects to spend a total of $99,000 under a contract to develop and disseminate educational materials addressing the abuse of anabolic steroids by adolescent athletes. These materials, which are intended for use by high school athletic and health science departments, include brochures, a video, and 10 high school outreach seminars. As part of our review, we identified two federal grant programs that are designed to support state and local efforts to prevent various forms of substance abuse, efforts that may include teenagers. Grantees of these programs may address teenage anabolic steroid abuse as part of the programs’ general substance abuse prevention efforts. The Drug-Free Communities Support program, funded by ONDCP and administered by SAMHSA under an interagency agreement, provides grants to community coalitions to address drug abuse problems identified in their communities. Many community coalitions choose to implement school-based drug prevention programs with their grant funding and are allowed to tailor these programs to address the drug prevention needs of their communities. In 2007, about one-quarter of more than 700 grantees reported that they were addressing steroid abuse as one of their program’s objectives. Each community coalition is eligible for grants of up to $125,000 per year, renewable for up to 4 more years, that require dollar-for-dollar community matching funds. In 2007, the Drug-Free Communities Support program is providing about $80 million in grants to 709 community coalitions for drug prevention activities based on the needs of the communities. 
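Taken together, the grant terms above imply a ceiling on what any one coalition can receive; the arithmetic below is derived from the figures in this report and is our illustration, not an official program calculation.

```latex
% Worked arithmetic from the grant terms above (our illustration, not a program figure).
\[
(1 + 4)\ \text{years} \times \$125{,}000\ \text{per year} = \$625{,}000\ \text{maximum in federal funds},
\]
\[
\$625{,}000\ \text{federal} + \$625{,}000\ \text{required local match} = \$1.25\ \text{million maximum total over 5 years}.
\]
```

Consistent with that annual ceiling, the roughly $80 million awarded to 709 coalitions in 2007 works out to an average award of about $113,000 per coalition.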
Another federal grant program that supports substance abuse prevention efforts for teenagers and that may also include efforts to address anabolic steroid abuse in this population is the School-Based Student Drug Testing program in Education’s Office of Safe and Drug-Free Schools. Since 2003, this program has provided grants to school districts and public and private entities to establish school-based drug-testing efforts. For fiscal years 2003 through 2007, the Office of Safe and Drug-Free Schools awarded $32.2 million in grants to 87 individual School-Based Student Drug Testing grantees. According to information provided in the grantees’ grant applications, 34 of the grantees (representing 180 middle, junior, and high schools and at least 70,000 students) proposed using their grant-supported drug testing to test for anabolic steroids in addition to other substances such as amphetamines, marijuana, and cocaine. Education officials told us that although grantees generally identify the drugs for which they are testing in their annual performance reports, there has been no independent verification by Education staff that confirms that the 34 grantees actually have implemented anabolic steroid testing or whether additional grantees have included steroid testing in their efforts. Of the 16 studies we reviewed, nearly half focused on linking certain risk factors and behaviors to teenagers’ abuse of anabolic steroids, including the use of other drugs, risky sexual behaviors, and aggressive behaviors. Most of the other studies we reviewed were assessments of the ATLAS and ATHENA prevention programs and in general suggested that the programs may reduce abuse of anabolic steroids and other drugs among high school athletes immediately following participation in the programs. Appendix II is a list of the articles we reviewed. Almost half of the studies we reviewed identified certain risk factors and behaviors linked to the abuse of anabolic steroids among teenagers. Risk factors, such as antisocial behavior, family violence, and low academic achievement, are linked to youths’ likelihood of engaging in risky behaviors, including drug abuse. Several studies found that the use of alcohol and other drugs—such as tobacco, marijuana, and cocaine—is associated with the abuse of anabolic steroids among teenagers, including teenage athletes and non-athletes. One 2005 study found that the use of other drugs was more likely to predict anabolic steroid abuse than participation in athletic activities. Several studies we reviewed found no difference between athletes and non-athletes in their abuse of anabolic steroids, and one 2007 study of teenage girls found that female athletes were less likely than female non-athletes to abuse anabolic steroids. A few studies we reviewed found a positive correlation between anabolic steroid abuse and risky sexual behaviors such as early initiation of sexual activity and an increased number of sexual partners. Some studies found that aggressive behaviors such as fighting were related to anabolic steroid abuse by both males and females. Moreover, one 1997 study found that adolescents (both male and female) who reported abusing anabolic steroids in the past year were more likely to be perpetrators of sexual violence. However, the cause-and-effect relationships between anabolic steroid abuse and other risky behaviors, such as violence, have not been determined. 
About half of the studies we reviewed were assessments of the ATLAS and ATHENA prevention programs, and in general these studies suggested that these programs may reduce abuse of anabolic steroids and other drugs among high school athletes immediately following participation in the programs. Researchers assessing the ATLAS program reported that both the intention to abuse anabolic steroids and the reported abuse of steroids were lower among athletes who participated in the ATLAS program than among athletes who did not participate in the program. The most recent study found that although the intention to abuse anabolic steroids remained lower at follow-up 1 year later for athletes who participated in the ATLAS program, the effectiveness of the program in reducing reported use diminished with time. Similarly, researchers assessing the ATHENA program found that girls who participated in the program reported less ongoing and new abuse of anabolic steroids as well as a reduction in the abuse of other performance-enhancing and body-shaping substances. The authors note that these results are short term, and the long-term effectiveness of the ATHENA program is not known. The authors of the one study in our review that looked at student drug-testing programs found that the abuse of anabolic steroids and other illicit drugs and performance-enhancing substances was lower among athletes at schools that implemented mandatory, random drug-testing programs. However, this group of athletes also showed an increase in risk factors that are generally associated with greater abuse of illicit drugs, including anabolic steroids. For example, athletes at schools with drug-testing programs were more likely to believe that peers and authority figures were more tolerant of drug abuse, had less belief in the negative consequences of drug abuse, and had less belief in the efficacy of drug testing. Based on these seemingly inconsistent results, the study’s authors called for caution in interpreting the findings. Experts identified gaps in the research that addresses anabolic steroid abuse among teenagers. They identified gaps in the current research on the outcomes of prevention programs that focus on anabolic steroids, as well as in the research on the long-term health effects of initiating the abuse of anabolic steroids as a teenager. According to experts, available research does not establish the extent to which the ATLAS and ATHENA programs are effective over time in preventing anabolic steroid abuse among teenage athletes. Experts acknowledge that both programs appear promising in their ability to prevent the abuse of anabolic steroids among teenage athletes immediately following participants’ completion of the programs. Assessment of the effectiveness of the ATLAS program 1 year later, however, found that the lower incidence of anabolic steroid use was not sustained, although participants continued to report reduced intentions to use anabolic steroids. The long-term effectiveness of the ATHENA program has not been reported. The effectiveness of these programs has been assessed only in some schools in Oregon, and therefore experts report that the effectiveness of the programs may not be generalizable. In another example, experts identified the need for additional research to assess the effectiveness of drug-testing programs, such as those funded under Education’s School-Based Student Drug Testing program, in reducing anabolic steroid abuse among teenagers. 
According to experts, there are several gaps in research on the health effects of teenage abuse of anabolic steroids. Experts report that while there is some research that has examined the health effects of anabolic steroid abuse among adults—for example, the harmful effects on the cardiovascular, hormonal, and immune systems—there is a lack of research on these effects among teenagers. There is also a lack of research on the long-term health effects of initiating anabolic steroid abuse during the teenage years. Some health effects of steroid abuse among adults, such as adverse effects on the hormonal system, have been shown to be reversible when the adults have stopped abusing anabolic steroids. Experts point out, however, that it is not known whether this reversibility holds true for teenagers as well. While some experts suggest that anabolic steroid abuse may do more lasting harm to teenagers, due to the complex physical changes unique to adolescence, according to other experts there is no conclusive evidence of potentially permanent health effects. Experts also report that the extent of the psychological effects of anabolic steroid abuse and, in particular, of withdrawal from steroid abuse, is unclear due to limited research. Some experts we consulted noted a need to better inform primary care physicians and pediatricians about anabolic steroid abuse among teenagers, so these providers would be better able to recognize steroid abuse in their patients and initiate early intervention and treatment. We provided a draft of this report to HHS and Education for comment and received technical comments only, which we incorporated into the report as appropriate. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Health and Human Services and to the Secretary of Education. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-7114 or ekstrandl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix III. Table 1 lists selected federally funded efforts—including programs, research, and educational and outreach activities—that are designed to focus on preventing or reducing the abuse of anabolic steroids by teenagers (focused efforts), as well as other broader efforts that may address teenage abuse of anabolic steroids as part of the programs’ general substance abuse prevention efforts. The list includes programs funded by two departments and the Office of National Drug Control Policy (ONDCP), in the Executive Office of the President. Borowsky, I.W., M. Hogan, and M. Ireland. “Adolescent sexual aggression: risk and protective factors.” Pediatrics, vol. 100, no. 6 (1997): e71-e78. Dukarm, C.P., R.S. Byrd, P. Auinger, and M. Weitzman. “Illicit substance use, gender, and the risk of violent behavior among adolescents.” Archives of Pediatric & Adolescent Medicine, vol. 150, no. 8 (1996): 797-801. DuRant, R.H., L.G. Escobedo, and G.W. Heath, “Anabolic-steroid use, strength training, and multiple drug use among adolescents in the United States.” Pediatrics, vol. 96, no. 
1 (1995): 23-28. Elliot, D., J. Cheong, E.L. Moe, and L. Goldberg. “Cross-sectional study of female students reporting anabolic steroid use.” Archives of Pediatric & Adolescent Medicine, vol. 161, no. 6 (2007): 572-577. Elliot, D., and L. Goldberg. “Intervention and prevention of steroid use in adolescents.” American Journal of Sports Medicine, vol. 24, no. 6 (1996): S46-S47. Elliot, D.L., L. Goldberg, E.L. Moe, C.A. DeFrancesco, M.B. Durham, and H. Hix-Small. “Preventing substance use and disordered eating: Initial outcomes of the ATHENA (Athletes Targeting Healthy Exercise and Nutrition Alternatives) program.” Archives of Pediatric & Adolescent Medicine, vol. 158, no. 11 (2004): 1043-1049. Elliot, D.L., E.L. Moe, L. Goldberg, C.A. DeFrancesco, M.B. Durham, and H. Hix-Small. “Definition and outcome of a curriculum to prevent disordered eating and body-shaping drug use.” The Journal of School Health, vol. 76, no. 2 (2006): 67-73. Fritz, M.S., D.P. MacKinnon, J. Williams, L. Goldberg, E.L. Moe, and D.L. Elliot. “Analysis of baseline by treatment interactions in a drug prevention and health promotion program for high school male athletes.” Addictive Behaviors, vol. 30, no. 5 (2005): 1001-1005. Goldberg, L., D. Elliot, G.N. Clarke, D.P. MacKinnon, E. Moe, L. Zoref, E. Greffrath, D.J. Miller, and A. Lapin. “Effects of a multidimensional anabolic steroid prevention intervention: the Adolescents Training and Learning to Avoid Steroids (ATLAS) program.” JAMA, vol. 276, no. 19 (1996): 1555- 1562. Goldberg, L., D. Elliot, G.N. Clarke, D.P. MacKinnon, L. Zoref, E. Moe, C. Green, and S.L. Wolf. “The Adolescent Training and Learning to Avoid Steroids (ATLAS) prevention program: background and results of a model intervention.” Archives of Pediatric & Adolescent Medicine, vol. 150 (1996): 713-721. Goldberg, L., D.L. Elliot, D.P. MacKinnon, E. Moe, K.S. Kuehl, L. Nohre, and C.M. Lockwood. “Drug testing athletes to prevent substance abuse: Background and pilot study results of the SATURN (Student Athlete Testing Using Random Notification) study.” Journal of Adolescent Health, vol. 32, no. 1 (2003): 16-25. Goldberg, L., D.P. MacKinnon, D.L. Elliot, E.L. Moe, G. Clarke, and J. Cheong. “The Adolescents Training and Learning to Avoid Steroids Program: Preventing drug use and promoting health behaviors.” Archives of Pediatric & Adolescent Medicine, vol. 154, no. 4 (2000): 332-338. MacKinnon, D.P., L. Goldberg, G. Clarke, D.L. Elliot, J. Cheong, A. Lapin, E.L. Moe, and J.L. Krull. “Mediating mechanisms in a program to reduce intentions to use anabolic steroids and improve exercise self-efficacy and dietary behavior.” Prevention Science, vol. 2, no. 1 (2001): 15-28. Miller, K.E., J.H. Hoffman, G.M. Barnes, D. Sabo, M.J. Melnick, and M.P. Farrell. “Adolescent anabolic steroid use, gender, physical activity, and other problem behaviors.” Substance Use & Misuse, vol. 40, no. 11 (2005): 1637-1657. Naylor, A.H., D. Gardner, and L. Zaichkowsky. “Drug use patterns among high school athletes and nonathletes.” Adolescence, vol. 36, no. 144 (2001): 627-639. Rich, J.D., C.K. Foisie, C.W. Towe, B.P. Dickinson, M. McKenzie, and C.M. Salas. “Needle exchange program participation by anabolic steroid injectors.” Drug and Alcohol Dependence, vol. 56, no. 2 (1999): 157-160. Laurie Ekstrand, at (202) 512-7114 or ekstrandl@gao.gov. In addition to the contact named above, key contributors to this report were Christine Brudevold, Assistant Director; Ellen M. Smith; Julie Thomas; Rasanjali Wickrema; and Krister Friday.
The abuse of anabolic steroids by teenagers--that is, their use without a prescription--is a health concern. Anabolic steroids are synthetic forms of the hormone testosterone that can be taken orally, injected, or rubbed on the skin. Although a 2006 survey funded by the National Institute on Drug Abuse (NIDA) found that less than 3 percent of 12th graders had abused anabolic steroids, it also found that about 40 percent of 12th graders described anabolic steroids as "fairly easy" or "very easy" to get. The abuse of anabolic steroids can cause serious health effects and behavioral changes in teenagers. GAO was asked to examine federally funded efforts to address the abuse of anabolic steroids among teenagers and to review available research on this issue. This report describes (1) federally funded efforts that address teenage abuse of anabolic steroids, (2) available research on teenage abuse of anabolic steroids, and (3) gaps or areas in need of improvement that federal officials and other experts identify in research that addresses teenage anabolic steroid abuse. To do this work, GAO reviewed federal agency materials and published studies identified through a literature review and interviewed federal officials and other experts. There are two categories of federally funded efforts that address teenage abuse of anabolic steroids. Efforts are either designed to focus on preventing the abuse of anabolic steroids among teenagers or are broader and designed to prevent substance abuse in general--which can include abuse of anabolic steroids among teenagers. Two programs that received federal funding during their development and testing, Athletes Training and Learning to Avoid Steroids (ATLAS) and Athletes Targeting Healthy Exercise & Nutrition Alternatives (ATHENA), are designed to focus on preventing or reducing teen abuse of anabolic steroids through use of gender-specific student-led curricula. In addition, there are various research efforts and education and outreach activities that focus on this issue. Two federal grant programs--the Office of National Drug Control Policy's Drug-Free Communities Support program and the Department of Education's School-Based Student Drug Testing program--are designed to support state and local efforts to prevent substance abuse in general and may include anabolic steroid abuse among teenagers as part of the programs' substance abuse prevention efforts. In 2007, about one-quarter of more than 700 Drug-Free Communities Support program grantees reported that they were addressing steroid abuse as one of their program's objectives. Almost half of the 16 studies GAO reviewed identified certain risk factors and behaviors linked to the abuse of anabolic steroids among teenagers. Several of these studies found connections between anabolic steroid abuse and risk factors such as use of other drugs, risky sexual behaviors, and aggressive behaviors. Most of the other studies were assessments of the ATLAS and ATHENA prevention programs and in general suggested that the programs may reduce abuse of anabolic steroids and other drugs among high school athletes immediately following participation in the programs. Experts identified gaps in the research addressing teenage abuse of anabolic steroids. Experts identified a lack of conclusive evidence of the sustained effectiveness over time of available prevention programs, for example at 1 year following participants' completion of the programs. 
Experts also identified gaps in the research on the long-term health effects of initiating anabolic steroid abuse as a teenager--including research on effects that may be particularly harmful in teens--and in research on psychological effects of anabolic steroid abuse.
The natural gas and electricity industries perform three primary functions in delivering energy to consumers: (1) producing the basic energy commodity, (2) transporting the commodity through pipelines or over power lines, and (3) distributing the commodity to the final consumer. A range of federal, state, and local entities regulate different aspects of these functions. While generation siting, intrastate transportation, and retail sales are generally regulated by state or local entities, wholesale sales and interstate transportation generally fall under federal regulation, primarily by FERC. Under federal law, FERC is responsible for ensuring that the terms, conditions, and rates for the interstate transportation of natural gas and electricity, certain sales for resale of natural gas, and wholesale sales of electricity in interstate commerce are "just and reasonable." Other federal agencies also play an important role in regulating energy markets. For example, the Commodity Futures Trading Commission regulates commodity futures and options markets in the United States and protects market participants against manipulation, abusive trade practices, and fraud. For nearly a century, the natural gas and electricity industries were regulated as natural monopolies and dominated by relatively few large public utilities that produced, transported, and sold natural gas and electricity to the ultimate users. This monopoly structure controlled the entry, prices, and profits of industry participants. Under this regulatory framework, FERC established individual utilities' terms, conditions, and rates for transportation and wholesale sale of natural gas and electricity in interstate commerce. To ensure that the rates these utilities charged were just and reasonable, FERC based the rates on the utilities' cost to provide the service plus a fair return on investment, which is generally referred to as cost-of-service regulation. With technological, economic, and policy developments over the past two to three decades, these industries have undergone a transition—commonly known as "restructuring"—from this highly regulated environment to one that places greater reliance on competition to determine entry, prices, and profits. Natural gas was first to make the shift, facilitated by passage of the Natural Gas Policy Act of 1978 and subsequent FERC orders in 1985 and 1992 that opened pipeline transportation to all on equal terms and required pipeline companies to completely separate or "unbundle" their transportation, storage, and sales services. As a result, natural gas became a commodity bought and sold separately from its transportation. The electricity industry has experienced similar restructuring, starting about the same time but evolving more slowly than the natural gas industry. The Public Utility Regulatory Policies Act in 1978 introduced competition by requiring electric utilities to buy electricity produced by nonutility electric power generators. Then in 1992, the Congress passed the Energy Policy Act, authorizing FERC to require utilities, on a case-by-case basis, to allow competitors to use their transmission lines for wholesale sales of electricity. In 1996, FERC ordered that electric transmission systems be opened to all qualified wholesale buyers and sellers of electric energy. FERC also required utilities to "functionally unbundle" their generation and transmission businesses to prevent discriminatory practices, such as not allowing competitors equal access to transmission lines.
One option FERC provided the utilities to help them achieve unbundling was to transfer management of their transmission lines to an independent system operator (ISO) that would manage the system without any special interests and for all users’ benefit. Since 1996, six ISOs have formed and are operating, each with its own set of operating rules. Of these, four ISOs—California; New England; New York; Pennsylvania, New Jersey, and Maryland Interconnect (PJM)—operate interstate wholesale electricity markets in which electricity suppliers and buyers submit bids to sell and buy power. In 1999, FERC issued an order encouraging all privately owned electric utilities to voluntarily place their transmission facilities under the control of a broader market entity called a regional transmission organization (RTO). As a result, ISOs created under a previous FERC order would be supplanted by larger RTOs, which would cover the entire nation. The rationale behind FERC’s approach to forming RTOs was that the nation’s transmission systems should be brought under regional control in order to eliminate the remaining discriminatory practices in use, better meet the increasing demands placed on the transmission system, improve management of system congestion and reliability, and achieve fully competitive wholesale power markets. FERC is in the process of trying to establish these organizations to cover the continental United States and has currently approved two RTOs—Midwest ISO and PJM. In approving the formation and operation of ISOs and RTOs, FERC requires these organizations to, among other things, establish market monitoring units. These units are to provide for objective monitoring of the markets operated by the ISO or RTO to identify market design flaws, market power abuses, and opportunities for efficiency improvement. The market monitoring units of four ISOs or RTOs—California, New England, New York, and PJM—have been operating for several years under FERC’s approval. FERC has also approved a market monitoring unit for the Midwest ISO, but Midwest does not currently operate a centralized power market (it plans to do so by December 2003). FERC approves the units’ market monitoring plans and requires the units to periodically report on their monitoring activities. In July 2002, FERC issued a notice of proposed rulemaking to provide a standard market design for all electric transmission providers. FERC’s fundamental goal in this initiative is to create “seamless” wholesale electricity markets, nationwide, that allow sellers to transact easily across transmission boundaries and allow customers to receive the benefits of a lower cost and more reliable electricity supply. Accordingly, FERC’s standard market design proposal contains a wide range of rules to standardize the structure and operation of wholesale electricity markets and transmission services. Among other things, it (1) describes the rules for how a portion of the nation’s electricity will be exchanged in organized markets, (2) defines a new transmission service, and (3) establishes new market power mitigation and monitoring requirements. The proposal has been highly controversial. FERC estimates that the proposed standard market design rule has generated about 1,000 sets of formal comments reflecting concerns and reservations about the scope and details of the proposal. In April 2003, FERC issued a white paper explaining how it intends to change its proposal in response to the comments and concerns that had been raised. 
When the white paper was issued, FERC expected the final rule to be promulgated later in the year. However, in commenting on our draft report, FERC said that it is planning to hold technical conferences in different regions of the country this fall and has postponed the issuance of any final rule. With the opening of pipelines and transmission lines, other energy producers and marketers began to compete with the traditional utilities to the point that a complex structure of formal and informal primary and secondary energy markets has evolved. As competition has increased, FERC has allowed more and more producers and marketers to sell their energy at prices determined in the marketplace. This evolution to competitive energy markets is requiring FERC to fundamentally change how it does business. With the shift to market-based prices for natural gas and electricity, FERC has concluded that its approach to ensuring just and reasonable prices has to change: from one of reviewing individual companies’ rate requests and supporting cost data to one of proactively monitoring energy markets to ensure that they are working well to produce competitive prices. FERC established OMOI to coordinate and bring about this shift in the agency’s energy market oversight efforts. Like the agency’s other major offices, OMOI reports directly to the Chairman of FERC (see fig. 1). OMOI has organized its staff into eight divisions, which are grouped into three main units: (1) Market Oversight and Assessment, (2) Investigations and Enforcement, and (3) Management and Communication (see shaded area of fig. 1). The Market Oversight and Assessment unit performs a variety of tasks related to monitoring energy markets, monitoring financial markets, researching new data sources, publishing reports on market surveillance, and assisting with ongoing investigations. The Investigations and Enforcement unit performs a variety of tasks related to investigating market abuse, conducting audits of entities under FERC’s jurisdiction, and manning the enforcement hotline. Finally, the Division of Management and Communication and the OMOI director’s office provide administrative and management support. OMOI’s budget request for fiscal year 2003 is about $13.5 million and provides funding for 110 staff years, which includes $500,000 in contracting services. For fiscal year 2004, FERC has requested a budget for OMOI of about $14.3 million and 110 staff years, which includes $1 million in contracting services. With the formation of OMOI, FERC is making headway in establishing an oversight and enforcement capability for competitive energy markets. OMOI has taken a significant step forward in setting out its vision, mission, and primary functions as a framework for comprehensively overseeing the markets; developing its basic work processes; and beginning to use an array of tools to oversee the markets. The office also has almost completed its staffing to authorized levels. Nonetheless, these efforts are largely in their formative stage and OMOI continues to hire additional staff, improve its oversight tools, and adjust its processes and procedures. Additional actions to formalize the office’s work processes and procedures and to more clearly define its role would help ensure that its efforts to oversee energy markets are systematic and comprehensive. In addition, OMOI’s role largely determines its resource, information, technology, and staff skill mix needs. 
OMOI’s statements of its vision, mission, and functions set out the framework for a comprehensive market oversight and enforcement approach. According to the statements, OMOI plans to analyze and assess both market performance and market rules in the broader context of the markets’ overall efficiency and effectiveness and market behavior and compliance with rules at the individual market participant level. (See table 1.) These statements were a starting point for planning and organizing OMOI’s activities and serve to provide a concise, if general, outline of the office’s planned oversight and enforcement approach. OMOI decided to begin operating under this broad framework and to work out more details as it became more organized and gained experience with the markets and available data and oversight tools. At this point, OMOI has not provided additional details in writing on how it will carry out these functions to achieve its mission. However as OMOI moves forward, several issues have not been addressed that are important to the office’s credibility and to ensuring that it comprehensively carries out its planned approach. Recognizing that responsibility for making energy markets work well is shared with the industries (including the ISOs and RTOs and their market monitoring units), individual market participants, the states, other FERC offices, and other federal agencies, it is important that OMOI clearly define its role in achieving comprehensive oversight of the markets. This role and how OMOI will carry it out largely determines its resource, information, technology, and staff skill mix needs. First, OMOI has not directly and clearly connected its vision, mission, and functions to FERC’s statutory responsibilities for ensuring that wholesale natural gas and electricity prices are just and reasonable. Second, OMOI has not defined undue exercise of market power, although identifying and addressing the exercise of market power is one of the major aspects of market oversight, especially when the markets are in transition. Third, the statements do not explicitly recognize that an important function of the office will be to integrate its work with that of the industries’ market monitoring units, other agencies such as the Commodity Futures Trading Commission, and other parts of FERC, such as the Office of Markets, Tariffs, and Rates. Fourth, OMOI is still deciding at what level of detail it will review market transactions as it performs its oversight. Fifth, the office has not developed outcome or results-oriented performance measures that express what the office will be working to achieve and that can be used to assess its progress in carrying out its goals and objectives. FERC is responsible for ensuring that certain sales for resale of natural gas and wholesale sales of electricity in interstate commerce are just and reasonable. With the move to competitive markets, these prices are generally determined in the marketplace rather than set by FERC. FERC has recognized that this change means that it needs a new approach to ensuring that prices are just and reasonable and has begun to provide some guidance on what just and reasonable means in the context of competitive markets, most recently in its proposed rule on standardizing electricity markets. Statements in the proposed rule indicate that just and reasonable prices are those produced by structurally competitive markets. However, the statements do not define what a structurally competitive market is. 
In addition, these statements concern the operations of market monitoring units rather than FERC’s own role and responsibilities. Furthermore, the proposed rule has been highly controversial and may be substantially delayed and/or modified. The heads of the market monitoring units told us they recognize the difficulty of defining just and reasonable prices. They also said that they believe FERC had made progress in doing so. However, they generally believed that FERC had not yet gone far enough. For example, the head of the California ISO’s monitoring unit told us that for FERC to define what it will consider just and reasonable prices in a competitive marketplace is critical to achieving the Federal Power Act’s goal. She stated that a clear standard for just and reasonable is also critical to performing monitoring and oversight functions, and, without such a standard, existing ISOs or RTOs cannot move forward and other geographical areas will have no confidence in ISOs or RTOs and will not wish to develop them. On the other hand, the heads of the ISO New England, PJM, and New York ISO monitoring units stated that FERC should not develop overly detailed or prescriptive definitions that would reduce needed flexibility. The heads of the PJM and ISO New England units said that FERC should instead develop a strong policy statement or paper defining the term at a general or theoretical level and leave it to the market monitoring units to operationalize or put it into practice. Similarly, the head of the New York ISO unit cautioned that, with overly prescriptive criteria, market participants can structure behavior to avoid specific rules, conditions, or definitions, while engaging in behavior that would not be deemed acceptable. He added that, in orders that it has issued in individual cases, FERC has established precedent that prices can be considered just and reasonable when they are the product of workably competitive markets, and determining whether a market is competitive requires some room for considering individual circumstances. FERC officials, including OMOI managers, told us that they recognize the importance of defining just and reasonable prices in a competitive energy marketplace but are finding it difficult to do. For example, the Senior Energy Policy Advisor to the Chairman of FERC told us that FERC has been trying to define the term for several years. The Director of OMOI said that OMOI has the operational responsibility to give guidance on just and reasonable rates, but agreed that an important consideration for the agency is the level of detail at which it needs to be defined. In its proposed standard market design rulemaking, FERC provides some details on what it considers market power. In the proposed rule, FERC states that market power is the ability to raise price above the competitive level. The agency further states that identifying market power with precision is difficult, both because it is difficult to identify the competitive price (which should recover both fixed and variable costs over the long run) and because it can be difficult to isolate the impact of one entity on the competitive market. FERC adds that, in the proposed rule, it is incorporating the concept of when to intervene in the markets, rather than defining what constitutes market power. The market monitoring units would review market data, such as bidding patterns, to identify and intervene in market situations in which market power could be occurring. 
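To make the idea of a price above the competitive level more concrete, one screen commonly used in the economics literature is a price-cost markup, often called the Lerner index, which compares the observed price with an estimate of the marginal cost of serving demand. The sketch below is a minimal illustration of such a screen under stated assumptions; it is not FERC's test, nor any market monitoring unit's actual methodology, and the function names, threshold, and hourly figures are hypothetical. As FERC's proposed rule notes, estimating the competitive (marginal cost) benchmark is itself the hard part.

```python
# Illustrative only: a simple price-cost markup (Lerner index) screen.
# Not FERC's or any market monitoring unit's actual test; the threshold
# and hourly data are hypothetical.

def lerner_index(price: float, marginal_cost: float) -> float:
    """Return (price - marginal cost) / price, a common markup measure."""
    return (price - marginal_cost) / price

def flag_high_markup_hours(hours, threshold=0.2):
    """Flag hours whose markup over the estimated competitive cost exceeds the threshold."""
    flagged = []
    for hour, price, est_marginal_cost in hours:
        if lerner_index(price, est_marginal_cost) > threshold:
            flagged.append((hour, price, est_marginal_cost))
    return flagged

if __name__ == "__main__":
    # (hour, observed price in $/MWh, estimated marginal cost in $/MWh)
    hourly_data = [
        ("2003-02-25 17:00", 42.0, 38.0),
        ("2003-02-25 18:00", 95.0, 41.0),   # large markup: candidate for closer review
        ("2003-02-25 19:00", 47.0, 44.0),
    ]
    for hour, price, cost in flag_high_markup_hours(hourly_data):
        print(f"{hour}: price {price} vs. estimated cost {cost} warrants review")
```

A screen of this kind only points analysts toward hours or bids that warrant a closer look; as the monitoring unit heads observe, judgment about individual market circumstances is still required.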
In its April 2003 white paper explaining the changes it planned to make in the final standard market design rule, FERC said that it would require the ISOs and RTOs to have clear and enforceable rules to define and police market manipulation and gaming strategies by market participants trying to unduly exercise their market power. The white paper also said that the ISOs and RTOs would be required to have a clear set of rules governing market participant conduct with the consequences for violations clearly spelled out. The white paper then identified areas of anticompetitive behavior—such as physical and economic withholding of supplies—that, at a minimum, should be included. Again, the heads of the market monitoring units did not believe that FERC had yet gone far enough in defining market power. The head of the California ISO monitoring unit said that FERC needs to define what it will consider market power, and that the definition must be agreed on in order for FERC to perform its market development and oversight obligations. She added that inappropriate or anticompetitive behavior need not be defined through an exhaustive list of specific market behaviors but rather through a general set of characteristics. According to the heads of the market monitoring units of ISO New England and PJM RTO, FERC should develop a policy statement or paper on market power rather than a highly detailed definition. In contrast, the head of the New York ISO's market monitoring unit stated that FERC needs to define what it will consider market power only in the context of specific market monitoring proposals. He told us that his market monitoring unit has been very successful in preventing the exercise of market power by using very specific tests for market power abuse, enabling the unit to take appropriate action with minimal delay. He added that FERC should not adopt generic definitions that would restrict the ability of the New York ISO to implement the tests and market mitigation measures that have been approved for its use. The Director of OMOI agreed that a clarifying definition of market power to communicate the parameters of acceptable market behavior is needed. He added that developing such a definition is complex and that his office has to be careful in deciding what constitutes market power abuse, because determining what is and what is not abuse in individual cases involves a necessary element of judgment that should not be eliminated by a definition.
For example, OMOI officials stated that OMOI, along with the Commodity Futures Trading Commission and the Department of Justice, would be responsible for detecting the false reporting of natural gas prices or volumes to index publishers (see app. III). Moreover, as we discuss later in this report, OMOI is relying heavily on the market monitoring units to oversee electricity markets. OMOI has various initiatives under way to build its working relationship with these parties. However, because of the market monitoring units' substantial role in its market oversight approach, OMOI has devoted considerable attention to improving its working relationship with these units, and its efforts with respect to other FERC offices and other federal agencies are in the early stages. For example, in responding to our survey, about 50 percent of OMOI managers and staff expressed dissatisfaction with communication with other FERC offices. (About 26 percent were satisfied, about 14 percent were equally satisfied and dissatisfied, and 10 percent had no basis to judge.) In providing more detailed responses, several OMOI staff indicated that they thought communication and cooperation between OMOI and FERC's Office of Markets, Tariffs, and Rates was a problem. In addition, the head of a market monitoring unit told us that he has had to inform OMOI staff about the issuance of orders initiated in other FERC offices. The Director of OMOI told us that he understands these concerns about the office's working relationship with other FERC offices. According to the Director, OMOI wanted to first get its "act together" before reaching out to the other offices. He added that OMOI has been working hard the past couple of months with FERC's Office of Markets, Tariffs, and Rates and Office of the General Counsel to establish more formal connections. OMOI officials told us that they plan to coordinate their work with other federal agencies to better incorporate their knowledge and views about related market activities. For example, the Director of OMOI's Division of Financial Market Assessment told us that he would like to develop a formal information-sharing arrangement with the Commodity Futures Trading Commission so that OMOI has better access to financial information. He said that while FERC has limited jurisdiction over financial markets, OMOI wants to monitor these markets because the financial marketplace affects the health of wholesale electricity and natural gas markets. According to OMOI officials, they have regularly scheduled meetings with the Federal Trade Commission and the Department of Justice to discuss overlapping issues, specifically focusing on antitrust and market manipulation practices. As we previously reported, events such as the collapse of the Enron Corporation bring to light the importance of clarifying jurisdiction across the federal government as restructuring progresses. As we pointed out, effective coordination between FERC and the Commodity Futures Trading Commission is particularly important because of jurisdictional uncertainties regarding the oversight of on-line trading activities, such as those previously operated by Enron. We also noted that in a Senate Governmental Affairs report and memorandum, and other congressional hearings, both FERC and the Securities and Exchange Commission have been questioned about their lack of diligence in following through on Enron's activities—even though they had indications of improper conduct.
The report commented that effective coordination between agencies prevents companies from exploiting the lack of oversight in areas where neither agency may have taken full responsibility. According to OMOI, its primary functions are to assess market performance, ensure conformance with Commission rules, and produce internal and external reports on the results. This description of its functions is general and broad at this point. According to OMOI's Deputy Director for Market Oversight and Assessment, as the office builds its capability, it must decide at what level of detail it should monitor the markets. This issue largely centers on whether OMOI should operate at a high level—that is, assess the markets' overall performance and major outcomes, such as competitiveness, supply, and price, and leave the detailed monitoring to the market monitoring units—or "get down in the weeds" to review market transactional data as market monitoring units do for their individual markets. The Deputy Director anticipates that OMOI will operate somewhere in between these two levels. The level at which OMOI reviews the markets affects both the number and skill mix of the staff that OMOI needs. For example, the head of the New York ISO's market monitoring unit told us that, by the end of 2003, his unit will have 30 staff, consisting of engineers, economists, business majors, analysts, and information technologists, to cover the New York market alone. He indicated that OMOI would need many more staff than this if it plans to review the markets at the same level of detail on a national basis. The responses to our survey and our discussions with OMOI staff indicate that opinions vary on this issue. In responding to our survey, 57 percent of OMOI's managers and staff said that top management had clearly defined what role OMOI will play in monitoring markets, while about 32 percent disagreed. (The remaining 11 percent neither agreed nor disagreed.) Additional comments provided for our survey indicate that agreement has not been reached on how OMOI should carry out that role. Survey respondents expressed concerns that OMOI was not reviewing and analyzing market data in enough depth. For example, some OMOI staff said that OMOI should be continuously reviewing market data on a real-time basis to identify market power abuses. During our interviews, OMOI managers also expressed different opinions about the issue. For example, an OMOI division director told us that the office will examine similar data, at a level similar to what the market monitoring units currently do, and that OMOI has most of the data it needs to do so. On the other hand, another division director told us that he was not certain what the office's vision for overseeing the markets will be, and that he was not sure if it has the information technology capability to perform detailed analysis of market transactions like the market monitoring units do. He also stated that OMOI would need to have staff who performed this work on a daily basis in order to become skilled at it, and that they could not gain this expertise on an ad hoc basis. OMOI's stakeholders have also expressed varying views. For example, the heads of the market monitoring units have generally suggested that OMOI leave the detailed monitoring of market transactions to them and focus on broader, national issues. On the other hand, others such as consumer groups have called for FERC to closely monitor the markets to prevent market abuse or violations by market participants.
According to FERC’s Annual Performance Report for Fiscal Year 2001 (March 2002), the agency recognizes that accountability requires strong performance measures of the following two types: output measures that specify targets for the specific work items that the agency produces—such as orders, decisions, and environmental reviews—and for when it produces them, and results-oriented or outcome measures that specify the results that the agency is working to create in the larger world. FERC has been developing output measures for many years for its strategic and annual performance plans but has established few outcome measures in the energy markets oversight area. The agency has stated that developing outcome measures is proving to be difficult but believes that it is possible. In our June 2002 report, we recommended that FERC develop such measures to assess how well it is doing in achieving its goals and objectives for overseeing competitive energy markets. Although FERC developed new performance measures for its market oversight goals and objectives for fiscal years 2003 and 2004, the new measures are generally not outcome-oriented and do not lend themselves to assessing OMOI’s effectiveness. For example, one key performance measure is to “track performance of natural gas and electric markets,” while another is to “assess performance of natural gas and electric markets.” The performance targets for these measures are to “issue market surveillance reports to the Commission twice each month” and “publish regular summer and winter seasonal market assessments, state of the market reports, and other reports as conditions warrant,” respectively. While it can be determined if OMOI issues these products, the products’ mere issuance does not indicate whether OMOI is achieving its goal of protecting customers and market participants through vigilant and fair oversight of energy markets. Although outcome measures most importantly allow the agency, the Congress, and other stakeholders to assess OMOI’s performance in carrying out its mission, establishing these measures also helps to more clearly communicate what the office is working to achieve. OMOI does not yet have formal processes and written procedures to direct its staff in their activities. Instead, it is using a series of key meetings and internal and external reports. According to OMOI managers, staff receive direction and guidance as they prepare for and participate in these meetings and help prepare these reports. The key meetings are of two types: (1) regularly scheduled meetings of OMOI managers and staff and (2) OMOI’s closed-door meetings with the FERC commissioners. The key regularly scheduled meetings have been weekly. However, according to the Director of OMOI, morning meetings to discuss plans for the day’s activities are also becoming important to the office’s operations. The predominant subject of the weekly meetings alternates from electricity markets one week to natural gas markets the next. At these meetings, which can last for several hours, OMOI managers and staff share the results of their market oversight activities and projects since the last meeting. 
Staff also use the weekly meetings to make a variety of decisions, including (1) whether the oversight staff should follow up on an issue with the appropriate market monitoring unit and/or begin collecting and analyzing their own data on the issue or (2) whether the enforcement staff should begin investigating a situation or should audit market participants' compliance with certain FERC requirements. They also identify issues to be discussed in the closed-door meetings. At the closed-door meetings, OMOI discusses national and regional issues concerning electricity and natural gas markets—such as changes in prices and the adequacy of supply and infrastructure—with the commissioners. The commissioners are also informed of any complaints received, progress on significant enforcement investigations, and any new investigations. OMOI also prepares a series of market oversight reports or products on a daily to annual basis (see table 2). These reports are intended to (1) help OMOI staff and the FERC commissioners stay abreast of market developments and activities and (2) inform market participants and others of market performance issues and OMOI's activities. OMOI managers also believe that the information needed for these reports helps inform the office's staff as to the types of analyses that they need to perform and how their work is linked. In our survey of OMOI managers and staff, we asked if the office had established effective processes to oversee natural gas and electricity markets. Just slightly over half—about 53 percent—said that the office had established effective processes to oversee the markets. About 28 percent did not believe that effective processes had been established for electricity, while 22 percent did not believe they had been established for natural gas. The remaining respondents said that they neither agreed nor disagreed that effective processes had been established or said that they had no basis to judge. In providing more detailed responses to our survey, several OMOI staff commented on the office's processes and procedures. For example, one respondent stated that processes and procedures do not exist, and that most of what is done is ad hoc. Another said that the office needs adequate planning tools to be efficient and effective, while another stated that no operational market monitoring plan has been developed for electricity or natural gas and that, to the extent any plans for the office's operations have been developed, they are at a high level and not suitable for monitoring markets. According to OMOI's Deputy Director for Market Oversight and Assessment, his divisions plan to establish a consistent process to monitor the electric, natural gas, and related financial markets, as well as strong priority-setting and management processes. FERC officials, including OMOI managers, agreed that OMOI needs to formalize its processes. However, they said that they did not want to do so too quickly because OMOI is a new office and constantly learning. The Senior Energy Policy Advisor to the Chairman of FERC told us that formalizing OMOI's processes is a matter of timing. She stated that she would not want the office to "lock down" its processes until it is sure that they are working well. To carry out its oversight activities, OMOI's major tools are its (1) Market Monitoring Center, (2) enforcement hotline, (3) investigations and operational audits, and (4) partnership with the market monitoring units.
Although these tools potentially provide OMOI with the means to oversee the energy markets, they have some significant limitations in coverage and available data. For example, the Market Monitoring Center lacks important market information, and market monitoring units do not operate in most parts of the United States. OMOI is aware of and is working to address these limitations. Opportunities also exist to use these tools more systematically to improve their effectiveness. Patterned after market operation centers of the ISOs and major energy trading companies, the center uses computers and various market reporting services and software packages to make large amounts of data on natural gas and electricity markets available in a usable format. For example, electricity market information includes prices on the spot market and for futures contracts, plant outage information, business news, and historical data for trend analysis. Natural gas market data include spot and futures prices, market commentary, storage levels, imports and exports, and supply/demand statistics. In addition, several weather services are available to monitor changing conditions nationwide, as weather and climate affect energy supply and demand in both spot and futures markets. OMOI staff use the Market Monitoring Center as a research tool in carrying out their assigned projects. During these projects, they often review the center's wholesale price and other market information, such as the data on power plant outages and transmission constraints, for anomalies. These anomalies generally include large price increases or spikes or unexpected constraints in areas of the national grid of electric transmission lines or the natural gas pipeline network. For example, OMOI monitored a natural gas price spike in February 2003 and tracked its effects on the electricity market in the New York area. When anomalies are identified, OMOI staff investigate to determine the cause by calling the applicable market monitoring unit or using data in the center or otherwise available to FERC. Depending on what this examination finds, the results are presented to the commissioners and other agency managers as an early warning of market problems, or OMOI initiates a preliminary investigation or operational audit. In some cases, OMOI staff have worked with ISO or RTO representatives to change market rules that led to the identified anomaly. OMOI may also become aware of a market anomaly or potential market problem through another source, such as a market monitoring unit, and use the center to collect additional data on it. OMOI also uses a number of market performance measures or metrics to graphically capture market trends. OMOI is working to develop additional metrics and anticipates that, with a more comprehensive set of these metrics, it will be able to inform the FERC commissioners and stakeholders such as the Congress, market participants, and the financial markets as to how well the energy markets are working and give early warning of problems. While the center's information is substantial, it is significantly limited in certain areas. For example, the center has limited up-to-the-minute information on electricity prices, fuel costs, and spot and futures contract prices. It also has limited information on the operations of the electric system.
Operations information, such as data on power plant outages and the availability of capacity on transmission lines, is important to detect and analyze changes in the markets and to identify potential anticompetitive behaviors. In addition, the center does not have access to nonfederal information needed to assess reliability of the electric power grid and monitor overall electricity market performance. This information includes data on system frequency (a measure of how well the system is balancing electricity demand and supply), power flows on key transmission lines, and transmission between parties. According to OMOI officials, market performance and electricity system reliability are mutually dependent, and such information would help them to determine whether market participants are behaving anticompetitively. The center also does not have access to a third-party source for price or quantity information on most bilateral transactions of wholesale electricity, which are the major portion of market transactions. However, FERC has revised its filing requirements for utilities to require them to electronically file quarterly reports on their electric power sales, including information on prices and quantities. FERC is continuing to expand the information available in the center. It has added four information services since our June 2002 report. For example, Genscape measures power plant operations for selected power plants. In addition, OMOI is continuing to assess its energy market information needs. During fiscal year 2002, FERC completed studies to take stock of the agency's current and future market information needs. As part of that effort, FERC formed teams to identify information that FERC currently collects and additional information that it might need to perform its duties related to restructured markets. According to OMOI officials, the office is using the information from these teams as a baseline to assess its overall market information needs. Although these data shortcomings significantly limit the Market Monitoring Center's potential use for comprehensive and real-time monitoring of the markets, some OMOI staff knowledgeable about the center's operations and use highlighted the potential to use the center more systematically. For example, a process is not currently in place to use the center to continuously monitor the markets, and written protocols have not been developed for what data are to be reviewed and what actions OMOI staff should take when certain market situations are noticed. Currently, OMOI staff use the center intermittently as they do research for their projects. According to an OMOI staff person, the center is not in use at times. FERC's primary purpose in creating the hotline was to provide a mechanism to informally receive complaints or inquiries from industry and the public so that the agency can deal with concerns more quickly and with fewer resources than would be required under FERC's formal complaint process. Since the hotline's creation in FERC's Office of General Counsel in 1987, the number of complaints and inquiries has increased substantially. For example, the hotline received 145 complaints and inquiries in fiscal year 1996, compared with 584 in fiscal year 2002. FERC's goal has been to respond to and resolve the complaints and inquiries very quickly. For example, FERC set a goal in its fiscal year 2003 performance plan to resolve 80 percent of the complaints and inquiries within 1 week of the initial contact.
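To illustrate how performance against such a timeliness target could be measured, the short sketch below computes the share of hotline contacts resolved within a week from a log of receipt and resolution dates. It is a hypothetical illustration with invented data and field names, not a description of FERC's actual tracking system.

```python
# Illustrative only: computing the share of hotline contacts resolved
# within 1 week, for comparison against a target such as 80 percent.
# Hypothetical data and field names; not FERC's actual tracking system.
from datetime import date

contacts = [
    {"received": date(2003, 3, 3), "resolved": date(2003, 3, 4)},
    {"received": date(2003, 3, 5), "resolved": date(2003, 3, 20)},
    {"received": date(2003, 3, 10), "resolved": date(2003, 3, 14)},
]

# Count contacts whose resolution came within 7 days of receipt.
resolved_within_week = sum(
    1 for c in contacts if (c["resolved"] - c["received"]).days <= 7
)
rate = resolved_within_week / len(contacts)
print(f"{rate:.0%} of contacts resolved within 1 week (target: 80%)")
```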
When a complaint or inquiry is received (by telephone, letter, or E-mail), an attorney is assigned to investigate. The attorney contacts the other party, usually the same day, and attempts to resolve the issue. If the issue cannot be resolved through this informal process, the complainant can file a formal complaint or, if OMOI finds indications of a more egregious violation of rules or regulations, it can launch an investigation into the matter. With the hotline’s transfer to OMOI in August 2002, it has become an important market oversight tool by providing market participants the opportunity to anonymously and informally make complaints to OMOI about anticompetitive actions by other parties. According to the Enforcement Hotline Director, the hotline’s underlying philosophy is that of a neighborhood watch with participants patrolling their own markets. To this end, OMOI has encouraged market participants and the general public to call, e-mail, or write the hotline to complain or report market activities that may be an abuse of market power, an abuse of an affiliate relationship, a tariff violation, or another type of violation by an entity regulated by FERC. According to OMOI, hotline calls have included complaints about bidding anomalies, price spikes, inappropriate use of certain financial instruments, fluctuations in available capacity on electric transmission lines and natural gas pipelines, discrimination in interconnection to the electric grid, and improper market transactions between a company and an affiliate. Hotline complaints have led to or contributed to decisions to initiate several enforcement investigations. OMOI officials also told us that the hotline staff has been focusing more attention on tracking market-related calls to look for trends because OMOI is trying to use the hotline as a tool for identifying market issues early. For example, the officials said that the hotline received several calls from energy marketers who said that they could be driven out of business by stricter standards for creditworthiness. According to the officials, FERC had been reviewing creditworthiness issues on a case-by-case basis but, after the calls, decided to convene a technical conference in February 2003 to begin to address these issues on a broader basis. Led by attorneys in OMOI’s Enforcement Division, investigations are designed to collect and analyze information regarding specific concerns about whether a party has violated the energy-related laws, regulations, and/or market rules administered by FERC. OMOI may initiate an investigation as a result of an action such as a hotline complaint, a formal complaint, a referral from a market monitoring unit or another office within FERC, the findings of an audit, or routine market monitoring. In addition, the enforcement staff may begin an investigation based on its scanning or tracking of industry or market events through news or other accounts. OMOI’s Division of Operational Investigations is responsible for conducting audits to review compliance with FERC’s regulations such as those governing companies’ transactions or dealings with their affiliates to prevent discriminatory practices and reporting of market information. OMOI initiates operational audits for a variety of reasons, including providing input to policy deliberations, regulations development, and enforcement cases. FERC’s investigations and operational audits relating to energy markets have increased almost steadily each month since this responsibility was moved to OMOI. 
For example, on June 1, 2002—about a month before the enforcement staff was transferred to OMOI from FERC's Office of the General Counsel—FERC was conducting 37 investigations and operational audits related to the electricity and natural gas industries and other areas such as hydroelectric projects. This number was 68 as of May 31, 2003. During this period, OMOI opened a total of 79 investigations and operational audits and closed 48. Of the investigations that OMOI closed, several resulted in entities paying refunds, civil penalties, or the costs of the investigations, as well as preparing compliance plans and taking other remediation actions. One highly visible example is the recently settled case against the Transcontinental Gas Pipe Line Corporation (Transco) for anticompetitive practices, which resulted in a civil penalty of $20 million—the largest civil penalty in FERC's history. However, in other cases, no further action was taken—beyond working with the parties under investigation to bring them into compliance with the rules and regulations—because there is no civil penalty authority associated with the activities. The civil penalty imposed on Transco stemmed from the company's violation of rules in one of the few areas in which FERC had the authority to impose such penalties. While FERC can order refunds of excessive rates, it generally lacks authority to impose appropriate penalties. No section of the Federal Power Act allows FERC to levy monetary penalties against market participants who charge unjust or unreasonable rates for electricity. Although the Natural Gas Policy Act of 1978 gave FERC some authority to levy civil penalties, this authority applies to a limited number of natural gas transactions in interstate commerce. Given this situation, legislation was recently introduced in the Congress that would give FERC additional penalty authority. On April 11, 2003, the House passed H.R. 6, which would expand FERC's penalty authority under the Federal Power Act and increase the maximum civil penalty for certain violations from $10,000 to $1 million per violation per day. The bill is currently awaiting action by the Senate, which is considering similar legislation. While investigations are almost always opened in response to specific complaints or concerns about a potential violation, operational audits provide the opportunity to review compliance with regulations and market rules on a broader basis. However, OMOI has limited resources devoted to these audits. At the end of June 2003, the Division of Operational Investigations was conducting audits of 16 entities under FERC's jurisdiction—11 in the Pacific Northwest, 1 in the Midwest, 2 in the mid-Atlantic, and 2 in the Southwest—with respect to certain aspects of FERC's regulations. According to the Director of the Division of Operational Investigations, most of the work by the division's staff is in supporting ongoing enforcement investigations rather than performing audits. He said that his staff provides technical support to these investigations by reviewing regulations and accounting and trading issues. The Director said that he would like to develop, but has not yet developed, a strategy for systematically auditing compliance with FERC's regulations and market rules on a cyclical basis. Absent this type of more comprehensive review, OMOI has to rely on the limited coverage provided by hotline calls and enforcement investigations.
For example, hotline calls depend on individuals knowing about violations and being willing to report them to FERC. Recognizing that market monitoring units play a significant role in overseeing wholesale electric power markets, OMOI is devoting considerable attention to improving its working relationship with these units. However, FERC has not yet put in place a process to periodically assess the monitoring units’ effectiveness so that it will have assurances that they are effectively carrying out their responsibilities. Moreover, the partnership’s effectiveness in overseeing the nation’s electricity markets is limited because most of the United States is not covered by these units. FERC has described the role of the market monitoring units in various terms, including as the “first line of defense” against market problems, its “eyes and ears,” its “soldiers on the front line,” and as “practically an extension of, or a surrogate for, the Commission’s own market monitoring and investigative staff.” The significance of the monitoring units’ role is illustrated by OMOI’s response to our request for information on how the office’s market oversight approach will identify certain trading schemes, such as those used by the Enron Corporation in the California electricity market and other manipulations of the energy markets. In their response, OMOI officials said the monitoring units are responsible for detecting most of the schemes and manipulations in their respective electricity markets. (See app. III for OMOI’s response.) OMOI is taking a number of steps to improve communication and to better ensure that its staff and the market monitoring units work well together. For example, the office has assigned specific staff members as contact points for the ISOs/RTOs and their market monitoring units. Because many of the issues arising and enforcement cases being initiated concern California, OMOI has located two of its staff with the California ISO’s market monitoring unit. OMOI has also formalized the frequency and nature of communication between itself and the units, for example, by establishing a series of routine meetings and drafting guidelines on how the units will communicate certain market events to OMOI. Furthermore, OMOI is working with the market monitoring units to develop a joint OMOI-market monitoring unit mission statement and has taken steps to standardize the way the market monitoring units will report on their markets. For example, OMOI is working with the monitoring units to develop a set of standardized measures or metrics by January 2004. With standard metrics, FERC can compare and contrast the individual regional markets and better report on how markets are performing nationwide. According to the heads of the four market monitoring units, their communication with FERC has improved since the creation of OMOI. The head of one unit told us that the frequency and the detail of their discussions with OMOI were notable improvements, while another said that the improvement had been significant. The third market monitor told us that communication has improved considerably and the more frequent communication with OMOI has improved OMOI staff’s knowledge of their markets. According to the remaining market monitor, his ability to communicate with FERC was very poor before OMOI was formed, but since then the frequency and content of this communication has improved. 
He added that he hopes that OMOI’s enforcement staff is letting him know when it is conducting investigations relating to the markets that he monitors, but he does not know whether or not they are. In our June 2002 report, we recommended that FERC update its strategic plan to set out clear expectations for how the ISOs/RTOs will monitor energy markets and how FERC will evaluate their monitoring units’ effectiveness. While a key strategy in FERC’s current strategic plan is to integrate FERC’s market oversight activities with the work of the monitoring units, the plan does not yet set out clear expectations for these units or how FERC will ensure that they are effectively carrying out their market oversight role. OMOI’s Director of Management and Communication told us that performance expectations for the market monitoring units make sense, and that the office expects to begin the process to incorporate expectations into FERC’s fiscal years 2003-2008 strategic plan that is scheduled to be issued in September 2003. The heads of the market monitoring units agreed that FERC needs assurances that their units are carrying out their monitoring functions effectively and suggested ways that the agency could obtain these assurances. For example, the head of the New York ISO’s market monitoring unit said that FERC should monitor market outcomes, maintain close contact with the individual units, and operate a hotline that market participants can use to register concerns about the units. Similarly, the head of the New England ISO’s monitoring unit said that OMOI should develop additional expertise for each market and, more importantly, should synthesize comments from stakeholders in each market regarding the units’ performance. While the market monitoring units are highly important to OMOI’s efforts to oversee electricity markets, the units’ coverage of the nation’s electricity markets is limited. The monitoring units of the PJM Interconnection RTO, ISO New England, and New York ISO cover the Northeastern markets, while the California ISO’s monitoring unit covers the markets in that state. The Electric Reliability Council of Texas, over which FERC has only limited jurisdiction because the market is essentially intrastate, also has a market monitoring unit that essentially covers Texas. FERC has also approved a market monitor for the Midwest ISO, which plans to operate a centralized power market by December 2003. In addition, FERC has efforts under way to expand the number and/or the market coverage of RTOs. At present, according to FERC, five other RTOs have been conditionally approved. However, it could be several years before these organizations are operating and have market monitoring units in place. According to the Director of OMOI, one of the office’s major challenges is how to monitor the markets in places where there is no market monitoring unit. He added that the office has limited access to market data without the existence of the formal markets provided by an ISO or RTO to generate the data and a market monitoring unit to make it available to them. OMOI officials told us that they are using calls to the enforcement hotline and audits as a way to provide some oversight in these areas. These efforts, however, do not replicate the extensive and detailed monitoring performed by the market monitoring units. OMOI has almost completed its staffing to authorized levels, including the hiring of a substantial number of staff from outside FERC with energy market experience. 
The office also has trained staff to increase their knowledge about competitive energy markets, and has contracted to acquire additional market expertise. However, several key management positions have not been filled. In addition, OMOI staff raised several issues, including the adequacy of the office's staffing levels, skills mix, the need for additional training, and morale. As of June 17, 2003, OMOI had a staff of 98 employees—12 fewer than the 110 positions budgeted for fiscal year 2003. OMOI has staffed the office with a mix of reassigned FERC employees and outside hires (see fig. 2). OMOI's director and two of three office directors are outside hires. During its first 4 months, the office principally consisted of its top leadership and employees reassigned from other FERC offices, such as the Office of Markets, Tariffs, and Rates and the Office of General Counsel's Market Oversight and Enforcement Division, that had some experience related to energy market oversight and investigation. These internal transfers continued until they reached a total of 61 employees in September 2002. Senior OMOI officials told us that transferring to OMOI was voluntary, but not everyone was selected. According to OMOI officials, approximately 180 FERC employees applied to transfer to OMOI. As of June 17, 2003, OMOI had hired 45 employees from outside FERC—5 fewer than the 50 positions budgeted for fiscal year 2003. To recruit qualified individuals with industry experience, OMOI offered a number of recruitment bonuses. Because of the specialized skills required to monitor energy markets, OMOI has generally offered larger recruitment bonuses than other FERC offices—an average of $12,852 given to 10 individuals. According to OMOI officials, they are still receiving resumes from interested applicants and plan to continue their recruiting and hiring efforts over the next few months. OMOI officials told us that, in hiring potential applicants, they looked at the applicants' experience in energy markets. In its fiscal year 2003 budget request, one of FERC's performance targets for OMOI was the "hiring of staff with market expertise." In the fiscal year 2004 budget request, FERC is revising the performance target to state that "30 percent of OMOI staff have energy market experience gained through direct activity in those markets." According to OMOI officials, 29 of the office's current employees (or 29.6 percent) have energy market experience. More specifically, OMOI officials told us that 19 (or 19.4 percent) of the employees have worked for energy-related companies and have direct experience with some aspect of energy markets. An additional 10 employees (or 10.3 percent) have worked as consultants, legal counsel, or in other positions that demonstrated detailed understanding of market activities without active participation, according to the officials. OMOI officials said that they review the resumes of both internal transfers and outside hires to determine market experience. One reason that OMOI has not yet reached its authorized staffing level is that 13 of its employees have left to go to another FERC office, another federal agency, or private industry. The majority of those leaving—10 of 13—had originally transferred in from other FERC offices. According to OMOI officials, most of these employees moved to other FERC offices to take more senior positions. They said that because OMOI had to bring in a number of outside hires at a high grade (at the GS-15 level), opportunities for promotion within OMOI are very limited.
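Because the fiscal year 2004 performance target is expressed as a simple share of staff, the figures above can be checked directly. The following minimal Python sketch is our own illustration, not a FERC calculation; the variable names and the reading of the target as 30 percent of all OMOI staff are our assumptions.

```python
# Rough consistency check of the OMOI staffing figures cited in this report.
# Assumptions: staff counts as of June 17, 2003; the fiscal year 2004
# performance target is interpreted as 30 percent of all OMOI staff.

total_staff = 98          # OMOI employees as of June 17, 2003
direct_experience = 19    # worked for energy-related companies
related_experience = 10   # consultants, legal counsel, or similar roles
target_share = 0.30       # proposed fiscal year 2004 target

with_experience = direct_experience + related_experience
share = with_experience / total_staff

print(f"Staff with market experience: {with_experience} of {total_staff} ({share:.1%})")
print(f"Meets 30 percent target: {share >= target_share}")
# Prints 29 of 98 (29.6%); at these levels the target is narrowly missed.
```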
Of the three outside hires who have left, two were interns with limited appointments. In addition to hiring staff with needed skills, OMOI has offered a variety of internal and external training programs. For example, OMOI has instituted technical sessions on a biweekly basis during which OMOI staff informally share information and expertise with other staff; invited industry experts for presentations on market issues; invited representatives from market monitoring units to provide the versions of the training classes they offer their own staffs; visited market monitoring units at various RTOs to interact with them and learn about their markets and functions; and interacted with vital market participants, such as credit rating agencies and generation and transmission operators, to enhance staff's overall knowledge of the gas and electricity markets. In addition, OMOI has identified leadership and managerial training as a critical need and plans to develop targeted training in these areas. OMOI has also used contractors to obtain skills not available internally. For example, OMOI has hired, on a consulting basis, an energy trader formerly with the Enron Corporation. OMOI also has used contractors to assist it in a variety of other ways, including developing measurement metrics for the market monitoring units and studying power plant outages. In addition, OMOI used contractors to help develop its first seasonal market report. Furthermore, the office has contracted with knowledgeable vendors to provide employees with information on key aspects of energy markets. Since its inception in fiscal year 2002, OMOI has spent a total of about $501,000 on contract services. The office is requesting an increase in funding for these services from $500,000 for fiscal year 2003 to $1 million for fiscal year 2004. Although OMOI has made progress in hiring staff, it continues to face challenges in filling some of its leadership and technical positions. As of June 17, 2003, OMOI had not permanently staffed three of its seven division director positions. According to a senior OMOI official, finding qualified applicants for the division director positions has been particularly difficult because not many applicants have both technical skills and leadership experience coupled with a public-service mentality. For example, OMOI has advertised a "Division Director" position at the senior executive service level for the Division of Energy Market Oversight on several occasions, but FERC hiring officials did not find any applicants suitable to meet OMOI's needs. The position has been relisted. OMOI's difficulty in filling its leadership positions is particularly important because sustained, committed leadership is indispensable to successful organizational transformations. By its very nature, the transformation process entails fundamental change. Consistent leadership helps the process stay the course and helps ensure changes are thoroughly implemented and sustained over time. Senior OMOI officials also told us that OMOI continues to face challenges hiring people with certain skills, such as engineers with market experience, people with technical skills for performing sophisticated analysis, and people with forensic auditing experience. In addition, in responding to our survey, OMOI employees indicated a relatively high level of concern about the office's staffing levels and skill mix. About 49 percent of OMOI employees did not believe that the office's staffing levels were satisfactory.
In comparison, 22 percent thought that the levels were satisfactory. The remaining 30 percent neither agreed nor disagreed that the staffing levels were satisfactory or stated that they had no basis to judge. In addition, while about 45 percent of the employees believed that the office’s skill mix was adequate, 35 percent did not. The remaining 21 percent neither agreed nor disagreed or had no basis to judge. In providing written comments for our survey, the employees often commented on staffing levels and skill mix. For example, one respondent wrote “the resources in the office are clearly inadequate to perform comprehensive oversight of industries as large as wholesale electricity and natural gas.” Another wrote “staffing levels are not sufficient to regularly, systematically evaluate all energy markets in the United States to look for aberrant behavior.” The written comments on skill mix varied. For example, some staff wrote that OMOI had too many employees with natural gas experience and too few with electricity experience. Others commented that the office had too few engineers, especially electrical engineers, or needed more investigations staff, economists, attorneys, or technical staff. To some extent, these concerns about staffing levels and skill mix likely reflect the staff’s individual views about what this new office should do to oversee the markets, particularly the level of detail at which it should review market transactions. It is difficult to judge the validity of the staff’s concerns until OMOI has clearly defined its role. Of course, the FERC commissioners and the Congress would be involved in defining the role and committing the resources to carry it out. Many of OMOI’s employees also indicated that they would benefit from additional training. For example, in responding to our survey, over 70 percent of OMOI employees expressed a need for more training in areas such as market functions, market structures, and the interaction of financial markets and energy markets. In addition, more than half of the staff indicated that additional training in economic theory and models would be useful. (See table 3.) Furthermore, our survey of OMOI staff uncovered a potential issue regarding the office’s morale. In responding to the survey, about 49 percent of OMOI managers and staff characterized the office’s morale as generally high or very high, compared to about 31 percent that said morale was generally low or very low. (The remaining 20 percent characterized morale as neither high nor low or said they had no basis to judge.) However, there was a disparity of opinion between new hires and internal transfers. Among staff that had been with FERC for more than a year or before OMOI was established (internal transfers), about 40 percent said the office’s morale was low, slightly higher than the 38 percent that said it was high. In comparison, 14 percent of staff that had been with FERC for less than 1 year (new hires) said the office’s morale was low, while 68 percent said it was high. Additionally, several internal transfers expressed concern that OMOI’s top managers do not value them as highly as those hired from outside FERC. In their written comments to our survey, several staff expressed their sense that a “double standard” exists in that (1) the work of the “new FERC” employees (outside hires) is valued by top managers more than that of the “old FERC” employees (transfers) and (2) the new FERC employees receive higher pay and more than their share of the bonuses and other rewards. 
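These subgroup figures can be roughly reconciled with the office-wide morale numbers. The short Python sketch below is our own back-of-the-envelope check, not a GAO tabulation; it assumes that the approximately 64 percent/35 percent split between internal transfers and newer staff reported for question 18 of the survey in appendix II applies to the morale question as well.

```python
# Rough cross-check: do the subgroup morale figures roughly reproduce the
# office-wide numbers? Weights are taken from survey question 18 (about
# 64 percent internal transfers, 35 percent newer hires); treating them as
# the exact subgroup shares for the morale question is our simplifying assumption.

transfers_share, new_hires_share = 0.64, 0.35

high_morale = 0.38 * transfers_share + 0.68 * new_hires_share
low_morale = 0.40 * transfers_share + 0.14 * new_hires_share

print(f"Implied office-wide 'high' morale: {high_morale:.1%}")  # prints about 48.1%
print(f"Implied office-wide 'low' morale:  {low_morale:.1%}")   # prints about 30.5%
# Both are close to the reported 49 percent and 31 percent, so the subgroup
# and office-wide figures are broadly consistent.
```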
Several employees also commented that there is not much promotion potential for internal transfers. Furthermore, several employees who left OMOI to go to other parts of FERC told us that they had left for a promotion, but the sense that they were not valued was also a factor in leaving. Since we issued our June 2002 report, FERC has developed a human capital plan that lays the foundation for the agency to strategically manage its workforce. As our previous work has found, strategic human capital planning must be the centerpiece of any serious change management initiative, yet a key challenge for many federal agencies is to strategically manage their human capital. Given that many federal agencies have not yet begun any comprehensive human capital planning, FERC's human capital plan is commendable and a promising first step. Nonetheless, the plan is in the formative stages and lacks key elements. The plan does not yet fully (1) identify specific activities, resources, and time frames needed to implement the agency's human capital initiatives and (2) provide results-oriented or outcome measures to track the agency's progress in implementing the plan's initiatives and evaluate their effectiveness. By including these key elements in its human capital plan, FERC could better ensure that its workforce is able to effectively oversee and monitor energy markets. In addition to human capital planning, the agency is taking other steps to help transform its workforce, including assessing additional human capital flexibilities that could improve recruitment and retention efforts. In February 2003, the Chairman of FERC approved the agency's first human capital management plan, a step forward in fostering a more strategic approach to human capital management. In our June 2002 report, we pointed out that FERC was one of many federal agencies that had not given adequate attention to human capital management. Specifically, we found that FERC had not conducted systematic strategic human capital planning to guide its efforts to recruit, develop, train, and retain the type of workforce that can effectively oversee competitive energy markets. Properly done, human capital planning provides managers with a strategic basis for making human resources decisions and allows agencies to systematically address issues driving workforce change, such as those affecting OMOI. One tool that agencies can use to improve their human capital management is a human capital plan that systematically identifies the workforce needed for the future and identifies strategies for shaping this workforce. Accordingly, we recommended that FERC develop a comprehensive strategic human capital management plan to include the following: a skills assessment program that would identify gaps in skills currently held by the workforce that are necessary to carry out the agency's evolving regulatory and oversight responsibilities; a recruitment and retention initiative, based on priorities for meeting future regulatory and oversight staffing needs, which addresses filling skill gaps in the current workforce; a training effort targeted at increasing staff knowledge in the areas of market functions and market structures so that FERC staff will be better prepared to regulate and oversee competitive energy markets; and a comprehensive succession plan for solving challenges posed by the large number of impending retirements within the agency, including reliable projections of the number of eligible staff who may actually retire.
The plan, which covers a period of 2 to 5 years, is divided into two major sections. The first section addresses issues facing the agency as a whole. For example, the plan's first section describes FERC's current human capital situation, including data on overall workforce demographics such as size and composition of the workforce, employee pay grade distribution, attrition rates, projected retirement eligibility, and retirement rates. The plan then uses these data to frame five broad workforce challenges and identifies five human resource goals and 19 objectives to achieve these goals. (See table 4.) The plan's second section provides information specific to each major FERC office. This section identifies each office's specific human capital challenges based on its particular workforce demographics and current and future work requirements and includes a short-term hiring plan and longer-term human capital initiatives. The plan, to varying degrees, discusses the four major components that we previously recommended be included—skills assessment, recruitment and retention, training, and succession planning. Regarding skills assessment, the second section of the plan identifies for each office the current and future skills it needs to achieve FERC's strategic goals. Where gaps exist between current and future skill needs, the offices have developed human capital initiatives to close them. According to a senior FERC human resource official, the plan will improve as this skills assessment process improves. The official said that the FERC offices are still learning how to determine their skill needs and, as a result, when someone retires or otherwise leaves, FERC managers tend to seek a replacement with the same skills, rather than thinking about future skill needs. Concerning recruitment and retention, the plan establishes as a human resource goal attracting and retaining talented, diverse employees capable of maintaining excellence. To accomplish this goal, FERC identifies various objectives, including institutionalizing an agencywide workforce planning process and implementing recruiting and retention strategies based on the results of this workforce planning process and the offices' hiring plans. Another of the objectives is to develop a demonstration project to increase hiring and retention success and improve accountability for hiring and retention decisions. FERC's plan also identifies a number of initiatives to improve recruitment and retention. For example, FERC plans to implement an exit interview process to track and document why employees leave. According to the plan, the information gained from exit interviews will be used to support or modify agency personnel practices in order to improve employee retention. With respect to training, the plan establishes as a goal providing development opportunities to expand individual and organizational capabilities. An objective under this goal is to upgrade the effectiveness of the central and individual office training programs. The plan recognizes that FERC needs to implement a revamped energy markets curriculum to ensure that staff, such as those in OMOI, have current market-oriented skills and expertise. One of the next steps in the plan is that the human resources staff will coordinate the offices' efforts to design and offer training for managers and to develop a markets-oriented curriculum to build organizational and staff capabilities.
Human resources officials told us that FERC is already using an agencywide team to develop such a curriculum. According to senior human resources officials, the curriculum will likely be offered to all FERC offices to develop a common foundation across the agency. Although OMOI is the FERC office primarily responsible for monitoring competitive energy markets, FERC officials indicated that a number of offices in addition to OMOI are seeking markets training to do their jobs better. As FERC develops this new curriculum, the current central program has been temporarily suspended, and each office is responsible for providing informal training to its own staff. For example, OMOI has offered a variety of training to increase staff knowledge on competitive energy markets. According to a senior FERC official, the new agencywide training program should be implemented by the beginning of fiscal year 2004. FERC’s plan identifies succession planning as a challenge and points out that over half of FERC’s workforce will be eligible to retire by 2007. It also sets out the establishment of a leadership succession planning program as an objective under its building leadership goal. To address this challenge, many of FERC’s offices intend to develop their own succession planning strategies. For example, the section of the plan for the Office of Markets, Tariffs, and Rates states that because of its “graying” leadership ranks, the office must develop a succession plan for its key leadership positions. The section also states that because of its overall graying workforce, the office must develop a larger entry-level/career ladder pipeline to maintain adequate numbers of employees, both in total numbers and at the top level of career-ladder positions. The section of the plan for OMOI also addresses succession planning. In the plan, OMOI states that it will develop a succession plan to address the loss of leadership and skills due to retirements and the return of employees to the private sector. However, the human capital plan does not provide any additional information on how these succession plans will be developed, what resources are needed, how they will be implemented, and when they will be completed. Leading organizations use their succession planning initiatives not only to identify individual replacements for current leaders but also as a strategic tool to build current and future organizational capacity by identifying and developing the right people, with the right skills, at the right time for leadership, managerial, and other critical positions. FERC’s plan also addresses other related human capital issues. For example, it notes that the current performance management system may not be adequate to sustain and build the workforce needed for the future. As FERC takes steps to transform its workforce, performance management will be a critical element. Our previous work has found that instituting a results-oriented culture and creating a modern, credible, and effective performance management system can be strategic tools to drive change and achieve desired organizational results. Under the plan’s goal of fostering a performance culture that rewards achievement are four broad performance management objectives. For example, the plan indicates that the agency will strengthen connections among accomplishments, awards, and performance feedback but does not yet provide details on how this will be done, what resources are needed, and when it will be completed. 
According to senior FERC officials, the agency's current performance system does not meaningfully differentiate between high and low performers, and performance is not directly linked with annual pay increases. Instead of awarding pay increases based on annual performance appraisals, FERC rewards performance through the use of bonuses throughout the year. As a result, employees are rewarded for specific events rather than their overall contribution to agency results. According to FERC officials, this system was put in place to avoid the problem of too many outstanding ratings. Given that FERC is one of only a small number of agencies that have begun efforts to address their human capital challenges by developing human capital plans, FERC's efforts are commendable. However, work remains to be done to ensure FERC's plan is successful. Senior FERC human resources officials variously described the plan as having a "ways to go" or as a "baby step." As we previously discussed, the plan, at this point, provides limited information on how the agency's goals and objectives will be achieved. While the plan includes strategies, it generally does not yet identify specific activities, resources, and time frames. This type of information helps provide more clarity of direction and organizational commitment as the plan is being implemented. The plan also does not provide results-oriented performance measures to help FERC gauge its progress in achieving the plan's goals and objectives. Our previous work has shown that high-performing organizations recognize the fundamental importance of developing and using indicators to measure both the outcomes of human capital strategies and how these outcomes have helped the organizations accomplish their missions and programmatic goals. For example, a human capital plan can include measures that indicate whether the agency executed its human capital initiatives—such as hiring, retention, training, or performance management strategies—as intended, whether it achieved the goals for these strategies, and how these initiatives helped improve programmatic results. Although FERC intends to review and update the plan on a quarterly basis and revise it annually, it may be difficult to review the plan's progress in a meaningful way without this type of specificity. In our June 2002 report, we noted that although FERC had taken steps to acquire and develop the staff knowledge and skills it needed to effectively regulate and oversee energy markets, it had not fully explored all the human capital flexibilities that are available to federal agencies for responding to workforce challenges. All federal agencies, including FERC, have personnel flexibilities and tools available to them to help overcome workforce recruitment and retention issues. Many of these flexibilities and tools can be initiated by federal agencies on their own, while others require approval from the Office of Personnel Management, the Office of Management and Budget, or the Congress. In our prior report, we found that FERC was using a number of available flexibilities such as recruitment bonuses, retention allowances, tuition reimbursement, and alternative work schedules but had not requested other flexibilities that could help improve recruitment and retention.
Accordingly, we recommended that FERC (1) identify the personnel tools, flexibilities, and strategies, other than those already in use by FERC, available to federal agencies to recruit and retain employees; (2) conduct an internal assessment of the effectiveness and applicability of these to FERC; and (3) develop an action plan to use the appropriate tools, flexibilities, and strategies to recruit and hire needed expertise. Since our prior report, FERC has expanded its use of some existing human capital flexibilities to improve its ability to recruit and retain employees. One example is FERC's student loan repayment program. As one of the first federal agencies to employ this flexibility, FERC has used a total of $331,499 to help 41 employees repay their student loans. Participants in the program commit to staying at FERC for a minimum of 3 years. According to FERC officials, this program has been particularly successful in retaining attorneys, who often have high student loan debt. In addition, FERC has expanded its use of recruitment and retention bonuses. For example, FERC offered 10 retention bonuses in 2002 compared with 2 in 2001. FERC has also given 75 recruitment bonuses of around $3,000 to $4,000 each to attract qualified employees. As noted earlier, OMOI has typically offered larger recruitment bonuses than other FERC offices because of the specialized skills required to effectively monitor competitive energy markets. In addition, according to FERC's human resources manager, the agency's senior human resources officials have identified additional human capital flexibilities that could prove useful in attracting and retaining quality employees and have assessed their applicability to FERC. However, the Chairman of FERC has not yet decided which, if any, of these additional flexibilities FERC will seek approval for from the Office of Personnel Management, the Office of Management and Budget, or the Congress. As part of their assessment, FERC officials examined the flexibilities in use at agencies including the Internal Revenue Service, the Federal Aviation Administration, the Securities and Exchange Commission, and the Transportation Security Administration to identify lessons learned and strategies for acquiring additional flexibilities. Senior FERC human resources officials said that they may look to acquire many of the same flexibilities currently available to the Securities and Exchange Commission, a similar regulatory agency. However, these officials also noted that FERC may have more difficulty obtaining approval for the additional flexibilities because of its relatively low attrition rate, an average of 7 percent since 1995. In contrast, the Securities and Exchange Commission, which has a turnover rate of around 30 percent, uses compensation-based programs, such as special pay rates, more actively than other government agencies. OMOI has made a credible start toward establishing an oversight and enforcement capability for competitive energy markets. Significantly, the office recognizes that additional efforts are needed and has under way or is planning a number of initiatives, including expanding its activities, further identifying its information needs, and improving its working relationships with the market monitoring units.
While these initiatives are important to OMOI's success, the activities that the office needs to engage in, the information and other resources it needs to carry out these activities, and the working relationships it needs to establish with others depend on the role that it has defined for itself to achieve its mission. At this point, OMOI's role lacks clarity in several respects. For example, OMOI has not explicitly and directly related its role and activities to the agency's responsibility for ensuring just and reasonable prices, nor has it decided at what level of detail it will review the markets. In addition, OMOI has not clearly defined market power, although market power is a major oversight concern and an issue that OMOI must ensure is adequately addressed. Moreover, OMOI has not explicitly defined how it will work with others inside and outside of FERC that either share energy market oversight responsibilities or have related responsibilities. OMOI is a new office with unique and broad responsibilities for overseeing the nation's energy markets. As such, its first months have been a learning experience as it hired its staff and began to carry out its activities. Thus, we do not disagree with the office's decision to begin its work with few formal processes and written procedures as it, in effect, was developing and testing them. However, after almost a year, OMOI has added more staff, and its oversight activities are becoming more complex. Establishing formal processes and developing written procedures are important to help ensure that the office's activities are systematic, understood, and implemented effectively. Formal processes and written procedures also help provide assurances to OMOI's stakeholders that the office has fully thought through its approach to, and is systematically monitoring, today's energy markets. Although FERC's recently completed human capital plan begins to lay the foundation for the agency to strategically manage its human capital, it does not yet contain key elements that could increase the likelihood that the plan will be effective. It generally does not identify specific activities, resources, and milestones to implement the human capital objectives. It also does not contain results-oriented performance measures that can help FERC measure progress toward achieving these objectives. Setting out specific activities, resources, and milestones provides a clearer road map for achieving the plan's objectives and more clearly defines the organizational commitment needed for the plan's implementation. Moreover, without results-oriented measures, FERC will be unable to determine whether its initiatives are leading to better outcomes and achieving the desired effects, such as whether its workforce is better able to meet the challenges posed by competitive energy markets. To help ensure that FERC's oversight of competitive energy markets is comprehensive and resources are effectively directed, we recommend that the Chairman of FERC more clearly define OMOI's role in overseeing the nation's energy markets by taking the following actions: Explicitly describe OMOI's activities relative to carrying out the agency's statutory requirements to ensure just and reasonable prices and to prevent market manipulation. Explicitly establish the level of detail at which OMOI will routinely review market transactions to carry out its oversight activities.
Delineate how other FERC offices and other organizations, including the market monitoring units and other federal agencies, share in and contribute to OMOI’s mission and establish expectations for how they will work together. To help ensure that OMOI carries out its role systematically and effectively, we recommend that the Chairman of FERC direct OMOI to establish formal processes and written procedures for its key activities. To strengthen FERC’s human capital plan, we recommend that the Chairman of FERC revise the agency’s plan to (1) identify specific activities, resources, and time frames to implement the human capital initiatives and (2) provide results-oriented measures to track the agency’s progress in implementing the initiatives and evaluate their effectiveness. We provided FERC with a draft of this report for review and comment. In his written comments, the Chairman of FERC generally agreed with the report’s conclusions and recommendations. Specifically, the Chairman stated that the report offers valuable advice for additional improvement and accomplishment and that, in general, he agrees with the report on the steps that are needed next to more clearly define the role of market monitoring and expand the agency’s human capital initiative. The Chairman further stated that he agrees that it is now time to formalize and document many of OMOI’s processes. The complete text of FERC’s comments on our draft report is presented in appendix IV. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to other appropriate congressional committees; the Chairman, FERC; the Director, Office of Management and Budget; and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix V. To determine FERC’s progress in establishing an oversight and enforcement capability for competitive energy markets, we focused our review on the formation and operation of OMOI. We reviewed pertinent FERC documents, including annual reports, budget requests, strategic and annual performance plans, reports, speeches, and congressional testimony by the FERC Chairman, commissioners, and other officials relating to energy market oversight. We also reviewed OMOI documents, including OMOI divisions’ strategic plans, market oversight reports, enforcement reports, and information related to OMOI’s staffing levels and budget. In addition, we interviewed OMOI managers at the division head level and above, including the director and deputy directors of the office. We also obtained the views of the heads of the four market monitoring units that were operating at the time of our review on OMOI’s progress in establishing a market oversight and enforcement capability at the national level. Furthermore, we drew on our prior work in the areas of electricity and natural gas markets. In addition to our document review and interviews, we conducted a survey of OMOI staff, up to and including those at the director and deputy director level. The survey was conducted using a self-administered electronic questionnaire posted on the World Wide Web. We sent E-mail notifications to 92 OMOI staff beginning on March 24, 2003. 
We then sent each employee who was surveyed a unique password by e-mail to ensure that only members of the target population could participate in our survey. We closed the survey on April 11, 2003, having received a total of 80 responses, for an overall response rate of 87 percent. A copy of this survey with the quantitative results can be found in appendix II. While our survey results are generalizable to the current OMOI population as described above, the practical difficulties of conducting surveys may introduce errors into the results. Although we administered our survey to all known members of the population of OMOI employees, and thus our results are not subject to sampling error, nonresponse to the entire survey or individual questions can introduce a similar type of variability or bias into our results—to the extent that those not responding differ from those who do respond in how they would have answered our survey questions. We took steps in the design, data collection, and analysis phases of our survey to minimize population coverage, measurement, and data-processing errors, such as checking our population list against known totals of employees, pretesting and expert review of questionnaire questions, and follow-up with those not immediately responding. To determine FERC's progress in improving agencywide human capital management, we reviewed pertinent FERC documents, including the agency's human capital plan and information related to the agency's training, human capital flexibilities, and performance management programs. In addition, we interviewed senior human resources officials at FERC, including FERC's Executive Director. We conducted our work between October 2002 and June 2003 in accordance with generally accepted government auditing standards. This appendix contains the questions and responses from our survey of Federal Energy Regulatory Commission (FERC) employees in the Office of Market Oversight and Investigations. Responses are expressed as a percentage of those responding to the survey. The U.S. General Accounting Office (GAO), an independent agency of Congress, is conducting a follow-up review of management issues at the Federal Energy Regulatory Commission (FERC). As part of our study, we are soliciting the views of the FERC staff in the Office of Market Oversight and Investigations to obtain their opinions about a variety of topics relating to the work of the FERC. Most of the questions in this survey can be answered by checking boxes or filling in blanks. Space has been provided at the end of the survey for any additional comments. The survey should take about 20 minutes to complete. GAO will take steps to prevent the disclosure of individually identified data from this survey. Only GAO staff assigned to this study can access and view your responses. No one at the FERC will see your individual responses. The username and password associated with the survey are included only to allow you to access the survey and enter your responses, and to aid us in our follow-up efforts. Survey results will be reported in summary form. If individual answers are discussed in our report, no information will be included that could be used to identify individual respondents. If you have any questions or are experiencing difficulties responding to the questionnaire, please contact Adam Hoffman at (202) 512-6667 or hoffmana@gao.gov or Jason Holliday at (202) 512-4582 or hollidayj@gao.gov. Your participation is very important and we urge you to complete this survey.
We cannot provide meaningful information to the Congress on these issues without your frank and honest answers. Thank you for your time and assistance. Please refer to the following definitions when completing this survey:
Office - Refers to the Office of Market Oversight and Investigations (OMOI)
Division - Refers to a division within OMOI such as the Division of Energy Market Oversight, Division of Management and Communication, etc.

The objective of this section is to obtain general information about your current position with FERC.

1. How long have you been employed by FERC, including its predecessor, the Federal Power Commission? (Check one.)
19% Less than 6 months
20% More than 20 years

2. Which of the following generally describes your current area of work? (Check one.)
13% Economist (Industry, Financial, etc.)
Engineer (Electrical, Mechanical, Petroleum, etc.)
38% Energy Industry Analyst
11% Other Analyst (Financial, Budget, Operations Research, Program Management, etc.)
If you checked "None of the above", please enter your current area of work in the space provided.

The objective of this section is to obtain information about OMOI's effectiveness in meeting its mission goals and objectives.

3. In general, how clear or unclear to you are each of the following? (Check one in each row.)
a. FERC's overall mission/goals and objectives
b. OMOI's goals and objectives
c. Your division's goals and objectives
d. Your current duties and responsibilities

4. In general, with regard to oversight and enforcement of wholesale electricity markets, overall, how effective or ineffective is OMOI in doing the following: (Check one in each row.)
a. Monitoring wholesale electricity markets to determine whether prices are just and reasonable
b. Analyzing spikes in wholesale electricity prices
c. Responding appropriately to the causes of wholesale electricity price spikes
d. Detecting market power abuses in wholesale electricity markets
e. Correcting detected market power abuses in electricity market structure and rules
g. Remedying problems concerning wholesale electricity market structure and rules
h. Resolving complaints and disputes among electricity market participants quickly and fairly
Enforcing violations of FERC's requirements relating to wholesale electricity markets
Please enter any other issue regarding the oversight and enforcement of wholesale electricity markets that you feel should have been listed above concerning OMOI's level of effectiveness.

5. In general, with regard to oversight and enforcement of wholesale natural gas markets, overall, how effective or ineffective is OMOI in doing the following: (Check one in each row.)
a. Monitoring wholesale natural gas markets to determine whether prices are just and reasonable
b. Analyzing spikes in wholesale natural gas prices
c. Responding appropriately to the causes of wholesale natural gas price spikes
d. Detecting market power abuses in wholesale natural gas markets
e. Correcting detected market power abuses in natural gas market structure and rules
g. Remedying problems concerning natural gas market structure and rules
h. Resolving complaints and disputes among natural gas market participants quickly and fairly
Enforcing violations of FERC's requirements relating to wholesale natural gas markets
Please enter any other issue regarding the enforcement and oversight of wholesale natural gas markets that you feel should have been listed above concerning OMOI's level of effectiveness.

6. Would you agree or disagree with the following statements as they relate to management/resources issues in OMOI? (Check one in each row.)
a. Top OMOI management has established effective processes and procedures to oversee wholesale electricity markets.
b. Top OMOI management has established effective processes and procedures to enforce wholesale electricity market rules.
c. Top OMOI management has established effective processes and procedures to oversee wholesale natural gas markets.
d. Top OMOI management has established effective processes and procedures to enforce wholesale natural gas market rules.
e. My immediate manager(s) provides clear and concise direction.
f. Top management has clearly defined what role OMOI is going to play in monitoring markets.
g. Staffing levels in OMOI are satisfactory.
h. The employee skill mix in OMOI is adequate.
i. Information technology support and services are satisfactory.
j. OMOI maintains a strong focus on achieving the FERC's mission.
k. OMOI has set clear performance expectations.
l. OMOI is able to retain quality employees.
Please enter any other issues regarding management/resources issues in OMOI you feel should have been listed above.

7. Would you agree or disagree with the following statements as they relate to data/knowledge requirements issues in OMOI? (Check one in each row.)
Staff understands what data are required to effectively oversee wholesale electricity markets.
Staff understands what data are required to effectively enforce wholesale electricity market rules.
Staff understands what data are required to effectively oversee wholesale natural gas markets.
Staff understands what data are required to effectively enforce wholesale natural gas market rules.
Staff has adequate access to data on electricity market performance.
Staff has adequate access to data on natural gas market performance.
Staff has adequate knowledge of, or experience with, overseeing competitive electricity markets.
Staff has adequate knowledge of, or experience with, enforcing market rules in competitive electricity markets.
Staff has adequate knowledge of, or experience with, overseeing competitive natural gas markets.
Staff has adequate knowledge of, or experience with, enforcing market rules in competitive natural gas markets.
Staff understands the integration of gas and electricity markets.
Staff understands the relationship between financial markets and energy markets.

8. Would you agree or disagree with the following statements as they relate to authority issues in FERC? (Check one in each row.)
FERC should have authority to enforce reliability rules for electricity.
FERC should have additional authority to require submission/sharing of data from Independent System Operators.
FERC should have additional authority to levy penalties.
FERC should have additional authority to collect necessary data to oversee energy markets and enforce market rules.
Please enter any other issues regarding authority issues in FERC you feel should have been listed above.

When answering the next question, please recall how we defined division earlier in the survey: Division - Refers to a division within OMOI such as the Division of Energy Market Oversight, Division of Management and Communication, etc.

9. Thinking about your current division in OMOI, would you agree or disagree with the following statements? (Check one in each row.)
a. My division has clearly defined its goals and objectives.
b. My division currently has adequate staff to do its work.
c. The staff in my division have the skills needed to do their jobs well.
Please enter any other issues regarding your current division in OMOI you feel should have been listed above.
10. In your opinion, would additional training in the following subject areas assist you in overseeing energy markets and enforcing market rules? (Check one in each row.)
a. Basic economic principles/definitions
Statistical software packages such as SAS or
g. Understanding how financial markets interact with energy markets (including trading, hedging, derivatives, and financial instruments)
In the space provided, please enter any other training that you believe would assist you in overseeing energy markets and enforcing market rules.

The objective of this section is to obtain your views on morale and the general work environment in OMOI.

11. Overall, how would you characterize the current level of morale in OMOI? (Check one.)
15% Neither high nor low
No basis to judge

12. Specifically, how satisfied or dissatisfied are you with each of the following communication issues as they relate to your current work environment? (Check one in each row.)
a. Communication between the Chairman and
b. Communication between the Commissioners (not including the Chairman) and OMOI
c. Communication between OMOI's top
d. Communication between different divisions
e. Communication with offices within FERC
f. Communication between management of
g. Communication with other federal agencies
h. Communication with state agencies

13. Specifically, how satisfied or dissatisfied are you with each of the following cooperation issues as they relate to your current work environment? (Check one in each row.)
a. Cooperation between different divisions in
b. Cooperation with offices within FERC other
c. Cooperation between management of
d. Cooperation with other federal agencies
e. Cooperation with state agencies
f. Cooperation with Market Monitoring Units

14. Specifically, how satisfied or dissatisfied are you with each of the following leadership/change issues as they relate to your current work environment? (Check one in each row.)
a. Leadership provided by Commissioners and
b. Leadership/supervision that I directly receive from my division in OMOI
c. Organizational changes within OMOI
d. Changes in my job duties as a result of

15. Specifically, how satisfied or dissatisfied are you with each of the following resources/rewards issues as they relate to your current work environment? (Check one in each row.)
a. Availability of resources (i.e., budget, technology, staff, etc.) necessary to do my
b. Availability of rewards for job performance
In the space provided, please enter any other issues related to your current work environment that you would like to mention.

16. Thinking about the issues covered in the previous few questions concerning your current work environment, overall, how satisfied or dissatisfied are you with the work environment in OMOI? (Check one.)
15% Equally satisfied as dissatisfied
No basis to judge

17. Do you plan to leave FERC through retirement or resignation, within one of the following time periods? (Check one.)
1 to less than 2 years
2 to less than 3 years
3 to less than 5 years
I have no plans to leave FERC within the next 5 years
45% Unsure at this time

The objective of this section is to obtain your views on the creation of OMOI.

18. Were you employed by FERC before the creation of OMOI in 2002? (Check one.)
64% Yes (Continue with question 19.)
35% No (Skip to question 21.)
No basis to judge (Skip to question 21.)

19. To what extent, if at all, do you believe that the creation of OMOI improved FERC's ability to oversee energy markets overall and enforce market rules in energy markets?
(Check one in each row.)
a. Creation of OMOI improved FERC's ability to oversee energy markets overall
b. Creation of OMOI improved FERC's ability to enforce market rules in energy markets

20. In your opinion, to what extent, if at all, has your work focus changed as a result of the creation of OMOI? (Check one.)
24% Changed to a very great extent
20% Changed to a great extent
26% Changed to a moderate extent
16% Changed to little or some extent
12% Has not changed at all
If your work has changed at all as a result of the creation of OMOI, please describe the changes in the space provided.

Here you are provided an opportunity to provide additional comments or suggestions.

21. If you have any additional comments relating to any of the issues raised in this questionnaire, please enter them in the space provided.

22. If you have any additional suggestions not noted elsewhere on this questionnaire about how FERC or OMOI can improve operations, please enter them in the space provided.

Final Survey Question - Be sure to answer this when the survey is complete.

23. If you have completed the questionnaire, please check the "Completed" box below. Please note: You must answer "Completed" for your answers to be included. Clicking "Completed" is equivalent to "mailing" your questionnaire. It lets us know that you are finished, and that you want us to use your answers. It also lets us know not to send you any follow-up messages reminding you to complete your questionnaire. Thank you for your cooperation.

This appendix contains the Federal Energy Regulatory Commission's (FERC) response to our questions concerning how the agency's new market oversight approach will detect certain market manipulation schemes, such as the ones used by the Enron Corporation, and other potentially noncompetitive actions. (See table 5.) For each of these schemes or types of actions, Office of Market Oversight and Investigations (OMOI) officials provided the (1) FERC office or other organization responsible for detecting it, (2) type of oversight used, and (3) type/source of data used. We received this information from OMOI officials in April 2003. In addition to the individuals named above, Adam Hoffman, Jason Holliday, and Raymond Smith made key contributions to this report. Important contributions were also made by Stuart Kaufman, Ellen Rubin, and Barbara Timmerman.
In June 2002, GAO reported that the Federal Energy Regulatory Commission (FERC) had not yet adequately revised its regulatory and oversight approach for the natural gas and electricity industries' transition from regulated monopolies to competitive markets. GAO also concluded that FERC faced significant human capital challenges to transform its workforce to meet such changes. In responding to the report, FERC said that the new Office of Market Oversight and Investigations (OMOI) it was creating and human capital improvements under way would address these concerns. GAO was asked to report on FERC's progress in (1) establishing an oversight and enforcement capability for competitive energy markets and (2) improving agency-wide human capital management. FERC has made strides in putting an energy market oversight and enforcement capability in place, but work remains to ensure that its efforts will be comprehensive and systematic. Since FERC declared OMOI functional in August 2002, the office has focused primarily on outlining its vision, mission, and primary functions; developing basic work processes; integrating its use of an array of tools to oversee the markets; and hiring staff with market experience. OMOI is also assessing its data needs and developing its working relationships with others, such as the industry's market monitoring units. Nonetheless, the office still has work to do in the following two key areas. Clearly defining its role: OMOI has not clearly defined its role and the activities that it will engage in to achieve its mission. For example, the office has not yet decided on the level of detail at which it will review electricity markets. This decision has substantial implications for the office's data, technology, resource, and staff skill mix needs. Developing formal processes and written procedures: OMOI's processes are largely informal and ad hoc, and it has few written procedures to ensure that its efforts are coordinated, systematic, understood by its staff, and transparent to its stakeholders. Although OMOI has had some early accomplishments--such as a $20 million civil penalty against a company for anticompetitive behavior--it is difficult to judge how effective the office will be until its role and major processes are clearly set out. FERC is also making progress toward addressing its considerable human capital management challenges, but additional actions could increase its likelihood of success. FERC's success in these efforts is important because the extent to which it can carry out its mission in a changing environment depends on its ability to adjust its staff skills and abilities in a difficult context. For example, over half of its workforce will be eligible to retire by 2007. In response, FERC has, among other things, expanded its use of certain personnel flexibilities, such as recruiting and retention bonuses, and is considering use of additional flexibilities. More importantly, FERC, in February 2003, developed a human capital plan. However, the plan does not contain some elements key to successful implementation, including (1) details on specific activities and resources needed to implement its human capital initiatives and (2) results-oriented measures that can be used to track the agency's progress in implementing the initiatives and evaluate their effectiveness. FERC also has not established time frames for many of its human capital initiatives.
As we reported in September 2004, improvements in information technology, decreasing data transmission costs, and expanded infrastructure in developing countries have facilitated services offshoring. Offshoring is reflected in services import data because when a company replaces work done domestically with work done overseas, such as in India or China, the services are then imported from abroad. For example, when a U.S.-based company pays for a service (such as computer and data processing services in India), the payment is recorded as a services import (from India in this example). BEA reports data on trade in services that are frequently associated with offshoring. BEA's trade in services data consist of cross-border transactions between U.S. and foreign residents and comprise five broad categories of services. One of these five categories of services is "other private services," which includes key sectors associated with offshoring under the subcategory of BPT services. In 2003, BPT services accounted for $40.8 billion or 48 percent of U.S. imports of "other private services," which totaled $85.8 billion. (See fig. 1.) U.S. data on BPT services differentiate between affiliated and unaffiliated trade. Affiliated trade occurs between U.S. parent firms and their foreign affiliates and between foreign parent firms and their affiliates in the United States, while unaffiliated trade occurs between U.S. entities and foreigners that do not own, nor are owned by, the U.S. entity. In 2003, total U.S. imports of affiliated BPT services accounted for approximately $29.9 billion, or about 73 percent of all U.S. imports of these services. BEA does not disaggregate affiliated trade in particular types of services by country because of its concerns about the accuracy and completeness of the data that firms report. Total U.S. imports of unaffiliated BPT services amounted to approximately $11.0 billion in 2003, or about 27 percent of total U.S. imports of these services. According to U.S. data, the growth of U.S. trade in BPT services has been rapid. For example, from 1994 to 2003, total unaffiliated U.S. imports of these services more than doubled. In addition, U.S. exports of unaffiliated BPT services almost doubled during the same period. To report data on trade in BPT services, BEA conducts mandatory quarterly, annual, and 5-year benchmark surveys of firms in the United States. In administering its services surveys, BEA seeks to collect information from the entire universe of firms with transactions in BPT services above certain threshold levels for the period covered by each survey. The mailing lists for the surveys include firms in the United States that have previously filed a survey and other firms that BEA believes may have had transactions in the services covered by the survey. The mailing lists of firms receiving surveys are derived, in part, from U.S. government sources, industry associations, business directories, and various periodicals. Firms receiving the surveys are required to report transactions above a certain threshold value, which BEA believes, in theory, captures virtually the entire universe of transactions in the services covered by its surveys. Those firms with transactions falling below the threshold value are exempt from reporting data by type of service, but they are asked to voluntarily provide estimates of the aggregate value of their transactions for all services covered by the survey.
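For readers who want to trace the percentages above back to the underlying dollar amounts, the following minimal Python sketch is our own illustration of the arithmetic, not a BEA calculation.

```python
# Recompute the 2003 import shares cited above (dollar amounts in billions).
# Illustrative check of the reported percentages, not BEA's methodology.

other_private_services_imports = 85.8   # total "other private services" imports
bpt_imports = 40.8                      # business, professional, and technical (BPT)
affiliated_bpt = 29.9                   # trade between parents and their affiliates
unaffiliated_bpt = 11.0                 # trade between unrelated parties

print(f"BPT share of other private services: {bpt_imports / other_private_services_imports:.0%}")
print(f"Affiliated share of BPT imports:     {affiliated_bpt / bpt_imports:.0%}")
print(f"Unaffiliated share of BPT imports:   {unaffiliated_bpt / bpt_imports:.0%}")
# Prints roughly 48%, 73%, and 27%, matching the figures in the text; the
# affiliated and unaffiliated amounts sum to slightly more than the $40.8
# billion total only because of rounding.
```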
The trade data that BEA produces help government officials, business decision makers, researchers, and the American public follow and understand the performance of the U.S. economy. For example, analysts and policy makers use U.S. trade data to assess the impact of international trade on the U.S. balance of payments and the overall economy. In addition, trade policy officials use U.S. trade data to negotiate international trade agreements. U.S. data show a significantly smaller volume of trade in BPT services between India and the United States than Indian data show. BEA data on U.S. imports of unaffiliated BPT services from India indicate that U.S. firms import only a small fraction of the total that India reported in exports of similar services to the United States. In addition, this gap grew between 2002 and 2003. The gap does not exist just for U.S. and Indian data; a similar gap also exists between other developed countries’ import data and Indian export data. BEA data show a rapid increase in U.S. imports of unaffiliated BPT services from India. For 2002, unaffiliated U.S. imports of BPT services from India totaled approximately $240 million; for 2003, they increased to about $420 million. India reported exports of similar services to the United States of about $6.5 billion for 2002 and $8.7 billion for 2003. Thus, the value of the gap between U.S. and Indian data was approximately $6.2 billion in 2002 and about $8.3 billion in 2003, an increase of about one-third. (See fig. 2.) RBI, which is India’s central bank, is responsible for reporting official Indian data on trade in services. However, RBI data on trade in services incorporate the data collected by India’s primary information technology association—the National Association of Software and Service Companies (NASSCOM). To improve the completeness of the data it provides to RBI, NASSCOM includes data on software services exports that it receives from an Indian government program, the Software Technology Parks of India (STPI). While RBI does not provide country-specific data on India’s exports of services to the United States, NASSCOM’s data do provide a country-specific breakdown. Thus, the data cited above for India come from NASSCOM. According to a recent RBI report, a technical group recommended in 2003 that RBI compile data on software and information technology exports through quarterly surveys and through a comprehensive survey to be conducted every 3 years. The first of these studies was released in September 2005, as our report was being finalized, and provides data on Indian exports of computer services for 2002. The 2005 RBI report showed that India exported approximately $4.3 billion in computer services to the United States and Canada for 2002 (2003 data have not yet been provided). Although RBI’s report did not provide an estimate of the U.S. share of these exports, on the basis of NASSCOM’s estimate that 80 to 85 percent of exports to North America were destined for the United States in 2002, we estimate that India exported approximately $3.5 billion in computer services to the United States. Those examining trends in offshoring often compare U.S. and Indian data series; however, at least five factors make this comparison difficult and affect the difference between U.S. and Indian data. 
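The size and growth of the gap follow directly from the figures above. The sketch below simply restates that arithmetic; the dollar values are the rounded figures cited in this report, and the calculation is ours, not BEA's or RBI's.

```python
# Stylized calculation of the U.S.-India gap in BPT services data,
# using the rounded figures cited above (billions of dollars).

us_unaffiliated_imports = {2002: 0.24, 2003: 0.42}   # BEA data
india_reported_exports = {2002: 6.5, 2003: 8.7}      # NASSCOM data

gap = {year: india_reported_exports[year] - us_unaffiliated_imports[year]
       for year in (2002, 2003)}
print(gap)                                            # about $6.2 billion and $8.3 billion

growth = (gap[2003] - gap[2002]) / gap[2002]
print(f"Growth of the gap, 2002 to 2003: {growth:.0%}")   # roughly one-third

# RBI's 2005 survey reported roughly $4.3 billion in 2002 computer services
# exports to the United States and Canada; NASSCOM estimates that 80 to 85
# percent of North American exports went to the United States.
low, high = 0.80 * 4.3, 0.85 * 4.3
print(f"Implied U.S. share: ${low:.1f} billion to ${high:.1f} billion")   # about $3.5 billion
```

The five factors that complicate this comparison are described next.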
These factors relate to (1) the treatment of services provided by foreign temporary workers in the United States; (2) the definition of some services, such as computer programs embedded in goods and certain information technology-enabled services; (3) the treatment of transactions between firms in India and the overseas offices of U.S. firms; (4) the reporting of country-specific data on trade in affiliated services; and (5) the sources of data and other methodological differences in the collection of services trade data. According to U.S. and Indian officials, U.S. and Indian data differ in their treatment of salaries paid to certain temporary foreign workers providing services to clients in the United States. U.S. data generally do not include such salaries as cross-border trade in services; the United States includes only the salaries paid to temporary foreign workers who have been in the United States less than 1 year and who are not on the payrolls of firms in the United States. However, Indian data do include, as Indian exports, the value of services provided by Indian workers employed in the United States for more than 1 year, according to Indian officials. The U.S. approach accords with the international standards of the IMF. According to BEA and international standards, cross-border trade in services occurs between residents of a country and nonresidents, or “foreigners,” and the residency of a temporary foreign worker employed abroad is based, in part, on the worker’s length of stay in the country. Therefore, according to these standards, if a temporary foreign worker stays or intends to stay in the United States for 1 year or more, that worker is considered a U.S. resident, and the value of the work performed is not included in U.S. import data. The treatment of services provided by temporary foreign workers in the United States is likely a significant factor contributing to the difference between U.S. and Indian data, according to Indian officials. Some Indian officials estimated that in past years, approximately 40 percent of India’s exports to the United States of services corresponding to BPT services were delivered by temporary Indian workers in the United States. For example, for 2002, RBI found that approximately 47 percent of India’s global exports of computer services occurred through the on-site delivery of services by temporary Indian workers. U.S. and Indian data also differ, in part, because the two countries classify certain software transactions differently. India counts as trade in services certain transactions in software that are classified as trade in goods in U.S. data. For example, Indian data on trade in services include software embedded on computer hardware, which the United States classifies as trade in goods. Consistent with internationally recommended standards, the United States does not separate the value of embedded software that is physically shipped to or from the United States from the overall value of the media or computer in which it is installed. Thus, the value of such software is not recorded as trade in services but is included in the value of the physical media and hardware, which are counted as trade in goods in U.S. data. We were not able to determine the extent to which this factor contributes to the difference in U.S. and Indian data because we found no estimates of the proportion of embedded software in Indian data on services exports to the United States. 
Indian officials stated that the difference in the treatment of embedded software likely does not contribute significantly to the difference in data because India exports a relatively low value of embedded software. For example, according to Indian officials, the portion of India’s global services exports delivered through physical media and hardware accounts for 10 to 15 percent of the total value of India-reported exports of services corresponding to BPT services. U.S. and Indian data also differ in how they define services in their respective data series. Unlike BEA, RBI and NASSCOM do not report data under the category of BPT services. RBI officials stated that RBI reports trade data on services similar to BPT services under the category of Software Services. RBI does not report a breakdown of its data on software services into subcategories of services. According to a NASSCOM official, NASSCOM classifies its trade data on services that most closely correspond to BPT services under Information Technology and Information Technology-Enabled Services (IT-ITES). The subcategories of services under this classification do not directly correspond to the subcategories of BPT services, but they are similar. For example, under its IT-ITES classification, NASSCOM reports data on IT Services and Software, while BPT services include computer and data processing, and database and other information services. However, NASSCOM includes data on certain information technology-enabled services, such as certain financial services, that are not included in BEA’s definition of BPT services but are recorded separately. Although these categories roughly correspond, the subcategories have not yet been reconciled. Thus, we were not able to determine the extent to which these definitional differences contribute to the difference between U.S. and Indian data. How BEA and India treat services transactions involving the overseas offices of U.S. firms is another factor explaining some of the difference between U.S. and Indian data. Unlike the United States, India counts the sales of services from firms in India to U.S.-owned firms outside the United States as exports to the United States. U.S. data do not count such sales as U.S. imports of services from India because BEA considers the overseas offices of U.S. firms to be residents of the countries where they are located rather than residents of the country of the firms’ owners. The U.S. approach is consistent with international standards. U.S. and Indian officials could not provide us with an estimate of the extent to which the treatment of transactions involving the overseas offices of U.S.-owned firms contributes to the difference in U.S. and Indian data. However, one high-level Indian official stated that it is likely a significant factor. The reporting of affiliated trade in services also differs in U.S. and Indian data. BEA reports country-specific data only for unaffiliated U.S. imports of BPT services, while Indian data include both affiliated and unaffiliated trade in services but do not separate the two. BEA reports detailed data only for unaffiliated trade because it has concerns about the accuracy and completeness of the data that firms report about affiliated trade in BPT services by country. For example, multinational firms with global offices may find it difficult to establish where, between whom, and what types of services have been transacted, and to report these data along national lines to a statistical agency. 
BEA does collect data on affiliated trade in BPT services, but it reports only the total value across all countries because of its concerns about how reliably companies allocate these totals to specific countries. In addition, due to concerns over the reporting burden on U.S. companies, BEA collects less detailed data on affiliated transactions than on unaffiliated transactions. U.S. data on overall affiliated trade across all countries show that a significant majority of total U.S. imports of BPT services takes the form of trade between parents and affiliates. For example, for 2003, approximately three-quarters of all U.S. imports of BPT services—about $29.9 billion—represented trade within multinational firms. If U.S.-Indian trade in these services reflects this overall share of trade through affiliates, then unreported affiliated trade with India may be much larger than the unaffiliated trade that is reported. Therefore, the lack of reported country-level data on affiliated imports of BPT services contributes to the difference in data. There are also differences in the sources of data the United States and India use to collect data on trade in services, which may contribute to overcounting or undercounting of services trade. While both BEA and NASSCOM prepare estimates of cross-border trade in services by surveying qualifying firms, U.S. and Indian data differ in the universe of such firms covered by their survey methodologies. The universe of firms in India exporting services is relatively easy to identify because these firms have an incentive to report data on their exports of services and tend to be concentrated in certain industries. For example, firms exporting software services are required to report export data to the government of India’s STPI program. STPI requires firms to report these data in order to comply with India’s foreign exchange controls and to qualify for certain tax incentives and infrastructure benefits. To improve the completeness of its own survey data from its member firms, NASSCOM incorporates information on other exporters collected under the STPI program prior to providing these data to RBI. Moreover, because services exporting firms in India are concentrated in certain industries, a relatively small number of firms accounts for a large share of exports. For instance, according to Indian officials, NASSCOM surveys its member firms in India to collect the annual dollar value of these firms’ exports. The member firms that NASSCOM surveys number approximately 900 and, according to a NASSCOM official, contribute a large share of India’s total exports of these services. In addition, RBI has begun its own comprehensive survey of companies, which, according to RBI, covered all of the identified companies engaged in software and IT services export activities. RBI identified these companies on the basis of lists provided by NASSCOM, STPI, and the Electronics and Computer Software Export Promotion Council (ESC). In contrast to how India identifies firms exporting services, BEA does not have an easily available list of services importers. Instead, it must identify firms from public sources. BEA acknowledges that its survey methodology may contribute to the undercounting of U.S. imports of services due, in part, to the difficulty it faces in identifying the universe of services importers. The firms in the United States that BEA surveys to estimate U.S. imports are in many different industries and number in the thousands. Thus, BEA notes that it is difficult to establish and maintain a comprehensive mailing list for all U.S. 
firms importing services from foreign sources, particularly if the group of firms that import services changes substantially from year to year. In addition, maintaining accurate coverage using surveys is particularly difficult when there is rapid growth in the activity, as is the case with BPT services imports from India. Under BEA regulations, BEA exempts smaller importers from reporting their imports. Instead, it estimates these imports on the basis of a sample. If the value of smaller transactions is higher than BEA assumes in its estimation procedures, then imports of services would be understated. BEA, therefore, may undercount the total value of U.S. imports of services. The data collection entities, BEA and NASSCOM, also differ significantly in mission and scope. BEA is the U.S. agency charged with collecting, analyzing, and reporting official statistics on a broad range of U.S. imports and exports of services. BEA is regarded as a leading statistical organization, and it provides both statistical concepts and best practices to other countries and statistical organizations worldwide. NASSCOM is not a government statistical agency. It is a private trade association that represents the interests of the software and services industry in India, and data collection is only one element of that broader mission. Recently, RBI has recognized a need to reexamine its current methodology for collecting software export data and is adopting a methodology for collecting services data in accordance with IMF standards. As a U.S. government agency, we were not able to fully review India’s methodologies, but in the next section of this report we further examine the challenges BEA faces in collecting services statistics. BEA faces challenges in collecting services import data, including identifying the full universe of services importers. To test its survey coverage, we provided BEA with lists of firms that we identified from public sources as likely importing BPT services from India. Although the BEA mailing lists included most of the firms we identified, they did not include all of them. In addition, BEA may be undercounting imports because it is challenging to identify all of the applicable surveys to send to firms. BEA also has not always received quality survey responses from firms. BEA has taken action to improve survey coverage and responses through outreach to survey respondents and by attempting to collaborate with other federal agencies, but it has not been able to access data that could assist in identifying the universe of firms importing services. Services offshoring presents its own challenges for statistical agencies. As previously discussed, identifying services importers becomes difficult if the group of firms and individuals importing services changes over time, or if there is a rapid increase in services imports. In the case of BPT services, both U.S. and Indian data show rapid growth in this trade, and BEA may be undercounting U.S. firms importing such services from India because of this growth. (See fig. 3.) BEA acknowledges that it is able to identify a higher proportion of U.S. exporters than U.S. importers. This is because exporters tend to be large firms providing one particular type of service and are concentrated in certain industries, while importers vary in size and industry affiliation. 
Thus, BEA officials expressed concern that they are not able to identify and survey small firms that import BPT services infrequently and that they are potentially undercounting U.S. trade in these services. To test for potential undercounting of U.S. imports, we provided BEA with lists of firms that we identified through publicly available sources as likely to be importing BPT services from India. BEA then (1) reviewed its mailing lists of firms that were sent surveys to verify that it had previously identified and surveyed these firms and (2) verified whether the firms we identified reported imports from India. Table 1 shows the following:

BEA had included in its mailing lists 87 of the 104 firms we identified as likely importing BPT services from India; thus, BEA did not send surveys to 17 of these firms. After further analysis, BEA added 13 of these firms to its mailing lists and has sent them surveys, thus improving its coverage of the universe of services importers.

Of the 66 affiliated firms that received surveys, 48 received the quarterly survey for affiliated imports; thus, BEA did not send 18 affiliated firms this quarterly survey, although they received other surveys.

Of the 21 unaffiliated firms that received surveys, 6 received the quarterly survey for unaffiliated imports; thus, BEA did not send 15 unaffiliated firms this quarterly survey, although they received other surveys.

BEA may miss some BPT services imports because it is difficult to identify all of the surveys that apply to each firm’s services transactions. On the basis of the review of our lists, it appears that some of the firms that BEA identified in at least one of its comprehensive mailing lists were not on the mailing lists for other surveys on which we expected them to appear. These firms likely had transactions covered by surveys other than the ones they received. For example, several companies we identified as having an affiliate office in India did not receive one of the surveys for affiliated transactions, although these firms received a survey for unaffiliated transactions. With respect to BEA’s effort to verify whether firms that we identified actually reported imports from India, 15 of the 51 firms responding to the quarterly surveys indicated imports from India. Thus, 15 of the 104 firms we identified on the basis of public-source data as likely importing BPT services from India reported those imports to BEA. High-level BEA officials indicated that it is possible that companies are not reporting country information because they fall below the survey exemption levels and thus are not required to provide such detailed data to BEA. BEA asks firms falling below survey exemption levels to voluntarily report aggregate transactions for all countries combined, without a country-specific breakdown. While these results cannot be generalized, they confirm the challenges of collecting services import data. However, they do not provide an indication of the magnitude or extent of these challenges. In addition, our lists of firms were based on a review of multiple sources of publicly available information. Without directly surveying each firm, however, it is not possible to confirm that these firms actually purchased BPT services from India. BEA is addressing concerns related to the identification of U.S. importers, the undercounting of services, and the administration of its surveys. 
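The counts from this coverage test fit together as a simple tally. The sketch below merely reproduces the figures cited above; the structure and variable names are ours.

```python
# Tally of the coverage test described above, using the counts cited in the text.

firms_identified = 104                 # firms GAO identified from public sources
on_bea_mailing_lists = 87              # firms already on BEA's mailing lists
not_sent_surveys = firms_identified - on_bea_mailing_lists   # 17
later_added_by_bea = 13                # firms BEA added after further analysis

affiliated = {"received_surveys": 66, "got_quarterly_affiliated_survey": 48}
unaffiliated = {"received_surveys": 21, "got_quarterly_unaffiliated_survey": 6}

# The 66 affiliated and 21 unaffiliated firms together account for the 87 firms on the lists.
assert affiliated["received_surveys"] + unaffiliated["received_surveys"] == on_bea_mailing_lists

missed_affiliated_quarterly = affiliated["received_surveys"] - affiliated["got_quarterly_affiliated_survey"]          # 18
missed_unaffiliated_quarterly = unaffiliated["received_surveys"] - unaffiliated["got_quarterly_unaffiliated_survey"]  # 15

quarterly_respondents = 51             # firms responding to the quarterly surveys
reported_imports_from_india = 15       # respondents indicating imports from India
print(not_sent_surveys, missed_affiliated_quarterly, missed_unaffiliated_quarterly)
```

Examples of the actions BEA is taking to address these coverage concerns follow.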
For example, BEA contracted with a private firm to undertake an external review of its data sources and methods of identifying these services importers. The review will examine the extent of undercounting in both affiliated and unaffiliated services transactions, including the possible sources of undercounting, and any additional methods or sources of information that will improve survey coverage. The goals of this effort include identifying the extent to which qualified firms are not currently on the survey mailing lists and improving the estimates of international transactions. BEA expects the results of this review early in fiscal year 2006. BEA also has made efforts to ensure that firms receive the surveys for which they are qualified. BEA routinely sends surveys to firms that may be exempt from reporting in order to make a determination that they are still exempt. In addition, firms having transactions in services not covered in the surveys they receive are required to request additional surveys from BEA. In order to report data on trade in services, BEA needs to receive accurate and complete survey responses. However, BEA notes that the information it receives from firms on their affiliated imports of particular types of services has not proved sufficiently reliable to support the release of country-level estimates. As previously discussed, BEA is able to report overall affiliated trade for specific countries, but it is not able to report affiliated BPT trade for specific countries. This is because BEA has concerns over the quality of responses it receives from firms when they allocate affiliated imports to detailed types of services. Global firms may have difficulty accurately attributing the services they export to the United States when their operations are spread across multiple countries. In addition, a high-level BEA official said that firms may not fully report all of the affiliated transactions that they are required to report. This official noted that these reporting difficulties may reflect business record-keeping practices, which are designed to meet financial reporting requirements rather than the needs of government surveys. In order to address these challenges, BEA is taking action to improve the quality of survey responses and to overcome the difficulty of reporting detailed data on affiliated imports of services. For example, an examination of BEA’s data on affiliated transactions is a component of BEA’s contract with a private firm that is conducting an external review of BEA’s data sources and methods of identifying services importers. In addition, BEA has requested that the U.S. Census Bureau (Census) conduct an external review of its survey forms and instructions and make recommendations that would improve clarity and promote accurate reporting. BEA is also performing its own review of its surveys to determine the clarity of survey instructions and is providing training to survey recipients on how to complete the surveys accurately. In addition, to improve the quality of its data on affiliated services imports, including affiliated imports of BPT services, BEA is considering collecting data on both affiliated and unaffiliated transactions on the same survey form. BEA is also considering expanding the types of affiliated BPT services for which it requests data to match the detailed data it collects on unaffiliated imports of BPT services. 
BEA is currently negotiating access to data from other federal agencies to expand its existing sources of data and to improve its survey coverage, but it has so far been unable to obtain such access. According to BEA officials, other federal agencies, such as Census, possess data that could assist BEA in preparing its estimates of trade in services, including information on firms in the United States that could be importing services. For example, Census surveys firms to collect data on firms’ business expenses, which include the purchase of BPT services. These surveys may be useful for identifying importers because large purchasers of services may also be importing these services. The survey data that Census currently collects are not directly useful for BEA because the data on business expenses do not separate domestic from international expenses and do not distinguish between affiliated and unaffiliated transactions. However, the Census data would give BEA the names and addresses of potential services importers. In addition, BEA could potentially request that Census add questions to one or more of the surveys that Census administers in order to identify services importers. However, BEA currently faces legal restrictions in gaining access to data used by Census. Although federal laws allow such data sharing between Census and BEA, BEA is generally restricted from gaining access to federal tax information that Census obtains from the Internal Revenue Service. According to BEA officials, BEA is negotiating with Census and the Internal Revenue Service to gain access to sources of data to improve its mailing lists. The large difference between U.S. and Indian data on BPT services makes the analysis of the extent of offshoring more difficult. Some of this difference in data can be attributed to varying definitions of BPT services, but some also appears to be due to incomplete U.S. data. BEA has been seeking various ways to improve the overall quality of U.S. services trade data, but our test of whether it had identified importers of BPT services indicated that it was not identifying all U.S. importers of these services. Given the importance of this category of data in understanding the extent of offshoring of services, a subject of continuing public and congressional concern, we believe that additional efforts to strengthen the quality of U.S. services data are merited. We are recommending that the Secretary of Commerce direct BEA to systematically expand its sources of information for identifying firms to survey. BEA should consider ways to improve its identification of the appropriate survey forms to send to firms and the information requested about services imports, particularly with regard to affiliated imports. We also recommend that the Secretary direct BEA to pursue additional company information from previous Census surveys and consider requesting that Census add questions to future surveys to help identify services importers. The Department of Commerce provided written comments on the draft report, which are reproduced in appendix II. Commerce concurred with our recommendation that BEA should strive to improve its coverage of services imports. In particular, Commerce agreed that BEA should pursue additional company information from Census. Commerce also provided technical comments, which we incorporated into the report as appropriate. 
Following the receipt of agency comments from Commerce, RBI publicly released a report outlining a new methodology to compile services export data in accordance with IMF standards. Although RBI’s new survey methodology conforms more closely to IMF standards for defining international transactions in services, differences between U.S. and Indian data remain due to a variety of factors we discuss in this report. For example, the RBI report acknowledges that Indian data include not only exports of computer-related services but also exports of information technology-enabled services (ITES). Since the primary objective of RBI’s survey was to collect data on software exports in conformity with the IMF’s definition of computer services, RBI’s survey data exclude data from companies exclusively exporting ITES and include only data on computer services. However, RBI’s report does not indicate that RBI’s survey methodology addresses other factors contributing to the difference between U.S. and Indian data. For example, it appears that RBI’s survey data include the earnings of temporary Indian workers employed abroad without taking into account their length of stay or intention to remain abroad. RBI estimated this on-site work to account for approximately 47 percent of India’s total worldwide exports of computer services, although some portion of this total may include services provided by temporary Indian workers employed abroad for over 1 year. In addition, RBI’s report does not indicate that sales of embedded software are excluded from RBI’s survey data. We are providing copies of this report to interested congressional committees and the Secretary of Commerce. Copies will be available to others upon request. In addition, the report will be available at no charge on the GAO Web site at www.gao.gov. If you or your staff have any questions about this report, please contact Mr. Yager at (202) 512-4128. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other GAO contacts and staff acknowledgments are listed in appendix III. This report discusses (1) the extent of the difference between U.S. and Indian data on trade in business, professional, and technical (BPT) services, (2) the factors that explain the difference between U.S. data on imports of BPT services and India’s data on exports of those same services, and (3) the challenges that the United States has faced in collecting services data. To obtain information on the extent of the difference between U.S. and Indian services trade data, we analyzed and compared U.S. and Indian data and interviewed U.S. and Indian government officials from the relevant agencies, including the U.S. Bureau of Economic Analysis (BEA) and the Reserve Bank of India (RBI). RBI relies on a trade association, the National Association of Software and Service Companies (NASSCOM), to collect data on these services. Although we reviewed NASSCOM's survey form and discussed with a NASSCOM official the collection of its statistics, NASSCOM did not provide us with its methodology for ensuring the reliability of its data. Therefore, we were not able to independently assess the quality and consistency of its data. However, for the purposes of this report, we found these data to be sufficiently reliable for reporting the difference in the official U.S. and Indian trade data in BPT services. To determine the factors that explain the difference in U.S. 
and Indian trade data, we reviewed official methodologies, interviewed relevant officials, and conducted a search of available literature. We reviewed documentation and technical notes from BEA and RBI to determine the U.S. and Indian methodologies for collecting and reporting trade in services data and to assess the limitations and reliability of various data series. We discussed these topics with BEA officials. In addition, we traveled to India to interview RBI officials and NASSCOM representatives and to obtain documentation on the collection and limitations of Indian data. We also interviewed a range of U.S. and Indian businesses in India that supply trade data to the United States and India to determine how they report data. We performed a literature search and obtained information from the Brookings Institution, the Institute for International Economics, and the Organization for Economic Co-operation and Development (OECD). To determine the international standards for collecting and reporting trade in services data, we reviewed relevant documentation from international organizations, including the International Monetary Fund and the United Nations. In September 2005, as our report was being finalized, RBI released a report entitled “Computer Services Exports from India: 2002-03,” which discusses the methodology and results of a comprehensive survey that RBI conducted to collect data on India’s “computer services” exports for 2002 in conformity with the International Monetary Fund’s Balance of Payments Manual, 5th edition (1993). The RBI report provides information about RBI’s survey methodology, including the number and types of companies surveyed and the information sought through the survey. In addition, the report outlines recommendations for RBI to collect data on software and information technology exports through representative quarterly surveys and a comprehensive survey every 3 years. We incorporated this additional information from the RBI report where appropriate. To examine the coverage of BEA’s surveys for collecting trade in services data, we supplied BEA with lists of U.S.-based companies we identified as likely importers of services from India to compare with its mailing lists. We developed two lists. The first list included the names and addresses of companies in the United States with affiliate offices in India that are likely importing BPT services from India through affiliates. The second list included the names and addresses of companies that are likely purchasers of services through unaffiliated parties in India. We identified these companies through publicly available sources, including public media, company filings with the Securities and Exchange Commission, annual reports of companies, the list of NASSCOM member companies, and lists of companies compiled by information technology interest groups. Our lists of firms are not necessarily representative of all U.S. firms importing from India, and we do not generalize our results. We asked BEA to compare these lists with its mailing lists for the affiliated and unaffiliated surveys to identify how many companies it was surveying. We requested that BEA provide us with the number of companies from both lists that it was and was not able to identify on its corresponding mailing lists. For companies that received a survey, we asked BEA to identify the number of these companies that responded to the survey and provided information on purchases from India. 
For companies that were not on any mailing list, BEA was asked to identify (1) whether the firms were excluded from its mailing list because they were assumed to be below exemption levels for the particular survey, (2) whether the firms are on BEA’s current mailing list for the particular survey, and (3) whether the firms are listed on other BEA mailing lists. We discussed the results of this review with BEA officials. To assess the challenges the United States has faced in collecting and reporting data on trade in services, we reviewed relevant BEA documentation to determine BEA’s data limitations and the challenges BEA faces in collecting and reporting U.S. data on trade in services. To determine the challenges of expanding BEA’s survey coverage through interagency data sharing, we interviewed officials at BEA and the U.S. Census Bureau (Census), and we reviewed Census documentation. We also interviewed BEA officials to discuss these identified challenges and to determine the plans and actions BEA has taken to improve the quality of U.S. data. Finally, we interviewed Internal Revenue Service (IRS) officials to gain an understanding of IRS policy on restricting access to federal tax information that the IRS provides to Census. We performed our work from March 2005 through September 2005 in accordance with generally accepted government auditing standards. In addition to the person named above, Virginia Hughes, Bradley Hunt, Ernie Jackson, Sona Kalapura, Judith Knepper, Robert Parker, Cheryl Peterson, and Tim Wedding made major contributions to this report.
Trade in business, professional, and technical (BPT) services associated with offshoring needs to be accurately tracked, but a gap exists between U.S. and Indian data. The extent of and reasons for this gap are important to understand in order to address questions about the magnitude of offshoring and to analyze its future development. Under the authority of the Comptroller General of the United States, and as part of a body of GAO work on the issue of offshoring of services, this report (1) describes the extent of the gap between U.S. and Indian data, (2) identifies factors that contribute to the difference between the two countries' data, and (3) examines the challenges the United States has faced in collecting services trade data. GAO has addressed this report to the congressional committees of jurisdiction.

The gap between U.S. and Indian data on trade in BPT services is significant. For example, data show that for 2003, the United States reported $420 million in unaffiliated imports of BPT services from India, while India reported approximately $8.7 billion in exports of affiliated and unaffiliated BPT services to the United States.

At least five definitional and methodological factors contribute to the difference between U.S. and Indian data on BPT services. First, India and the United States follow different practices in accounting for the earnings of temporary Indian workers residing in the United States. Second, India defines certain services, such as software embedded on computer hardware, differently than the United States. Third, India and the United States follow different practices for counting sales by India to U.S.-owned firms located outside of the United States. The United States follows International Monetary Fund standards for each of these factors. Fourth, the U.S. Bureau of Economic Analysis (BEA) does not report country-specific data for particular types of services due to concerns about the quality of responses it receives from firms when they allocate their affiliated imports to detailed types of services. As a result, U.S. data on BPT services include only unaffiliated imports from India, while Indian data include both affiliated and unaffiliated exports. Fifth, other differences, such as identifying all services importers, may also contribute to the data gap.

BEA has experienced challenges in identifying all U.S. services importers and obtaining quality survey data from importers. To test BEA's survey coverage, GAO provided BEA with lists of firms identified from public sources as likely importers of BPT services from India. The results of this test showed that some services importers were not included in BEA's mailing lists. BEA has taken action to address these challenges, including collaborating with other federal agencies, such as the U.S. Census Bureau and the Internal Revenue Service, to better identify firms to survey. However, data-sharing restrictions hamper BEA's efforts.
Recent advances in aircraft technology, including advanced collision avoidance and flight management systems, and new automated tools for air traffic controllers enable a shift from air traffic control to collaborative air traffic management. Free flight, a key component of air traffic management, will provide pilots with more flexibility, under certain conditions, to fly more direct routes from city to city. Currently, pilots primarily fly fixed routes—the aerial equivalent of the interstate highway system—that often are less direct because pilots are dependent on ground-based navigational aids. Through free flight, FAA hopes to increase the capacity, efficiency, and safety of our nation's airspace system to meet the growing demand for air transportation as well as enhance the controllers’ productivity. The aviation industry, especially the airlines, is seeking to shorten flight times and reduce fuel consumption. According to FAA’s preliminary estimates, the benefits to the flying public and the aviation industry could reach into the billions of dollars when the program is fully operational. In 1998, FAA and the aviation community agreed to a phased approach for implementing the free flight program, established a schedule for phase 1, and created a special program office to manage this phase. During phase 1, which FAA plans to complete by the end of calendar year 2002, the agency has been deploying five new technologies to a limited number of locations and measuring their benefits. Figure 1 shows how these five technologies—Surface Movement Advisor (SMA), User Request Evaluation Tool (URET), Traffic Management Advisor (TMA), Collaborative Decision Making (CDM), and passive Final Approach Spacing Tool (pFAST)—operate to help manage air traffic. According to FAA, SMA and CDM have been deployed at all phase 1 sites on or ahead of schedule. Table 1 shows FAA’s actual and planned deployment dates for URET, TMA, and pFAST. To measure whether the free flight tools will increase system capacity and efficiency, in phase 1, FAA has been collecting data for the year prior to deployment and initially planned to collect this information for the year after deployment before making a decision about moving forward. In December 1999, at the urging of the aviation community, FAA accelerated its funding request to enable it to complete the next phase of the free flight program by 2005—2 years ahead of schedule. During this second phase, FAA plans to deploy some of the tools at additional locations and colocate some of them at selected facilities. FAA also plans to conduct research on enhancements to these tools and incorporate them when they are sufficiently mature. FAA plans to make an investment decision in March 2002 about whether to proceed to phase 2. However, by that date, the last site for URET will have been operational for only 1 month, thus not allowing the agency to collect data for 1 year after deployment for that site before deciding to move forward. (See table 1.) FAA officials told us that because the preliminary data showed that the benefits were occurring more rapidly than anticipated, they believe it is unnecessary to wait for the results from the evaluation plan to make a decision about moving forward. To help airports achieve their maximum capacity for arrivals through free flight, FAA’s controllers will undergo a major cultural change in how they will manage the flow of air traffic over a fixed point (known as metering). 
Under the commonly used method, controllers use “distance” to meter aircraft. With the introduction of TMA, controllers will have to adapt to using “time” to meter aircraft. The major technical challenge with deploying the free flight tools is making URET work with FAA’s other air traffic control systems. While FAA does not think this challenge is insurmountable, we believe it is important for FAA to resolve this issue to fully realize URET's benefit of increasing controller productivity. Initially, controllers had expressed concern about how often they could rely on TMA to provide the data needed to effectively manage the flow of traffic. However, according to FAA and subsequent conversations with controllers, this problem was corrected in May 2001 when the agency upgraded TMA software and deployed the new version to all sites. To FAA’s credit, it has decided not to deploy pFAST to additional facilities in phase 2 because of technical difficulties associated with customizing the tool to meet the specific needs of each facility, designing other automated systems that are needed to make it work, and affordability considerations. Ensuring that URET is compatible with other major air traffic control systems is a crucial technical challenge because this requires FAA to integrate software changes among multiple systems. Among these systems are FAA’s HOST, Display System Replacement, and local communications networks. Compounding this challenge, FAA has been simultaneously upgrading these systems’ software to increase their capabilities. How well URET will work with these systems is unknown because FAA has yet to test this tool with them. FAA has developed the software needed for integration and has begun preliminary testing. Although problems have been uncovered during testing, FAA has indicated that these problems should not preclude URET’s continued deployment. By the end of August 2001, FAA expects to complete testing of URET’s initial software in conjunction with the agency’s other major air traffic control systems. FAA acknowledges that further testing might uncover the need for additional software modifications, which could increase costs above FAA’s current estimate for this tool’s software development and could cause the agency to defer capabilities planned for phase 1. Ensuring URET’s compatibility with other air traffic control systems is important to fully realize its benefits of increasing controllers’ productivity. URET is used in facilities that control air traffic at high altitudes and will help associate and lead controllers work together to safely separate aircraft. Traditionally, an associate controller has used the data on aircraft positions provided by the HOST computer and displayed on the Display System Replacement workstation to assess whether a potential conflict between aircraft exists. If so, an associate controller would annotate the paper flight strips containing information on their flights and forward these paper flight strips to the lead controller who would use the Display System Replacement workstation to enter flight plan amendments into the HOST. URET users we spoke with said that this traditional approach is a labor-intensive process, requiring over 30 keystrokes. With URET, an associate controller can rely on this tool to automatically search for potential conflicts between aircraft, which are then displayed. 
URET also helps an associate controller resolve a potential conflict by automatically calculating the implications of any change prior to amending the flight plan directly into the HOST. According to the users we spoke with, these amendments require only three keystrokes with URET. FAA, controllers, maintenance technicians, the aviation community, and other stakeholders agree on the importance of using a phased approach to implementing the free flight program. This approach allows FAA to gradually deploy the new technologies at selected facilities and allows users to gain operational experience before the agency fully commits to the free flight tools. It basically follows the “build a little, test a little, field a little” approach that we have endorsed on numerous occasions. To FAA’s credit, the agency has appropriately used this approach to determine that it will not deploy pFAST in phase 2. We also agree with major stakeholders that adapting to the program’s tools poses the greatest operational challenge because they will change the roles and responsibilities of the controllers and others involved in air traffic services. However, the success of free flight will rely on agencywide cultural changes, especially with controllers, who trust their own judgment more than some of FAA’s new technologies, particularly because the agency’s prior efforts to deploy them have had significant problems. Without training in these new tools, air traffic controllers would be hampered in fulfilling their new roles and responsibilities. Another major challenge is effectively communicating TMA’s capabilities to users. Because FAA has been deferring and changing capabilities, it has been difficult for controllers to know what to expect from this tool and when, and for FAA to ensure that it provides all the capabilities that had been agreed to when FAA approved the investment for phase 1. During our meetings with air traffic controllers and supervisors, their biggest concern was that the free flight tools would require cultural changes in the way they carry out their responsibilities. By increasing controllers' dependence on automation for decisionmaking, these tools are expected to help increase their productivity. Moreover, the tools will require changes in commonly recognized and accepted methods for managing traffic. Controllers and supervisors emphasized that URET will increase the responsibilities of the associate controllers in two important ways. First, their role would no longer be focused primarily on separating traffic by reading information on aircraft routes and altitudes from paper flight strips, calculating potential conflicts, and manually reconfiguring the strips in a tray to convey this information to a lead controller. With the URET software that automatically identifies potential conflicts up to 20 minutes in advance, associate controllers can be more productive because they will no longer have to perform these manual tasks. Second, they can assume a more strategic outlook by becoming more focused on improving the use of the airspace. URET enables them to be more responsive to a pilot’s request to amend a flight plan (such as to take advantage of favorable winds) because automation allows them to more quickly check for potential conflicts before granting a request. Although the controllers said they look forward to assuming this greater role and believe that URET will improve the operational efficiency of our nation’s airspace, they have some reservations. 
Achieving this operational efficiency comes with its own set of cultural and operational challenges. Culturally, controllers will have to reduce their dependency on paper flight strips as URET presents data electronically on a computer screen. According to the controllers we interviewed, this change will be very challenging, especially at facilities that handle large volumes of traffic, such as Chicago, because the two facilities that have received URET have taken several years to become proficient with it even though they have less traffic. Operationally, controllers said that URET’s design must include some backup capability because they foresee the tool becoming a critical component in future operations. Moreover, as controllers become increasingly experienced with and reliant on URET, they will be reluctant to return to the former manual methods because those skills will have become less current. As new controllers join the workforce, an automated backup capability will become increasingly essential because they will not be familiar with controlling traffic manually with paper flight strips. Currently, FAA is not committed to providing a backup to URET in either phase because it considers the tool a support tool, not a mission-critical tool that requires backup. However, the agency is taking preliminary steps to provide some additional space for new equipment in the event it decides to provide this backup. The associated cost increase will vary depending on how the agency decides to address this issue. For TMA, controllers emphasized during our discussions that using time rather than distance to meter properly separated aircraft represents a major cultural shift. While controllers can visually measure distance, they cannot do the same with time. As one controller in a discussion group commented, TMA “is going to be a strain, … and I hate to use the word sell, but it will be a sell for the workforce to get this on the floor and turn it on and use it.” Currently, controllers at most en route facilities use distance to meter aircraft as they begin their descent into an airport’s terminal airspace. This method, which relies on the controllers’ judgment, results in the less efficient use of this airspace because controllers often add distance between planes to increase the margin of safety. With TMA, controllers will rely on the computer’s software to assign a certain time for aircraft to arrive at a predetermined point. Through continuous automatic updating of its calculations, TMA helps balance the flow of arriving flights into congested terminal airspace by rapidly responding to changing conditions. The controllers at the first three en route centers that transitioned to TMA accepted it easily because they had been using time to meter air traffic for 20 years. However, as other en route centers transition to TMA, gaining controllers' acceptance will be more difficult because those controllers have traditionally used distance to meter air traffic. FAA management realizes that the controllers’ transition to metering based on time rather than distance will be challenging and has allowed at least 1 full year for them to become proficient in using the tool and begin to reap its full benefits. Accordingly, the Free Flight Program Office has established a 1-year period for controllers to become trained and comfortable with using this tool. FAA is relying heavily on national user teams to help develop training for TMA and URET. 
However, a lack of training development expertise and other factors have hampered these teams' efforts to provide adequate training for TMA. Controllers said that, while they have knowledge of TMA, they are not specialists in developing training and therefore need more assistance from the program office. Also, because only a few key controllers have experience in using TMA, the teams have had to rely on them to develop a standardized training program while working with local facilities to tailor it to their needs. Moreover, these controllers are being asked to troubleshoot technical problems. Finally, controllers said the computer-based training they have received to date has not been effective because it does not realistically simulate operational conditions. FAA is currently revising its computer-based training to provide more realistic simulations. Because using the free flight tools will require controllers to undergo a complex and time-consuming cultural change, developing a comprehensive training program would greatly help FAA’s efforts to implement the new free flight technologies. Communicating to users how the new tools will benefit both them and the organization will greatly enhance the agency’s training strategy. While FAA’s training plans for URET are preliminary because the tool is undergoing testing and is not scheduled for deployment until the latter part of 2001, we believe that providing adequate training in advance is essential for controllers to become proficient in using this tool. Our discussions with controllers and FAA’s TMA contractor indicated that, in order to address local needs and to fix technical problems with TMA, FAA deferred several of the tool's capabilities that had been planned for earlier deployment in phase 1. FAA officials maintain that these capabilities will be deployed before the end of phase 1. However, if these capabilities are not implemented in phase 1, pushing them into phase 2 will likely increase costs and defer benefits. For example, TMA’s full capability to process data from adjacent en route centers has been changed because FAA determined that providing the full capability was not cost effective. While controllers said that even without this full capability TMA has provided some benefits, they said that deferring some aspects of the tool’s capabilities has made it less useful than they expected. Moreover, controllers maintain that FAA has not clearly communicated to them the changes in the tool’s capabilities. Without knowing how the tool’s capabilities are being changed and when the changes will be incorporated, it is difficult for users to know what to expect and when, and for FAA to evaluate the tool’s cost, schedule, and ability to provide expected benefits. FAA has begun to measure capacity and efficiency gains from using the free flight tools, and its preliminary data show that the tools provide benefits. FAA expects additional sites to show similar or greater benefits, thus providing data to support a decision to move to phase 2 by March 2002. Because the future demand for air traffic services is expected to outpace the tools’ capacity increases, the collective length of delays during peak periods will continue to increase, but not to the extent that it would have without the tools. When FAA, in collaboration with the aviation industry, instituted the phased approach to implement its free flight program in 1998, the agency established a qualitative goal for increasing capacity and efficiency. 
In May 2001, FAA announced quantifiable goals for each of the three tools. For URET, FAA established an efficiency goal to increase direct routings by 15 percent within the first year of full implementation. Achieving this goal translates into reduced flight times and fuel costs for the airlines. The capacity goals for TMA and pFAST are dependent upon whether they are used together (colocated) and whether any constraints at an airport prevent them from being used to their full potential to expand capacity. If they are used together (such as at Minneapolis), FAA expects capacity to increase by 3 percent in the first year of operations and by 5 percent in the following year. However, at Atlanta, which is constrained by a lack of runways, the goal is 3 percent when these tools are used together. If only one of these tools is deployed (such as at Miami), FAA expects a 3-percent increase in capacity. While FAA has established quantifiable goals for these tools, the agency has only recently begun to develop information to determine whether attaining its goals will result in a positive return on the investment. Making this determination is important to help ensure that the capacity and efficiency gains provided by these tools are worth the investment. As previously shown in table 1, the actual systems that will be deployed for TMA and pFAST have only recently been installed at several locations or are scheduled to be installed this winter. To date, prototypes of these tools have been colocated at one location, and the actual equipment has been colocated at three locations. TMA is in a stand-alone mode at two locations. FAA reported that TMA achieved its first-year goal of a 3-percent increase in capacity at Minneapolis, and the agency is collecting data to determine whether the tool is meeting its goals at the other locations. Most of FAA’s data regarding the benefits provided by these tools are based on operations of their prototypes at Dallas-Fort Worth. These data show that TMA and pFAST achieved the 5-percent colocation goal. However, the data might not be indicative of the performance of the actual tools that will be deployed to other locations because Dallas-Fort Worth does not face the constraints affecting many other airports (such as a lack of runways). Because FAA does not plan to begin deploying the actual model of URET until November 2001, the agency’s data on its benefits have been based only on a prototype. At the two facilities—Indianapolis and Memphis—where the prototype has been deployed since 1997, FAA reported that URET has increased the number of direct routings by over 17 percent as of April 2001. According to FAA’s data, all flights through these two facilities were shortened by an average of one-half mile, which collectively saved the airlines approximately $1.5 million per month in operating costs. However, the benefits that FAA has documented for using URET reflect savings for just a segment of a flight—when an airplane is cruising through high-altitude airspace—not the entire flight from departure to arrival. Maintaining URET’s benefits for an entire flight is partly dependent on using it in conjunction with TMA and pFAST. Although a researcher at the Massachusetts Institute of Technology, who is reviewing aspects of FAA’s free flight program, recognizes URET’s potential benefits, the researcher expressed concerns that its benefits could be lessened in the airspace around airports whose capacity is already constrained. 
Likewise, in a study on free flight supported by the National Academy of Sciences and the Department of Transportation, the authors found that the savings attributed to using direct routings might “be lost as a large stack of rapidly arriving aircraft must now wait” in the terminal airspace at constrained airports. Although URET can get an airplane closer to its final destination faster, airport congestion will delay its landing. While TMA and pFAST are designed to help an airport handle arrivals more efficiently and effectively, they cannot increase the capacity of an airport’s terminal airspace beyond the physical limitations imposed by such constraining factors as insufficient runways or gates. In contrast, FAA’s Free Flight Program Office believes that the savings observed with the prototype of URET will accrue when the actual tool is used in conjunction with TMA and pFAST. FAA plans to have procedures in place by the time these three tools are used together so that URET’s benefits will not be reduced. However, the colocation of these three tools is not expected to occur until February 2002, which is only 1 month before the agency plans to make an investment decision for phase 2. Thus, we believe that FAA will not have enough time to know whether URET’s benefits would be reduced. During peak periods, the demand for air traffic currently exceeds capacity at some airports, causing delays. FAA expects this demand to grow, meaning that more aircraft will be delayed for longer periods. Free flight tools have the potential to allow the air traffic system to handle more aircraft (increase capacity) but not to keep up with the projected growth in demand. Thus, they can only slow the growth of future delays. They cannot fully eliminate future delays or reduce current delays unless demand remains constant or declines. FAA’s model of aircraft arrivals at a hypothetical congested airport, depicted in figure 2, illustrates the projected impact of the tools. According to the model, if demand increases and the tools are not deployed (capacity remains constant), the collective delays for all arriving flights (not each one) will increase by about an hour during peak periods. But if the tools are deployed and demand growth still exceeds the resulting capacity increases, these delays will increase by only about half an hour. While recognizing that the free flight tools will provide other benefits, FAA has not quantified them. According to FAA, although TMA and pFAST are designed to maximize an airport’s arrival rates, they also can increase departure rates because of their ability to optimize the use of the airspace and infrastructure around an airport. Regarding URET, FAA maintains that by automating some of the functions that controllers had performed manually, such as manipulating paper flight strips, the tool allows controllers to be more productive. If FAA’s data continue to show positive benefits, the agency should be in a position by March 2002 to make a decision to deploy TMA to additional sites. However, FAA might not be in a position to make an informed decision on URET because the schedule might not allow time to collect sufficient data to fully analyze the expected benefits from this tool during phase 1. Currently, operational issues present the greatest challenge because using the free flight tools will entail a major cultural shift for controllers as their roles and responsibilities and methods for managing air traffic will change. 
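The delay dynamics in FAA's hypothetical-airport model can be made concrete with a short, deterministic calculation: when arrival demand during a peak period exceeds the airport's acceptance rate, the excess aircraft queue, and the collective delay is the accumulated backlog over the period. The sketch below uses illustrative demand and capacity figures assumed for this example, not FAA's data; its only purpose is to show why a modest capacity gain from the tools slows, but does not stop, the growth in collective delay.

```python
# Simplified, deterministic peak-period delay model. The demand and capacity
# figures are illustrative assumptions, not FAA's data: aircraft that cannot
# land in a 15-minute interval wait for the next one.

def collective_delay(demand_per_qtr, capacity_per_qtr, intervals=12, minutes=15):
    """Total delay (aircraft-minutes) accumulated over a peak period."""
    backlog = 0.0
    total_delay = 0.0
    for _ in range(intervals):
        backlog += demand_per_qtr            # aircraft wanting to land this interval
        served = min(backlog, capacity_per_qtr)
        backlog -= served                    # unserved aircraft wait for the next interval
        total_delay += backlog * minutes     # each waiting aircraft accrues one interval of delay
    return total_delay

# Assumed arrival rates per 15-minute interval for a 3-hour peak at a congested airport:
current    = collective_delay(demand_per_qtr=24, capacity_per_qtr=22)          # today
no_tools   = collective_delay(demand_per_qtr=26, capacity_per_qtr=22)          # demand grows, capacity flat
with_tools = collective_delay(demand_per_qtr=26, capacity_per_qtr=22 * 1.05)   # tools add about 5 percent capacity

print(f"Growth in collective delay without tools: {no_tools - current:,.0f} aircraft-minutes")
print(f"Growth in collective delay with tools:    {with_tools - current:,.0f} aircraft-minutes")
```

Under these assumed rates, deploying the tools roughly halves the growth in collective delay but does not eliminate it, mirroring the pattern FAA's figure 2 depicts.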
While FAA management has recognized the cultural changes involved, it has not taken a leadership role in responding to the magnitude of the changes. In particular, while involving controllers in developing and delivering training on these new tools, FAA has not provided support to ensure that the training can be effectively developed and presented at local sites. Because the agency has been changing the capabilities of TMA from what had been originally planned but not systematically documenting and communicating these changes, FAA and the users of this tool lack a common framework for understanding what is to be accomplished and whether the agency has met its goals. While the free flight tools have demonstrated their potential to increase capacity and save the airlines money, only recently has FAA established quantifiable goals for each tool and begun to determine whether its goals are reasonable—that is, whether they will result in a positive return on investment. Because several factors influence the benefits expected from the tools, it is important for FAA to clearly articulate the expectations for each tool by specific location. To make the most informed decision about moving to phase 2 of the free flight program, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following actions: (1) collect and analyze sufficient data in phase 1 to ensure that URET can effectively work with other air traffic control systems; (2) improve the development and the provision of local training to enable field personnel to become proficient with the free flight tools; (3) determine whether the goals established in phase 1 will result in a positive return on investment and collect data to verify that the goals are being met at each location; and (4) establish a detailed set of capabilities for each tool at each location for phase 2 and establish a process to systematically document and communicate changes to them in terms of cost, schedule, and expected benefits. We provided a draft of this report to the Department of Transportation and the National Aeronautics and Space Administration for their review and comment. We met with officials from the Office of the Secretary and FAA, including the Director and Deputy Director of the Free Flight Program Office, to obtain their comments on the draft report. These officials generally concurred with the recommendations in the draft report. They stated that, to date, FAA has completed deployment of the Surface Movement Advisor and the Collaborative Decision Making tools on, or ahead of, schedule at all phase 1 locations and plans to complete the deployment of the remaining free flight tools on schedule. FAA officials also stated that the agency is confident that it will be in a position to make an informed decision, as scheduled in March 2002, about moving to the program’s next phase, which includes the geographic expansion of TMA and URET. Furthermore, FAA stated that the free flight tools have already demonstrated positive benefits in an operational environment and that it expects these benefits will continue to be consistent with the program’s goals as the tools are installed at additional sites. In addition, FAA officials provided technical clarifications, which we have incorporated in this report, as appropriate. We acknowledge that FAA has deployed the Surface Movement Advisor and the Collaborative Decision Making tools on schedule at various locations. 
Furthermore, the report acknowledges that the free flight tools have demonstrated benefits and that the agency should have the data on TMA to make a decision about moving forward to phase 2 by March 2002. However, as we note in the report, FAA faces a significant technical challenge in ensuring that URET works with other air traffic control systems. Moreover, the data on URET's benefits reflect those of the prototype system. FAA is scheduled to deploy the first actual system in November 2001 and the last in February 2002—just 1 month before it plans to make an investment decision. With this schedule, the actual system might not be operational long enough to gather sufficient data to measure its benefits. Furthermore, FAA has yet to overcome the operational challenge that is posed when controllers use TMA and must shift from the traditional distance-based method of metering air traffic to one based on time. If FAA cannot satisfactorily resolve these issues, the free flight program might not continue to show positive benefits and could experience cost overruns, delays, and performance shortfalls. The National Aeronautics and Space Administration expressed two major concerns. First, it felt that the benefits provided by the TMA tool justified its further deployment. Our initial conclusion in the draft report, that FAA lacked sufficient data to support deploying this tool to additional sites, was based on FAA’s initial evaluation plan, which required at least 1 year of operational data after each tool had been deployed. FAA officials now believe that waiting for full results from the evaluation plan before making a decision to move forward is no longer necessary because TMA is producing performance results more rapidly than anticipated. This report now acknowledges that the agency should have the data it needs to make a decision to move forward with this tool. Second, NASA felt that the report was unclear regarding the nature of our concerns about the reliability of TMA's data. The discussion in the draft report indicated that FAA lacked sufficient data to show that it had addressed our concerns with TMA. FAA officials subsequently provided this supporting information, and this report has been revised accordingly. In addition, National Aeronautics and Space Administration officials provided technical clarifications, which we have incorporated into this report, as appropriate. (See appendix II for the National Aeronautics and Space Administration's comments.) As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to interested Members of Congress; the Secretary of Transportation; the Administrator, Federal Aviation Administration; and the Administrator, National Aeronautics and Space Administration. We will also make copies available to others upon request. If you have questions about this report, please contact me at (202) 512-3650. Key contributors are listed in appendix III. Because of the importance of the free flight program to the future operation of our nation’s aviation system and the upcoming decision about whether to proceed to the next phase, the Chairmen of the Senate Committee on Commerce, Science, and Transportation and the Subcommittee on Aviation asked us to provide information to help them determine whether the Federal Aviation Administration (FAA) will be in a position to decide on moving to the next phase. 
This report discusses (1) the significant technical and operational issues that could impair the ability of the free flight tools to achieve their full potential and (2) the extent to which these tools will increase efficiency and capacity while helping to minimize delays in our nation’s airspace system. Our review focused on three free flight phase 1 tools—the User Request Evaluation Tool, the Traffic Management Advisor, and the passive Final Approach Spacing Tool—because they account for approximately 80 percent of FAA’s $630 million estimated investment for phase 1 and approximately 80 percent of FAA’s $717 million estimated investment for phase 2. We did not review the Surface Movement Advisor or the Collaborative Decision Making tools because generally they had been implemented at all phase 1 locations when we started this review and FAA does not intend to deploy their identical functionality in phase 2. To obtain users’ insights into the technical and operational issues and the expected benefits from these tools, we held four formal discussion group meetings with nationwide user teams made up of controllers, technicians, and supervisors from all the facilities currently using or scheduled to receive the Traffic Management Advisor during phase 1. We also visited and/or held conference calls with controllers, technicians, and supervisors who used one or more of these tools in Dallas, Texas; southern California; Minneapolis, Minnesota; Memphis, Tennessee; Indianapolis, Indiana; and Kansas City, Kansas. To identify the significant technical and operational issues affecting these tools, we reviewed criteria related to their development and acquisition. Based on these criteria, we interviewed FAA officials in the Free Flight Program Office, the Office of Air Traffic Planning and Procedures, and the Office of Independent Operational Test and Evaluation. To review test reports and other documentation highlighting technical and operational issues confronting these tools, we visited FAA’s William J. Hughes Technical Center in Atlantic City, New Jersey, and FAA’s prime contractors that are developing the three free flight tools. We also visited the National Aeronautics and Space Administration’s Ames Research Center at Moffett Field, California, to understand how its early efforts to develop free flight tools are influencing FAA’s current enhancement efforts. To determine the extent to which the free flight tools will increase capacity and efficiency while helping to minimize delays, we analyzed the relevant legislative requirements and the Office of Management and Budget’s requirements that recognize the need for agencies to develop performance goals for their major programs and activities. We also interviewed FAA officials in the Free Flight Program Office and the Office of System Architecture and Investment for information on the performance goals of the free flight tools during phase 1. In addition, we held discussions with officials from RTCA, which provides a forum for government and industry officials to develop consensus-based recommendations. We also reviewed documentation explaining how the tools are expected to help, and actually have helped, increase system capacity and efficiency, thereby helping to minimize delays. We conducted our review from October 2000 through July 2001, in accordance with generally accepted government auditing standards. In addition to those named above, Nabajyoti Barkakati, Jean Brady, William R. Chatlos, Peter G. Maristch, Luann M. Moy, John T. Noto, and Madhav S. Panwar made key contributions to this report.
This report reviews the Federal Aviation Administration's (FAA) progress on implementing the Free Flight Program, which would provide more flexibility in air traffic operations. This program would increase collaboration between FAA and the aviation community. By using a set of new automated technologies (tools) and procedures, free flight is intended to increase the capacity and efficiency of the nation's airspace system while helping to minimize delays. GAO found that the scheduled March 2002 date will be too early for FAA to make an informed investment decision about moving to phase 2 of its Free Flight Program because of significant technical and operational issues. Furthermore, FAA's schedule for deploying these tools will not allow enough time to collect sufficient data to fully analyze their expected benefits. Currently, FAA lacks sufficient evidence to demonstrate that these tools can be relied upon to provide accurate data.
In our report, High-Risk Series: An Update, we identified agencies’ lack of comprehensive risk management strategies as an emerging challenge for the federal government. Increasingly limited fiscal resources across the federal government, coupled with the emerging requirements from the changing security environment, emphasize the need for DOD to develop a risk-based strategic investment approach. For this reason, we have advocated that DOD adopt a comprehensive risk management approach for decision making. Furthermore, DOD and other federal agencies are required by statute to develop a results-oriented management approach to strategically allocate resources on the basis of performance. The balanced scorecard—a concept to balance an organization’s focus across financial, customer, internal business, and learning and growth management areas—is one approach for developing results-oriented management that government agencies have recently started to adopt. At the direction of the Secretary of Defense, DOD developed a risk management framework that DOD later aligned with its results-oriented management activities through a DOD balanced scorecard. An emerging challenge for the federal government involves the need for the completion of comprehensive national threat and risk assessments in a variety of areas. For example, emerging requirements from the changing security environment, coupled with increasingly limited fiscal resources across the federal government, emphasize the need for agencies to adopt a sound approach to making resource decisions. We have advocated that the federal government, including DOD, adopt a comprehensive threat or risk management approach as a framework for decision making that fully links strategic goals to plans and budgets, assesses values and risks of various courses of action as a tool for setting priorities and allocating resources, and provides for the use of performance measures to assess outcomes. Based on our review of the literature, as shown in figure 1, the goal of risk management is to integrate systematic concern for risk into the usual cycle of agency decision making and implementation. A risk management cycle represents a series of analytical and managerial steps, basically sequential, that can be used to assess risk, evaluate alternatives for reducing risks, choose among those alternatives, implement the alternatives, monitor their implementation, and continually use new information to adjust and revise the assessments and actions, as needed. Adoption of a risk management cycle such as this can aid in assessing risk by determining which vulnerabilities should be addressed, and how they should be addressed, within available resources. For the purposes of this report, we focused on the stages of the risk management cycle that involve DOD’s actions to set strategic goals and objectives, establish investment priorities based on risk assessments, and implement and monitor those priorities. Risk management’s objectives are essentially the same as those of good management, and they are consistent with the broad economy and efficiency objectives of good government—namely, to provide better outcomes for the same amount of money, or to provide the same outcomes with less money. Therefore, risk management’s objectives are also compatible with those of the federal government’s results-oriented management approach, which was enacted in the Government Performance and Results Act (GPRA) of 1993, and the balanced scorecard approach. 
Congress enacted GPRA to focus the federal government on achieving results through the creation of clear links between the process of allocating scarce resources and an agency’s strategic goals, or the expected results to be achieved with those resources. Building on GPRA’s foundation, the current administration has taken steps to strengthen the integration of budget, cost, and performance information by including budget and performance integration as one of its management initiatives under the umbrella of the President’s Management Agenda. The Budget and Performance Integration initiative includes efforts such as the Program Assessment Rating Tool (PART), improving outcome measures, and improving monitoring of program performance. The balanced scorecard approach is a management tool that some federal agencies have adopted to help them translate the strategy set forth in a results-oriented management approach into the operational objectives that drive both behavior and performance. The balanced scorecard consists of four management areas that organizations should focus on—financial, customer, internal business, and learning and growth. DOD introduced the risk management framework in its strategic plan, the 2001 QDR report. The 2001 strategic plan articulated the new administration’s emphasis on transforming military forces and defense business practices to meet the changing threats facing our nation. In his guidance to the department for the 2001 QDR strategic planning process, the Secretary of Defense stated the need for DOD to use a risk mitigation approach for balancing force, resource, and modernization requirements across defense planning timelines. This guidance also stated that DOD must include the identification of output-based measures to reduce inefficiencies throughout the department in any approach to risk management. Building on the guidance, the 2001 QDR outlined DOD’s risk management framework. According to the QDR, the framework would enable DOD to address the tension between preparing for future threats and meeting the demands of the present with finite resources. It was also intended to ensure that DOD was sized, shaped, postured, committed, and managed with a view toward accomplishing the strategic plan’s defense policy goals. DOD adapted the balanced scorecard concept to the risk management framework by substituting the four dimensions of risk—force management, operational, future challenges, and institutional—for the scorecard’s four management areas. The risk management framework was to be a transformational tool that would provide a balanced perspective of the organization’s execution of strategy and ensure a top-down approach. The 2002 policy guidance also designated four preliminary performance goals for each of the four risk quadrants. In addition, the guidance required that performance goals and measures be cascaded to the services and defense agencies. Figure 2 shows a comparison of the balanced scorecard’s four management areas with the framework’s four risk dimensions, as provided by DOD. Despite positive steps, DOD needs to take additional actions before the risk management framework is fully implemented and DOD can demonstrate real and sustainable progress in using a risk-based and results-oriented approach to strategically allocate resources across the spectrum of its investment priorities. 
For example, DOD is still in the process of developing department-level measures for the framework that address results-based management principles, such as linking performance information to strategic goals so that this information can be used to monitor performance results and determine how well the department is doing in achieving its strategy. Without more results-oriented performance measures, DOD may be unable to provide the services and other defense components with clear roadmaps of how their activities contribute to meeting DOD’s strategic goals. In addition, the framework’s performance goals and measures are not clearly linked to DOD’s current strategic plan and strategic goals. Furthermore, the extent to which the risk management framework is linked to the budget cycle is unclear. Without better measures, clear linkages, and greater transparency, DOD will be unable to fully measure progress in achieving strategic goals or demonstrate to Congress and others how it considered risks and made trade-offs in making investment decisions. DOD has taken positive steps toward developing measures for each of the performance goals under the framework’s four risk quadrants; however, developing a set of measures that can be used to monitor performance results is still a work in progress. Based on GAO’s prior work on results-based management principles, we found that leading organizations’ performance measures are: (1) designed to demonstrate results, or provide information on how well the organization is achieving its goals; (2) limited to a vital few, and balanced across priorities; and (3) used by management to improve performance. However, the set of measures DOD has developed for the risk management framework does not adequately address these principles. While DOD established four risk quadrants and developed performance goals and measures of two types—activity measures (measures to track initiatives) and performance measures—the majority of its measures do not provide sufficient information to monitor performance against the risk quadrants’ goals. First, DOD officials acknowledge that establishing department-level measures for the framework that demonstrate results is still a work in progress, as the majority of the risk management framework’s measures require further development or refinement. In fact, as shown in table 1, 44 of the 77 department-level measures for all four quadrants, or over 50 percent, are activity measures. According to DOD sources, activity measures are intended to produce a new performance measure, establish a new baseline or benchmark, or define a new capability, rather than monitor a specific annual performance target. DOD officials stated that, once these activities are completed, the department will be better able to monitor department-level performance against strategic goals. However, our analysis found that the activity measures, as defined in DOD’s external reports, typically do not provide sufficient information to monitor the department’s progress in achieving the stated goal they are to measure, such as developing a new performance measure or baseline. The desired outcomes for activity measures generally state that a task was or will be completed by a certain date, but they do not provide sufficient information on whether the activity is on schedule, the interdependencies among tasks, or the contribution toward enhancing the department’s performance. 
Therefore, Congress and other external stakeholders lack information and adequate assurances that DOD is making progress in implementing a risk-based and results-oriented management approach to making investment decisions. Second, DOD’s department-level performance measures are still a work in progress in that these measures do not provide a well-rounded depiction of DOD’s performance. In our previous work, we have found that performance measurement efforts that are not balanced across priorities may skew an agency’s performance and keep its senior leadership from seeing the whole picture. For example, in developing department-level measures for the risk management framework, DOD appears to have overemphasized its force management priorities at the expense of operational risk. As illustrated in table 2, the operational risk quadrant has no performance measures, while the force management risk quadrant has a total of 36 measures, including 15 activity measures and 21 performance measures. In providing technical comments to a draft of this report, DOD objected to our recoding of five department-level performance measures as activity measures. We recoded these measures because they tracked milestones and events, which corresponded to DOD’s definition of an activity measure. The measures we recoded addressed the following: a civilian human resources strategic plan, a military human resources strategic plan, monitoring the status of defense technology objectives, the strategic transformation appraisal, and support for acquisition excellence goals. Finally, DOD officials indicated that DOD is systematically using performance measures to monitor progress and improve performance for only one risk quadrant, although individual measures under the other three risk quadrants may be monitored. We have found that leading organizations use performance information to improve organizational performance and identify performance gaps, and to provide incentives that reinforce a results-oriented management approach. According to DOD officials, the force management quadrant is the only quadrant that is managed by one individual and one office—the Under Secretary of Defense for Personnel and Readiness and his office. These officials stated that this situation is a critical factor in the progress DOD has made in systematically monitoring performance across the force management quadrant on a routine basis. For example, officials stated that the Under Secretary of Defense personally leads quarterly monitoring sessions on the force management quadrant’s performance. DOD officials also told us that the Under Secretary of Defense for Personnel and Readiness has greatly facilitated this monitoring by developing a centralized database to capture the performance data used to track DOD’s performance in meeting the quadrant’s goals. Unless all of the risk management framework’s quadrants are systematically monitored, implementation of the framework may be hindered and the framework risks becoming a paper-driven compliance exercise. Indeed, one DOD official told us that he views the risk management framework and its measures as a “reporting drill” and that his office would not change its processes if DOD were no longer to use the framework. DOD is still in the process of cascading the risk management framework’s goals and measures to the services. 
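Before turning to how these goals and measures are being cascaded to the services, the short sketch below illustrates the kind of portfolio check described above: given a list of department-level measures tagged by risk quadrant and type, it computes the share that are activity measures and flags quadrants with no performance measures. The counts for the force management and operational quadrants follow tables 1 and 2 as described in this report; the split for the remaining two quadrants is assumed for illustration and is not DOD's actual inventory.

```python
# Hypothetical tally of department-level measures by risk quadrant and type.
# Force management and operational counts follow the report; the future
# challenges and institutional splits are assumed for illustration only.
from collections import Counter

measures = (
    [("force management", "activity")] * 15 + [("force management", "performance")] * 21
    + [("operational", "activity")] * 8
    + [("future challenges", "activity")] * 11 + [("future challenges", "performance")] * 7
    + [("institutional", "activity")] * 10 + [("institutional", "performance")] * 5
)

by_type = Counter(kind for _, kind in measures)
total = len(measures)
print(f"Activity measures: {by_type['activity']} of {total} ({by_type['activity'] / total:.0%})")

# Flag quadrants with no performance measures, an indication of an unbalanced scorecard.
for quadrant in sorted({q for q, _ in measures}):
    performance_count = sum(1 for q, kind in measures if q == quadrant and kind == "performance")
    if performance_count == 0:
        print(f"Quadrant '{quadrant}' has no performance measures")
```

Applied to DOD's actual inventory of measures, a check of this kind would surface both the dominance of activity measures and the absence of performance measures in the operational risk quadrant.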
We have found that leading organizations seek to establish clear hierarchies of goals and measures that cascade down so that subordinate units have straightforward roadmaps to demonstrate how their activities contribute to meeting the organization’s strategy. According to DOD officials, all of the services are attempting to align their existing performance measures with the department-level performance goals and measures. However, service officials said that it is challenging to cascade the department-level activity measures, because these measures represent very broad initiatives that may not be applicable at all DOD levels. Officials from one service said they have had to develop new measures to align with the department-level measures, because they had been assessing performance with fewer measures than the Office of the Secretary of Defense had developed. The risk management framework’s performance goals and measures are not clearly linked to a coherent strategic plan—a key principle of results-oriented management. The development of such a strategic plan is a critical next step in using a risk-based and results-oriented approach to making investment decisions. Without these linkages, DOD cannot easily demonstrate how achievement of a performance goal or measure contributes to the achievement of strategic goals and ultimately the organization’s mission. Our previous work indicated that DOD’s strategic plan, the 2001 QDR, did not provide a sound foundation for the risk management framework. We reported that the usefulness of the 2001 QDR was limited by its lack of focus on longer-term threats and requirements for critical support capabilities and that the QDR provided few insights into how future threats and planned technical advances could affect future force requirements. In turn, this lack of focus and insight limited the QDR’s usefulness as a foundation for fundamentally reassessing U.S. defense plans and programs and for balancing resources across near- and midterm risks. DOD officials indicated that DOD has not yet defined the linkages between the risk management framework’s performance goals and the strategic goals in the 2001 QDR. Furthermore, the Defense Business Board’s official minutes for its July 28, 2005, meeting contained a recommendation that the Secretary of Defense define department-level objectives, which should then be cascaded down through the department. In discussing the ongoing 2005 QDR, DOD stated that establishing these linkages was very challenging because of the size and scope of its operations, although the department would continue its efforts to do so. However, as suggested by the Defense Business Board and our previous work, if DOD’s strategic plan is to drive the department’s operations, a straightforward linkage is needed among strategic goals, annual performance goals, and day-to-day activities. The ongoing 2005 QDR offers DOD the opportunity to strengthen its strategic planning. According to DOD officials, the department has begun to consider risk in its investment decision making; however, the full extent to which the framework’s risk-based and results-oriented approach has been linked to the fiscal year 2006 budget cycle is unclear. Our work indicates that leading organizations link strategy to the budget process through results-oriented management to evaluate potential investments or initiatives. DOD sources indicated that the department has begun to consider risk during its usual cycle of investment decision making. 
For example, according to DOD sources, the Secretary of Defense articulated broad areas for increasing or decreasing risk under each quadrant in the fiscal years 2006–2011 planning guidance, leaving it up to the defense components to decide how to structure their investment decisions within those broad areas consistent with the Secretary’s risk guidance. In addition, DOD officials stated that the framework has increased awareness within the department of the need to balance risk over time. For example, when DOD reduced the fiscal years 2006–2011 defense program by $30 billion, DOD officials stated that the department did not take the traditional budgetary approach of cutting each defense component’s budget by a certain percentage. Instead, DOD officials stated that the Secretary of Defense used a collaborative approach with service participation to discuss where to take the budget reductions and how these cuts would affect risk, although DOD officials offered various views on how extensively the framework was used to make those decisions. In addition, DOD required that the services and other defense components offset any funding increase in one area with a funding decrease in another area for the fiscal years 2006–2007 budget submission. According to DOD officials, risk—whether on the basis of “professional judgment” or analysis—was considered in these deliberations. For example, the Army’s plan for fiscal years 2006–2023 articulated areas for increasing risks so that it could decrease risk in the operational risk dimension by investing in current capacity. However, the fiscal year 2006 budget submission does not include any specific information on how DOD systematically identified or assessed departmental risks to establish DOD-wide investment priorities. For example, the military services’ share of the Future Years Defense Program (FYDP) remained relatively unchanged from fiscal year 2005 to fiscal year 2006 (see table 3), providing one indication that the risk management framework may not yet be a useful tool for balancing departmental risks across the services. DOD has reported on the risk management framework in meeting the department’s GPRA and other reporting requirements. For example, the fiscal year 2004 Performance and Accountability Report describes what DOD is doing, or plans to do, to define, measure, and monitor performance goals in the four risk quadrants but does not discuss the implementation status of the risk management framework. Furthermore, the fiscal year 2004 report, the most recent available, provided insufficient information to assist Congress in overseeing how DOD plans to prioritize investment decisions within or across the risk quadrants. Without more detailed information, Congress may have insufficient transparency into how DOD has identified and assessed risks and made trade-offs in its investment decision making. In addition, we reported in May 2004 that congressional visibility over investment decision making also was limited by the absence of linkages between the risk management framework and military capabilities planning and the FYDP. Because the FYDP lacked these linkages, we concluded that decision makers could not use it to determine how a proposed increase in capability would affect the risk management framework. Our work also has shown that the FYDP may understate the costs of weapon system programs; therefore, DOD may be starting more programs than it can afford. 
For example, our assessment of 54 major programs, representing an investment of over $800 billion, found that the majority of these programs were costing more and taking longer to develop than planned. Problems occurred because of DOD’s overly optimistic planning assumptions about the long-term costs of weapon system programs and its failure to capture early on the requisite knowledge that is needed to efficiently and effectively manage program risks. When DOD has too many programs competing for funding and approves programs with low levels of knowledge, it is accepting the adverse cost and schedule risks that are likely to follow. As a result, it will probably get fewer quantities for the same investment or face difficult choices about which investments it cannot afford to pursue. The findings of our work suggest that having a departmentwide investment strategy for weapon systems, to allocate resources across investment priorities, would help reduce these risks. Four key challenges impede DOD’s progress toward implementing the risk management framework. The first implementation challenge is overcoming cultural resistance to change in a department as massive, complex, and decentralized as DOD. The second challenge is the lack of sustained leadership, and the third challenge is the absence of implementation goals and timelines. These challenges relate to DOD’s failure to follow crucial transformational steps. The fourth challenge—integrating the risk management framework with decision support processes and related reform initiatives into a coherent, unified management approach for the department—relates to key results-oriented management practices. Unless DOD addresses these challenges and successfully implements the risk management framework, or a similar approach, it may continue to experience (1) a mismatch between programs and budgets, and (2) the proportional, rather than strategic, allocation of resources to the services. Transforming DOD’s organizational culture—from a focus on inputs and programs to strategically balancing investment risks and monitoring outcomes across the department—through the implementation of the risk management framework is a significant challenge for the department for several reasons. First, as we noted in our 21st Century Challenges report, to successfully transform, DOD needs to overcome the inertia of various organizations, policies, and practices that became rooted in the Cold War era. The department’s expense, size, and complexity, however, make overcoming this resistance and inertia difficult. In fiscal year 2004, DOD reported that its operations involved $1.2 trillion in assets, $1.7 trillion in liabilities, over $605 billion in net cost of operations, and over 3.3 million military and civilian personnel. For fiscal year 2005, DOD received appropriations of about $417 billion. Moreover, execution of its operations spans a wide range of defense organizations, including the military services and their respective major commands and functional activities, numerous large defense agencies and field activities, and various combatant and joint operation commands, which are responsible for military operations for specific geographic regions or theaters of operations. Second, DOD’s highly decentralized management structure is another contributing factor that makes cultural change difficult. 
Although under the authority, direction, and control of the Secretary of Defense, the military services have the legislative authority to organize, equip, and train the nation’s armed forces for combat under Title 10 of the U.S. Code. Furthermore, Congress directly appropriates funds to the services for programs and activities that support these purposes. In the opinion of knowledgeable DOD officials, this legislative authority has resulted in a culture that makes it difficult to develop department-level, or joint, management approaches. For example, the allocation of budgets on a proportional, rather than a strategic, basis among the military services is a long-standing budgetary problem that we have identified as a major management challenge for the department. In addition, the Joint Defense Capabilities Study, chartered by the Secretary of Defense in March 2003, made the following observations on how DOD’s organizational culture does not reinforce a departmental or joint approach to investment decision making and results management: DOD’s bottom-up strategic planning process did not support early senior leadership involvement and did not provide integrated departmentwide objectives, priorities, and roles as a framework for planning joint capabilities. Service-centric focus on programs and weapons platforms resulted in a process that did not provide an accurate picture of joint needs, nor did it provide a consistent view of priorities and acceptable risks across the department. The resulting budget did not optimize capabilities at either the department or the service level. Accountability and feedback focused on monetary input rather than output; therefore, much of the information provided did not support the senior leaders’ decision making as it did not indicate how well the department was being resourced to meet current and future mission requirements. The lack of sustained leadership attention and appropriate accountability has hindered DOD’s progress in implementing the risk management framework. Our work has indicated that sustained leadership is a key transformational, or change management, practice. However, knowledgeable DOD officials indicated that DOD’s senior leadership did not provide sustained attention to the framework’s implementation. For example, a DOD official actively involved in the framework’s implementation stated that meetings with senior leadership that were to provide oversight of the framework’s implementation have not been regularly scheduled. DOD officials indicated that as a result of this lack of sustained leadership, DOD has not placed much emphasis on implementing the risk management framework at the department level. In addition, other DOD officials stated that changes in leadership have made it difficult to implement the risk management framework or develop performance measures. For example, since October 2004, DOD has experienced turnover in several senior-level positions, including the Deputy Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; and the Director of Program Analysis and Evaluation (PA&E). In the absence of sustained leadership attention, DOD officials offered conflicting perspectives on the status of the risk management framework, with some officials suggesting that the framework had been overtaken by other performance-based or risk-based management initiatives while another suggested that the framework was primarily a compliance exercise. 
DOD officials also held differing perspectives on the purpose of the framework, with some believing that it was developed to monitor the Secretary of Defense’s priority areas and others that it was a programming and budgeting tool. Implementation of the risk management framework has also been challenged by the lack of clear lines of authority and appropriate accountability. No single individual or organization has been given overarching leadership responsibilities, authority, or accountability for achieving the framework’s implementation. Instead, the responsibility for various tasks and performance measures has been spread among several organizations, including the Director, PA&E; the Under Secretary of Defense for Personnel and Readiness (P&R); and the Under Secretary of Defense, Comptroller/Chief Financial Officer. We testified in April 2005 that as DOD embarks on large-scale change initiatives, the complexity and long-term nature of these initiatives require the development of an executive position capable of providing strong and sustained leadership—over a number of years and various administrations. For this reason, we have supported legislation to create a chief management officer (CMO) position at DOD to provide such sustained leadership. A CMO could also provide the leadership needed to successfully develop a risk-based and results-oriented management approach at DOD, such as the risk management framework. Accountability for implementation of the risk management framework also has been hindered by the absence of implementation goals and timelines with which to gauge progress. As we have previously reported, successful change management efforts use implementation goals and timelines to pinpoint performance shortfalls and gaps, suggest midcourse corrections, and build momentum by demonstrating progress. However, DOD’s limited guidance on the risk management framework did not establish implementation goals and timelines, nor did it require that implementation goals and timelines be developed. According to knowledgeable DOD officials, DOD did not see the need for implementation goals or timelines because the framework was not meant to change processes or create new ones, but rather was a management tool to improve upon investment decision-making processes. Regardless of how DOD classifies the risk management framework, we have found that implementation goals and timelines are essential to any transformational change, such as that envisioned by the Secretary of Defense with the risk management framework, because of the number of years it can take to complete the change. Moreover, the absence of implementation goals and timelines makes it difficult to determine whether progress has been made in implementing the framework over the last 2 ½ years, and whether DOD’s revisiting of the framework during the 2005 QDR represents an evolutionary progression or implementation delays. DOD faces a significant challenge integrating the risk management framework with decision support processes for planning, programming, and budgeting and with related reform initiatives into a coherent, unified management approach. The goal of both risk management and results-oriented management is to integrate the systematic concern for risk and performance into the usual cycle of agency decision making and implementation. DOD’s challenge in meeting these goals is demonstrated by the number of initiatives, as shown in table 4, that DOD has put in place to improve investment decision making and manage performance results. 
For example, both capabilities planning and the risk management framework are to define risks and develop performance measures, but, according to DOD officials, the department is still determining how to align capabilities planning with the risk management framework. Other initiatives, including GPRA and PART, also call for the development of performance measures, and DOD is still working on integrating these initiatives with the risk management framework and individual performance monitoring approaches of the services and other defense components into a single, integrated system. In December 2002, the Deputy Secretary of Defense issued a memorandum to correct this situation by requiring the alignment of the risk management framework and the President’s Management Agenda with DOD’s results-oriented management activities, including those associated with GPRA. We note that these reform initiatives address key business processes within the department and that we have placed DOD’s overall business transformation on our list of federal programs and activities at high risk of waste, fraud, abuse, and mismanagement. The Under Secretary of Defense for Acquisition, Technology and Logistics indicated that DOD plans to address the challenge associated with the integration of DOD’s planning, resourcing, and execution processes and initiatives, including the risk management framework. The Under Secretary stated that one task of the ongoing 2005 QDR was “strategic process integration.” The Under Secretary also stated that the department is planning to provide a roadmap with performance goals and timelines on how it will implement initiatives to improve strategic process integration. This roadmap is to be submitted to Congress in early 2006 with the 2005 QDR report and the fiscal year 2007 budget. DOD has made some progress in implementing the risk management framework, including establishing risk quadrants and performance goals. However, more work will be required for DOD to be able to put in place a management tool, such as the risk management framework, to strategically balance the allocation of resources across the spectrum of its investment priorities against risk over time and to monitor performance. The development of performance measures that clearly demonstrate results and that are cascaded down throughout the department would enable DOD to provide a clear roadmap of how its activities at all levels contribute to meeting its strategic goals and would assist the department in aligning the core processes and resources of its four military services and multiple defense agencies to better support a departmental or joint approach to national security. Furthermore, the risk management framework cannot be fully implemented until its performance goals are clearly linked to DOD’s strategic planning goals. Unless a cause and effect relationship can be demonstrated between the department’s performance measures and strategic goals, the framework’s usefulness as a tool for monitoring DOD’s execution of its strategic plan and identifying performance goals will be severely restricted, if not eliminated. Furthermore, the fiscal year 2006 budget submission does not provide sufficient information on how DOD identified or assessed departmental risks to establish DOD-wide investment priorities; thus, the linkages between the framework and the budget are unclear. 
Without better measures, clear linkages, and greater transparency, DOD will be unable to fully measure progress in achieving strategic goals or demonstrate to Congress and others how it considered risks and made trade-off decisions, balancing needs and costs for weapon programs and other investment priorities. The efforts of DOD’s senior leadership to establish a risk-based and results-oriented management approach have been impeded by some key challenges. The lack of sustained leadership and clear lines of accountability has hampered implementation of the risk management framework and the establishment and achievement of implementation goals and timelines. Strong and sustained leadership could enable DOD to overcome resistance to change that exists in a department as massive and complex as DOD. In addition, the establishment of implementation goals and timelines could enable DOD to determine what progress has been made in implementing the risk management framework. Furthermore, the successful integration of the risk management framework into DOD’s investment decision-making processes, including recent reform initiatives, could assist DOD in its overall transformation efforts. Until DOD develops a risk-based and results-oriented management approach for making investment decisions, it will likely continue to experience a mismatch between programs and budgets, and the proportional, rather than strategic, allocation of resources to the services. To address the challenges associated with implementing the risk management framework, or a similar risk-based management approach, we recommend that the Secretary of Defense take the following four actions: (1) develop or refine department-level performance measures so that they clearly demonstrate performance results and cascade those measures down throughout the department; (2) assign clear leadership with accountability and authority to implement and sustain the risk management framework; (3) develop implementation goals and timelines; and (4) demonstrate the integration of the risk management framework with DOD’s decision support processes and related reform initiatives to improve investment decision making and manage performance results. In written comments on a draft of this report, DOD partially concurred with our four recommendations. DOD’s written comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated as appropriate. DOD partially concurred with our first recommendation. DOD stated that it concurred with our recommendation that the Secretary of Defense refine department-level performance measures so that they clearly demonstrate results, but that it did not concur with the notion that effectively cascading the risk management framework has been inhibited by the current suite of performance measures. DOD noted that a number of defense components—including the Army, DOD Comptroller, the Defense Logistics Agency, and the Defense Information Systems Agency—have successfully cascaded departmentwide strategic goals and implemented frameworks to measure their organization’s performance. DOD also believes that empowering the leadership at the component level to develop measures, while ensuring strategic alignment, is the most effective way of encouraging performance management and increasing its utility. In our report, we acknowledge that DOD has taken positive steps toward developing a performance monitoring system and cascading the framework’s goals and measures to defense components. 
However, our recommendation addresses limitations in those measures that currently hinder DOD’s ability to use the risk management framework as a management tool for aligning the components’ performance goals and measures with the risk management framework, or for strategically balancing investment decisions across the risk quadrants. For example, the majority of the risk management framework’s measures are activity measures, or initiatives, that do not monitor a specific annual performance target, nor do these measures provide sufficient information to determine whether the activity is on schedule or contributes to enhancing the department’s overall performance. Finally, our recommendation is not intended to suggest that DOD not empower the components to develop performance measures, but rather that DOD establish a clear hierarchy of goals and measures that provide straightforward roadmaps to demonstrate how the components’ activities contribute to meeting DOD’s strategic goals. DOD partially concurred with our second recommendation that the Secretary of Defense assign clear leadership with accountability and authority to implement and sustain the risk management framework. DOD stated that, although it agrees that such leadership is key to any successful performance management system, the department’s senior executives provide sufficient leadership and accountability for implementing and sustaining the risk management framework. DOD also stated that it did not agree that a new organization or bureaucratic structure is needed to ensure successful implementation and sustainment of the risk management framework. We agree that DOD has assigned specific roles and responsibilities for goals and measures associated with the risk management framework to various high-level DOD officials. However, we based our recommendation on the fact that no single individual, with appropriate authority, was held responsible for ensuring that the risk management framework was implemented across the department. Further, our recommendation does not propose that DOD set up a new organization or bureaucratic structure, but, as stated in this report, we continue to believe that one way to provide strong and sustained leadership for change initiatives, such as the risk management framework, over a number of years and various administrations is to legislatively establish a CMO. In partially concurring with our third recommendation to develop implementation goals and timelines, DOD agreed that tracking progress in implementing the risk management framework is a good management practice. DOD stated that it has established goals and timelines for the risk management framework that are unique to the individual metrics, or measures, and that because the risk management framework continually evolves over time, new metrics will be developed while others may be retired. As we stated in the report, successful change management efforts use implementation goals—such as linking the risk management framework to the budget—and timelines for meeting those goals to pinpoint shortfalls and gaps, suggest midcourse corrections, and build momentum by demonstrating progress. Therefore, while DOD may continually refine the individual goals and measures associated with the framework’s risk quadrants, we believe that goals and timelines for the overall implementation of the framework across the department are essential for keeping this reform initiative on track. 
DOD partially concurred with our fourth recommendation that the Secretary of Defense demonstrate the integration of the risk management framework with DOD’s decision support processes and related reform initiatives to improve investment decision making and manage performance results. DOD stated that the department is currently studying ways to further integrate the risk management framework with other decision support processes, but no single framework or decision model can provide all the necessary information or flexibility needed by the Secretary of Defense and his senior leadership team. We recognize that DOD’s senior leadership needs reliable information from a variety of sources and flexibility to make decisions among alternative actions or solutions. However, if the risk management framework is to successfully serve as a management tool to assist decision makers in formulating top-down strategy, balancing investment priorities against risk over time, measuring near- and midterm outputs against strategic goals, and focusing on actual performance results—as intended by DOD’s senior leadership—it is crucial that it be successfully integrated with DOD’s investment decision-making processes, including recent reform initiatives. We are sending copies of this report to interested congressional committees; the Secretaries of Defense, Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To assess to what extent the Department of Defense (DOD) has implemented the risk management framework, we obtained and analyzed DOD directives, briefings, and other documents that described the risk management framework’s purpose, implementation status, and performance measures. We also obtained and analyzed DOD’s 2001 Quadrennial Defense Review and annual strategic planning and budget documents. Moreover, we interviewed knowledgeable DOD and service officials involved with the implementation of the risk management framework. Specifically, we obtained testimonial evidence from officials representing the Office of the Secretary of Defense (OSD) offices—such as Program Analysis and Evaluation; Comptroller; Policy; Acquisition, Technology and Logistics; and Personnel and Readiness—the Joint Staff, the military services, and the Defense Business Board. To identify key risk-based and results-oriented management principles, we reviewed our prior reports and other relevant literature, including information on the balanced scorecard concept. For example, we identified characteristics of results-oriented performance measures. These characteristics focused on performance measures that are (1) designed to demonstrate results by providing information on how well the organization is achieving its goals; (2) limited to a vital few, and balanced across priorities; and (3) used by management to improve performance. 
As another example, risk-based and results-oriented management principles indicate that leading organizations seek to establish clear hierarchies of goals and measures that cascade down so that subordinate units have straightforward roadmaps to demonstrate how their activities contribute to meeting the organization’s strategy. We systematically analyzed and compared the risk management framework’s department-level performance measures with these characteristics. However, we did not validate the procedures that DOD has in place to ascertain the reliability of the data used to support the performance measures. Regarding strategic planning, these principles focused on (1) establishing clear linkages among strategic planning goals, resources, performance goals and measures and (2) integrating the consideration of risk into the usual cycle of agency decision making and implementation. While these principles do not cover all attributes associated with risk-based and results-oriented management approaches, we believe that they are the most important ones for assessing DOD’s progress in implementing the risk management framework. To identify the most significant challenges, we reviewed our previous work on change management principles. We then compared DOD’s implementation of the risk management framework to sound change management principles and interviewed knowledgeable DOD officials about the challenges that faced the department in implementing the risk management framework. In addition, we reviewed our previous work to determine to what extent deficiencies in DOD’s overall business transformation efforts might influence the implementation of the risk management framework. Our work was performed from October 2004 through September 2005 in accordance with generally accepted government auditing standards. In addition to the contact named above, David Moser, Assistant Director; Donna Byers; Gina Flacco; and Renee S. Brown made key contributions to this report.
The Department of Defense (DOD) is simultaneously conducting costly military operations and transforming its forces and business practices while it is also competing for resources in an increasingly constrained fiscal environment. As a result, GAO has advocated that DOD adopt a comprehensive threat or risk management approach as a framework for decision making. In its 2001 strategic plan, the Quadrennial Defense Review (QDR), DOD stated its intent to establish an approach--the risk management framework--to balance priorities against risk over time and monitor results against its strategic goals. GAO was asked to (1) assess the extent to which DOD has implemented the framework, including using it to make investment decisions, and (2) identify the most significant challenges DOD faces in implementing the framework, or a similar approach. DOD has taken some positive steps to implement the framework, but additional actions are needed before DOD can show real and sustainable progress in using a risk-based and results-oriented approach to strategically allocate resources across the spectrum of its investment priorities. For example, DOD defined four risk areas, and developed performance goals and department-level measures, but it needs to, among other things, further develop and refine the measures so that they clearly demonstrate results and provide a well-rounded depiction of departmental performance. DOD's current strategic plan and goals also are not clearly linked to the framework's performance goals and measures, and linkages between the framework and budget are also unclear. While DOD officials stated that risk was considered during the fiscal year 2006 budget cycle, DOD's budget submission does not specifically discuss how DOD identified or assessed risks to establish DOD-wide investment priorities. Without better measures, clear linkages, and greater transparency, DOD will be unable to fully measure progress in achieving strategic goals or demonstrate to Congress and others how it considered risks, and made trade-off decisions, balancing needs and costs for weapon programs and other investment priorities. DOD faces four challenges that have affected the implementation of the framework. First, DOD's organizational culture resists department-level approaches to priority setting and investment decisions. Second, sustained leadership, adequate transparency, and appropriate accountability are lacking. Further, no one individual or office has been assigned overall responsibility or sufficient authority for the framework's implementation. DOD also has not developed implementation goals or timelines with which to establish accountability, or measure progress. Finally, integrating the risk management framework with decision support processes and related reform initiatives into a coherent, unified management approach for the department is a challenge that DOD plans to address during the 2005 QDR. However, GAO has concerns about DOD's ability to follow through on this integration, because of its limited success in implementing other management reforms. Unless DOD successfully addresses these challenges and effectively implements the framework, or a similar approach, it will likely continue to experience (1) a mismatch between programs and budgets, and (2) a proportional, rather than strategic, allocation of resources to the services.
J-1 visas allow foreign nationals to participate as exchange visitors in cultural and educational programs in the United States. USIA is responsible for managing the J-1 visa program and designates organizations as program sponsors. In 1995, over 9,000 foreign physicians with J-1 visas were in the United States for graduate medical education or training. These exchange visitors constituted about one-tenth of all individuals receiving graduate medical education (see app. II). Because many exchange visitors are in the United States for several years for graduate medical education and training, each year a few thousand new physicians receive J-1 visas and enter the United States to begin graduate medical education and training, while a few thousand complete their training. To ensure that the J-1 visa program works as intended in passing learning and experience to other countries, the Congress has imposed restrictions on J-1 visa holders, including physicians in graduate medical education. These physicians are required to return to their home country (or to their country of last legal residence) for at least 2 years after completion of training. However, they may obtain a waiver of this requirement and remain in the United States. For most physicians, the waivers are requested on their behalf by a federal agency or by a state agency or department that is responsible for public health issues. These federal agencies and states generally request waivers of the 2-year foreign residence requirement so that the physicians can practice for several years in underserved areas (see table 1). The federal agencies and states submit these requests to USIA. USIA reviews the program, policy, and foreign relations aspects of the case and forwards its recommendations to the INS Commissioner. For waiver requests made by interested U.S. government agencies or states, INS may grant the waiver only if USIA submits a favorable recommendation. Figure 1 illustrates the waiver process. While HHS is the federal agency responsible for addressing physician shortages, it does not use waivers to do so. HHS endorses the philosophy that exchange visitors return home after completing their training to make their new knowledge and skills available to their home countries. As a result, HHS does not support waivers for physicians to remain in the United States to practice in underserved areas. Instead, HHS administers other federal programs, such as NHSC, to address physician shortages in the United States. NHSC supplies physicians and other health professionals to underserved areas primarily by (1) awarding scholarships to students who agree to serve in a shortage area after their health professions training is complete and (2) repaying a set amount of educational loan debt for each year of service in a shortage area. On December 31, 1995, 848 NHSC physicians and 685 other NHSC professionals who received scholarships or federal loan repayment were practicing in underserved areas of the country. In addition to NHSC, HHS has other programs to address medical underservice. For example, HHS provides federal grant funding to community health centers that are required to accept all patients regardless of their ability to pay. Although the waiver was begun as an exception policy, the number of physicians receiving waivers of the 2-year foreign residence requirement for J-1 exchange visitors has grown more than tenfold in the past 5 years. 
Several factors have contributed to the increase: more hospitals and other facilities have found the waiver to be a means to fill vacant positions; more agencies and states are making requests; and physicians are actively seeking waivers, in some cases allegedly paying recruiters and immigration attorneys to find them a position. Waiver physicians are practicing in virtually every state; most are primary care physicians. The number of waivers being processed for physicians to practice in underserved areas each year has grown from 70 in 1990 to 1,374 in 1995 (see fig. 2). In 1995, the number of waivers being processed for physicians was greater than the number of NHSC physicians (1,267) practicing in underserved areas, and it was enough to offset about 27 percent of the total physician shortage identified by HHS. Indications are that in 1995, about half of the foreign physicians who were supposed to return home were granted waivers of this requirement to practice in an underserved area in the United States. Why do facilities want to employ foreign physicians through the use of the waivers? In responding to our survey and during our visits to health centers, physician offices, clinics, and other health care facilities where these physicians were practicing, many officials said that their facilities had turned to these physicians because they were unable to recruit U.S. physicians. For example, the administrator of a county public health unit in Florida commented that most U.S. physicians are not willing to work in rural areas, but she had found many physicians with J-1 visas who had excellent references and credentials and who were willing to practice there. She said it would be a “travesty” to health care in rural areas if these waiver physicians were not available. Other reasons cited for hiring these physicians are their superior foreign language skills and cultural familiarity with a facility’s patient population. For example, several physicians received waivers to practice at a migrant health center in Eastern Washington. These physicians were recruited, in part, because they are native Spanish speakers, which enables them to effectively treat the center’s Spanish-speaking patients. The sudden increase in the number of waivers being processed in 1994 and 1995 probably reflects the fact that facilities had additional places to turn to for requesting the waivers. By 1995, four U.S. government agencies and 23 states were requesting waivers of J-1 visa requirements for physicians. Before 1993, the only agency requesting waivers for a number of physicians to practice in underserved areas was ARC. ARC began requesting waivers in the 1980s for physicians to practice in Appalachia. However, ARC’s requests remained modest, generally around 200 or fewer per year and peaking at 266 waivers in 1993. In addition to ARC, since 1993 the Department of Transportation (DOT) has requested waivers for a handful of physicians to practice in one rural area where the U.S. Coast Guard operates. The rapid growth in waivers began in late 1993 and 1994, when the U.S. Departments of Agriculture (USDA) and Housing and Urban Development (HUD) began requesting them for physicians to serve their rural and urban constituents. Senior officials at both agencies said that they initially responded to a constituent request to support a specific physician; however, their offices were subsequently flooded with requests for waivers for other physicians. 
Agency officials said that they would like to limit the number of waivers processed by their agencies, but have not found a way of effectively restricting them. The number of waivers also increased because the authority for states to request waivers was passed in 1994, and 23 states requested waivers in calendar year 1995. As a result of the entry of these federal agencies and states, physicians seeking waivers were no longer limited to practice locations in Appalachia and areas serving DOT personnel; instead, they could practice in rural and urban areas across the country. However, HUD officials have recently decided to reassess the department’s waiver policy and stopped accepting requests after August 30, 1996, to conduct a review. Table 2 shows the number of waiver requests submitted to USIA by each agency in 1995 and the reason for the requests. For information on the number of waivers requested by each agency since 1990, see appendix V. Another factor in the increase in waivers may be the interest among physicians with J-1 visas themselves. Health care facility officials, as well as state and federal health officials, said that they have been inundated with inquiries from physicians who would like to obtain a waiver by working in a shortage area. In addition, officials at several facilities said that they were contacted by professional recruiters or immigration attorneys regarding the availability of a physician to meet their facility’s needs if the physician could obtain a waiver. Some facility officials and physicians reported paying up to $25,000 in immigration attorney or recruiter fees for assistance in matching a physician with a facility and processing the waiver. During our site visits to facilities where physicians who had received waivers were practicing, physicians cited several reasons why they wanted waivers, including that (1) they would not be able to apply the medical skills they had learned in the United States in their home countries, (2) they were concerned about violence in their home countries, (3) they wanted to serve in an underserved area, (4) their families and relatives were in the United States, and (5) they had a general desire to stay in the United States. In 1996, waiver physicians were practicing in 49 states and the District of Columbia—every state except Alaska. However, the degree to which they are relied on to relieve physician shortages varies greatly from state to state. To measure the extent of this reliance, we compared the number of waivers granted or in process in 1994 and 1995 with the number of physicians identified by HHS as needed to remove the shortage area designations in a state. In five states (Alabama, Kansas, Kentucky, North Dakota, and West Virginia), the number of physicians for whom waivers were processed equaled more than 75 percent of the number of physicians needed to remove these designations in the state. In other states, such as California, such physicians equal less than 10 percent of the identified need. Physicians with waivers are practicing in a variety of settings. Our survey results show that more than one-third of physicians who received their waivers through federal agencies are practicing in nonprofit community or migrant health centers and about one-fourth are in a private or group practice. The rest are practicing in hospitals, for-profit health centers, or other settings (see fig. 3). See appendix VI for more detailed information on the results of our survey of facilities. 
Using our survey results, we estimate that almost all physicians practicing on January 1, 1996, whose waivers were processed through federal agencies were practicing in primary care specialties. Overall, more than half of them were practicing in internal medicine (see fig. 4). The other major primary care specialties were pediatrics and family practice. We estimate that one-third of the waiver physicians who had primary care specialties also had subspecialties. The most prevalent subspecialty was nephrology (medicine concerned with kidney disease), which was reported for about 7 percent of the primary care physicians. Other subspecialties included infectious diseases, cardiology, and gastroenterology. Requesting facilities and state officials had mixed views on the usefulness of subspecialties for meeting their needs. Officials from some states said that physicians with subspecialties are not as desirable because they may not remain in the area to practice primary care. In fact, several states have policies not to request waivers for physicians who have subspecialties. On the other hand, officials at some facilities said that they recruited specific physicians, such as a nephrologist, because their subspecialties enabled them to meet the needs of their patient populations. Requests for waivers for physicians with J-1 visas are not coordinated effectively among the agencies and states or with other medical underservice programs, such as NHSC. No single entity is responsible for coordinating the practice locations of waiver physicians, and HHS, perhaps the most logical candidate for doing so, opposes the way in which the waivers are being used. Because no single entity is responsible for coordinating physicians’ practice locations, the requesting agencies set up varying policies for requesting the waivers. Because of the lack of coordination, the number of waivers processed for physicians to practice in some states has been more than the number needed to alleviate the identified physician shortage in that state. No single agency has management responsibility for use of the waivers to address physician shortages. While USIA and INS must recommend and approve all waivers of the 2-year foreign residence requirement for physicians requested by interested government agencies and states, USIA and INS officials said that they recommend and approve virtually all waiver requests. USIA officials said that while they check for required documentation, they almost always rely on the interested government agencies’ assertions that the waivers are in the public interest. INS officials said that refusal of the waiver is extremely rare if USIA has given a favorable recommendation. INS officials said that they are not in a position to second-guess USIA or the interested government agency as to whether the public interest would be served if the waiver was granted. HHS, for its part, opposes the use of waivers to address physician shortages and has stated that it will not support a waiver request “when the application demonstrates that the exchange visitor is needed merely to provide services for a limited geographical area and/or to alleviate a local community or institutional manpower shortage, however serious.” HHS has explained its position as follows: “In summary, this Department has viewed the J-1 visa to be a means of sharing advanced medical knowledge and allowing the benefits of training to accrue to the home country. 
The Department does not view waivers as a mechanism to help resolve the problems of shortage areas.” Without any overall management of the use of waivers, waiver policies vary considerably among agencies, leading in some cases to “shopping” by the physicians seeking a waiver to obtain the most advantageous terms. Policies vary with regard to such matters as eligible practice locations, state involvement, and the consequences of the physician’s failure to complete the agreed-upon length of service. For example, ARC restricts physicians to practice locations in federally designated Health Professional Shortage Areas, while the physicians who received waivers through USDA and HUD have been allowed to practice in other areas, including designated Medically Underserved Areas. ARC officials said that they excluded the Medically Underserved Area designations because (1) this designation is not an accurate measure of physician shortage; (2) the designations have not been updated; and (3) including them would allow physicians to practice in virtually any location in Appalachia. Federal agency and state officials also told us, and our review confirmed, that physicians or their immigration attorneys were shopping among agencies; that is, they were requesting waivers through multiple agencies at the same time. State health officials commented that they would like consistency in waiver policies across federal agencies. One state health official commented that participation of multiple federal agencies has resulted in confusing and sometimes contradictory program guidelines and has placed a burden on states to coordinate programs. Thus far, the various efforts to use waiver physicians to address medical underservice have operated largely independently of one another and of other programs to address medical underservice. By 1995, there were nearly 30 federal agencies and states processing requests for waivers for physicians with J-1 visas. Most of them were operating independently of one another. The four federal agencies have no formal process for coordinating their waiver requests, and they have overlapping jurisdictions. For example, while USDA’s policy has been to request waivers for rural areas and HUD’s policy has been to request waivers for urban areas, the two agencies have not agreed on which areas are rural and which are urban. As a result, we found some locations, such as Buffalo, New York, and Decatur, Illinois, where USDA requested waivers for one or more physicians and HUD requested waivers for additional physicians to practice in the same city and in some cases the same facility. There is no mechanism for each federal agency to know how many waivers the other has requested to address the physician shortage in an area. Coordination is also lacking between state and federal efforts. State health officials do not always know where physicians receiving waivers through federal agencies are practicing and, therefore, they cannot coordinate these placements with state programs to address medical underservice. While ARC requires that facilities’ requests for waivers come through the states, other agencies do not. This leads to situations where the states are unaware of the level of placements that are occurring. For example, health department officials in Texas, which does not request waivers for physicians under the state authority, did not know how many physicians received waivers through federal agencies to practice in the state. 
As a result, when we scheduled our visits to practice sites in Texas, state officials were surprised to find out that federal agency records showed over 20 waiver physicians practicing in El Paso. Waivers for physicians also are not well coordinated with other programs addressing underservice, such as those operated by HHS. One such program is NHSC. In some states, the number of physicians for whom federal agencies and states have requested waivers, when combined with the number of NHSC physicians, exceeds the number needed to remove the shortage designations. We found that for eight states, the number of physicians who received waivers in 1994 and 1995 (or had waivers in process), combined with the number of NHSC physicians in service at the end of 1995, exceeded the number of physicians needed to remove the shortage area designations in the state. (See app. VII for more information on the identified need, number of waivers being processed, and the number of NHSC physicians practicing in each state.) Without information on the number of physicians needed in the area and the number of NHSC and waiver physicians already addressing that need, federal agencies and states will not know if the needs of an area are already being met when considering whether or not to request a waiver for a physician. Another HHS program with which physician waivers are not well coordinated is the Community Health Center program. This means that federal agencies and states may not know of problems identified by HHS when considering requests from community health centers. For example, waivers were requested through HUD for several physicians to practice at a health center that had its HHS funding discontinued due to financial management problems. When requesting the waivers for these physicians, HUD officials did not know that HHS had identified problems with the health center. As a result, they could not take those problems into consideration when deciding whether the waivers were in HUD’s and the public’s interest. Coordination between the agencies involved in the requests and other programs to address medical underservice is important, because not all the agencies processing the waiver requests have expertise in addressing health care issues. For example, USDA and HUD officials involved in the waiver requests said that their offices lacked expertise in health issues. In USDA, waivers for physicians are processed in the department’s Agricultural Research Service by an office that has experience processing waiver requests for a small number of research scientists who were in the United States as exchange visitors. At HUD, the waivers were processed in the Office of the Deputy Assistant Secretary for Intergovernmental Relations. Although most physicians who obtain waivers of their J-1 visa foreign residence requirement are apparently complying with the terms of their service agreements, weak controls mean there is little to deter physicians or their employers from breaking these terms if they choose to do so. For example, we found instances in which a physician never practiced at the intended facility, unbeknownst to the agency processing the request. Including all current waiver physicians when assessing compliance with requesting agency policies can present a somewhat misleading picture, because so many of these physicians have been at their jobs for a relatively short time, in many cases for less than 1 year. 
To provide a more accurate picture of whether physicians stay for the full term of their agreement, we analyzed those physicians whose waivers had been requested through ARC from 1990 to 1992. We estimate that 90 percent completed the minimum employment period required by ARC at that time, which was 2 years, at the facility that requested the waiver. On January 1, 1996, over one-fourth (28 percent) were still practicing at the same facility that requested the waiver, and nearly half of these (13 percent) had been there for more than 4 years. We also examined the shorter-term compliance record, as of January 1, 1996, of physicians who received waivers through federal agencies between 1994 and 1995. We estimate that 96 percent of them were working at the facility for which the waiver was requested. The remaining 4 percent had left or did not plan to work at that facility. Although this percentage is similar to the percentage of ARC physicians who did not complete their 2-year agreements, the percentage may grow because many of the physicians had completed only a fraction of their employment contract by the start of 1996. For example, none of the physicians with waivers through HUD had been practicing for more than 1 year by that date. For the physicians in our sample and in the states we visited, several of the reasons physicians were not practicing at the location for which the waiver was requested involved changes made by the facility that initiated the request. We found cases in which a facility made the request and then determined that the physician was no longer needed. In at least one case, it appears that the employer made this determination before the waiver was even granted, but the physician still received the waiver. The following are examples in which the facility changed its mind: In letters asking USDA to request waivers for three physicians, a clinic in Illinois said that the physicians were needed to help meet an urgent primary care delivery crisis in the rural community where the practice site was located. Six months after one of the physicians began working there, she was terminated because the clinic had determined that it was overstaffed. She is now practicing in another city in Illinois that has an identified shortage of physicians who serve Medicaid patients. The second physician was transferred from the location on the waiver request to another location that is not in a federally designated shortage area. The third physician was practicing only part-time at the practice site for which the waiver was requested. He said that because there were not enough patients in that location, he spent about half his time working at the main clinic in Champaign, Illinois. A medical group asked HUD to request waivers for three physicians to work at a practice purchased from a retiring physician outside Atlanta. When we called the practice site, we were told that only one of the three was practicing there. An official from the medical group said that the practice no longer had enough patients to support these physicians. As a result, one physician never worked at the site, one physician worked there for a brief period and then went to practice at a prison in Michigan, and one physician remained to work for the new employer after the practice was sold. INS officials said that waivers had been approved for all three physicians, including the one who was never employed there. 
Before we notified them, HUD officials were unaware that the facility had been sold and that two of the physicians were not practicing there. We also found instances in which the reason for not meeting the requirements of an agreement resulted from the physician’s actions. For example, in two separate cases, physicians were fired when they refused to complete the requirement for working 40 hours a week at the requesting facility. In one instance, the fired physician notified USDA that he was going to practice at another hospital and when USDA officials told him he could not because the hospital was not in a shortage area, the physician broke off contact with them. The facility official said that he had heard that the physician was pursuing additional graduate medical education in the United States. In the second instance, the facility reported the physician’s firing directly to INS, which revoked his nonimmigrant work status. Reviews conducted by ARC’s Inspector General have disclosed similar instances in which conditions of agreements were not met. Six of eight reviews conducted by the Inspector General from 1994 to 1995 found that contrary to ARC policy, some physicians were not practicing primary care at least 40 hours per week in a Health Professional Shortage Area. Instead, employers were using the physicians in subspecialty practices or in locations not designated as shortage areas. Agency controls to help ensure that physicians comply with waiver agreements vary among the federal agencies and states. These controls range from periodic reports and site visits, to reliance on employers to enforce the employment contracts. For example, ARC requires the facilities to verify and the waiver physicians to certify that they are complying with ARC policies. In addition, the ARC Inspector General conducts site visits to the physicians’ practice locations. In contrast, while HUD and USDA officials said that they had started or planned to start requiring periodic reports, officials at both agencies said that they do not have the staff resources to monitor physician compliance. These officials said that because the use of waivers to address physician shortages is not authorized or funded as a program, their agencies do not have the resources available to effectively manage it as a program. In its site visits to monitor compliance, ARC’s Inspector General attributed most of the problems identified to the employers. However, for waivers requested through both federal agencies and states, the applicable federal laws and regulations do not specify penalties against employers that fail to comply with agency policies. ARC tries to address this shortcoming by requiring employers to sign a statement certifying that they will comply with the waiver policy, and applications from employers found to be in violation of the policy receive additional scrutiny to ensure that the problems have been corrected. The growth in the number of waiver physicians has not gone unnoticed by federal agency officials and legislators. They have recently taken actions that could address some of the coordination and compliance problems identified. A group of federal agency officials has met informally to discuss waiver requests and USIA has proposed regulations to make the waiver requests more consistent. In addition, recent amendments to the Immigration and Nationality Act impose additional requirements for waivers obtained through federal agencies. 
The new regulations, if finalized, and the 1996 amendments could address many of the coordination and compliance problems, but not all of them. Recognizing the need for better coordination, officials from USIA, INS, HHS, and the requesting federal agencies have been meeting since late 1995 to discuss the use of waivers to address physician shortages. The officials formed an informal interagency group that has discussed revising regulations addressing waiver requests. USIA, in working with the other agencies, published a proposed regulation in the Federal Register on September 5, 1996. In the preamble to the proposed regulation, USIA noted that with the entry of USDA and HUD into the waiver process, inconsistency in the administration of waiver requests among the different agencies has created some confusion. For a request by a U.S. government agency, the regulation would condition approval on the physician’s commitment to practice primary care for at least 3 years in a designated Health Professional Shortage Area or a Medically Underserved Area or to practice psychiatric care in a mental health Health Professional Shortage Area. To prevent physicians from shopping among agencies, the foreign medical graduate would have to certify that he or she is requesting a waiver through only one agency. The Omnibus Consolidated Appropriations Act, 1997, included amendments to the Immigration and Nationality Act that create greater consistency among waiver efforts by subjecting state and federally sponsored waiver physicians to the same statutory requirements. The amendments strengthen penalty provisions for federally sponsored waiver physicians by prohibiting them from obtaining permanent residence or U.S. citizenship without completing the required 3-year agreement. If they fail to complete the 3-year agreement, they must fulfill the 2-year foreign residence requirement. These changes (1) make the waiver conditions much more consistent, which may help to alleviate the confusion cited by agency officials, and (2) help to strengthen controls with regard to penalties for waivers requested through federal agencies. While the efforts of the interagency group and enactment of the 1996 amendments should improve coordination of the waiver requests for physicians with J-1 visas, they will leave several problems unaddressed. Specifically, they do not address the following issues:
Fully coordinating with other underservice programs or with waiver requests by other agencies. The amendments neither designate an agency as responsible for managing the waivers nor require the waivers to be coordinated with HHS programs such as NHSC or the Community Health Center program. Among federal agencies and states requesting the waivers, the problems of overlapping jurisdictions and the lack of information on the practice locations of waiver physicians could result in more physicians practicing in an area than are needed, as identified by HHS; a continued need for physicians in other areas; and a lack of coordination with state efforts to address physician shortages. In addition, although HHS has started to collect information on the number of physicians practicing under waivers in an area, there is no directive for this information to be used or shared in making decisions on waivers for physicians or other federal assistance.
Ensuring that the use of waivers for physicians is a last resort. 
In an effort to ensure that the employers have a true need for a physician, ARC, USDA, and HUD policies, as well as the proposed USIA regulations, require the facilities to provide some documentation of past recruitment efforts. This procedure, however, does not ensure that the use of waivers for physicians is the option of last resort for areas with physician shortages. In some cases, it appears that other qualified physicians are available, but the facility prefers to hire the physicians with J-1 visas. For example, officials from one multispecialty clinic told us that they interviewed several applicants for a specialist physician position, including candidates who were not under J-1 visas, but they chose the physician with a J-1 visa and obtained a waiver because he was the most qualified. The use of waivers is now a ready means for acquiring physicians, some of whom actively market themselves or are marketed by placement specialists such as recruiters. The current statute and regulations do not require waivers to be used only as a last resort.
Monitoring compliance. It is unclear whether agencies would devote sufficient resources to effectively monitor compliance. USDA, for example, relies on employers to enforce the employment contracts, citing a lack of staff resources to conduct its own monitoring. However, as we and ARC’s Inspector General found, many of the examples of physicians who failed to comply with agency policies resulted from actions taken by the employers. As a result, a reliance on employers to do the policing does not appear adequate to prevent the kinds of situations we found. HUD officials also said that their monitoring efforts were limited by the availability of staff resources.
Addressing the needs of the medically underserved. Under existing procedures, locating a waiver physician in a medical shortage area is no guarantee that the needs of the underserved will be addressed. An area’s underserved may be only a specific part of the population (such as migrant workers or low-income people), and not all federal agencies’ and states’ policies contain requirements or monitoring to ensure that a physician’s practice includes such groups. For example, if the underserved part of the population is low-income, the requesting agencies’ and states’ policies do not all require that a waiver physician in such an area accept Medicaid, have a sliding fee scale, or accept anyone for services regardless of his or her ability to pay. In one area where the identified need was care for migrant farm workers, a waiver physician was in a group practice a block away from a federally funded migrant health center. A senior official at the migrant health center said that the waiver physician did not impact the center’s patient load because they served different patient populations.
Establishing penalties against a facility for failing to comply with agency policies. The new regulations and the 1996 amendments do not establish any penalties for employers who fail to comply. The ARC Inspector General noted that the most significant programmatic issue that surfaced during that office’s review was the limited accountability of employers and the lack of potential actions against employers who did not use physicians with waivers in accordance with the program’s intended purposes. 
The use of waivers for physicians with J-1 visa requirements has become so extensive that this exception policy now resembles a full-fledged program for addressing medical underservice in the United States. Many health care facilities and states cite examples of the utility of these waivers in providing a qualified physician for an underserved area. However, while the agencies involved in processing the waivers are operating with the best of intentions, the growing use of waivers is not being managed as a program, and this is having detrimental results. Federal efforts to address physician shortages are not coordinated among the federal agencies or with the states. Several agencies, including those not traditionally involved in physician supply issues, have set up de facto physician supply programs using their existing authority and agency resources. Despite some improvements, monitoring efforts to ensure that physicians fulfill the terms of their agreements remain spotty. Accountability for reducing the actual conditions of underservice is limited. Physicians can practice in underserved areas but not actually target their efforts to that part of the population that is underserved. The rapid growth in waivers for physicians makes this an opportune time for the Congress to reassess what it wants the waiver provision to accomplish. The running disagreement between HHS and other federal agencies about the role of waivers in addressing physician shortages in underserved areas needs resolution, and better coordination and management of the overall effort are needed if it is to be continued. If the Congress wants to continue to address medical underservice in the United States through the use of waivers for physicians with J-1 visa requirements, it should consider requiring that the use of such waivers be managed as a program. Specifically, the Congress should consider the following:
Clarifying how the use of waivers for these physicians fits into the overall federal strategy to address medical underservice. This should include determining the size of the waiver program and establishing how it should be coordinated with other federal programs.
Designating leadership responsibility for managing the program. This responsibility could be given to a single federal agency, such as HHS; to several federal agencies, for example, through a memorandum of understanding; or it could be delegated to the states.
Establishing penalties against facilities that fail to comply with requirements of the waiver.
Directing the entity(ies) managing the program to implement procedures and criteria for the selection and placement of physicians and for monitoring compliance with waiver requirements. These procedures and criteria could include requiring the state to clearly support the use of the physician for addressing unmet need and to show that it has sought other options for fulfilling this need.
We provided a draft copy of this report to seven agencies that are involved with waivers for physicians to practice in underserved areas. ARC, USDA, and USIA provided formal written comments (see apps. VIII, IX, and X). These comments indicate general agreement with our conclusions and matters for congressional consideration. HUD and Justice (the parent department for INS) chose not to provide formal comments. However, we discussed our findings with HUD and INS officials, and they raised no objections to our findings or matters for congressional consideration. 
DOT has had limited involvement in waivers for physicians with J-1 visas and did not have comments on the draft report. HHS did not submit formal comments by the end of our 30-day comment period. However, the Director of the department’s Office of International Affairs (who also chairs the department’s Exchange Visitor Waiver Review Board) informed us that his office had fully reviewed the draft report and was in general agreement with the findings. Regarding our matters for congressional consideration, he said that HHS favored the option of delegating responsibility for the waivers to the states. The three agencies that provided formal written comments also expressed their support for the need for better coordination among the participating agencies, states, and other programs to address medical underservice. One agency, USDA, also expressed concern about the lack of available funding to operate its program effectively. USDA suggested that an alternative to funding the program from appropriated research funds would be to initiate a fee-for-service-type application fee to offset operational costs, which would require legislation to authorize the collection and utilization of fees. We concur that any entity involved in managing waiver requests for physicians should commit adequate resources for oversight and operational support to ensure that the physicians address unmet needs for physician resources. Although we did not examine financing options for managing the waivers in our review, we did note that a few states, such as Michigan, have been requiring user fees of up to $500 per application. We also received comments on technical matters from several of the agencies, which we considered in preparing our final report. We are sending copies of this report to the Secretaries of Agriculture, Health and Human Services, Housing and Urban Development, and Transportation, as well as the Director of the United States Information Agency, the Federal Co-Chairman of the Appalachian Regional Commission, and the Attorney General. We also will make copies available to others on request. Please contact me at (202) 512-7119 if you or your staff have any questions. Major contributors to this report are listed in appendix XI. To accomplish our objectives, we interviewed (1) federal agency officials responsible for requesting the waivers at ARC, HUD, USDA, and DOT; (2) HHS officials in the department’s Office of International Affairs and the Health Resources and Services Administration; (3) officials responsible for processing the waiver requests at USIA and INS, including INS service centers; (4) officials from the Department of Labor and the State Department; and (5) officials from the Educational Commission for Foreign Medical Graduates (ECFMG), the National Association of Community Health Centers, the American Medical Association (AMA), the Council on Graduate Medical Education (COGME), and the U.S. Commission on Immigration Reform. We also reviewed relevant legislation, studies, and policy documents and conducted two mail surveys: one of the states regarding the use of waivers of the J-1 visa foreign residence requirement for physicians in their states, and another of the facilities that requested such waivers for physicians. We obtained and analyzed data on requests for waivers for physicians from USIA and the requesting federal agencies and reviewed a small sample of case files. We also visited three states—Washington, Texas, and Georgia. 
We selected these three states for a cross-section of states where waiver physicians were practicing: Washington was quick to establish a state program; Texas had a large number of physicians with waivers through federal agencies but the state was not requesting waivers; and Georgia had a state program as well as physicians whose waivers were requested through ARC, HUD, and USDA. During our site visits, we met with state and other health officials, visited 14 sites where waiver physicians were practicing, and interviewed health care facility officials and 20 physicians who received waivers. We selected the sites in order to visit physicians in a variety of practice settings, including federally funded community and migrant health centers, a health center serving residents in public housing, city and county health departments, a capitated-rate program for the Medicare- and Medicaid-eligible elderly, and private and group practices affiliated with both public and for-profit hospitals. We conducted our work between November 1995 and September 1996 in accordance with generally accepted government auditing standards. To determine the number of waivers for physicians granted at the request of ARC, USDA, and HUD, we requested copies of the agencies’ databases. Each database contained information about when the agency requested that USIA recommend the waiver. We used the date that the agencies sent the request to USIA in our calculations because neither USIA nor INS has a cost-effective means of identifying waiver requests by occupation and USIA and INS officials said that they recommended or approved virtually all the physician waiver requests made by the interested U.S. government agencies. While we did not review the agencies’ computer-based systems, we did review the requesting agencies’ data for consistency and accuracy and selectively compared the agency data with that held by USIA. We obtained information on the waiver requests made by DOT from its Office of the General Counsel. We obtained information on state requests for waivers from our survey of states regarding waivers for physicians and follow-up telephone calls to state officials. Our scope did not include waiver requests from VA or requests from other agencies for physicians to conduct research. Because the agencies requesting waivers do not consistently track the practice dates of the physicians, we could not identify the number of physicians in practice at any given point in time. Instead, we used the dates that the agencies and states submitted their requests for waivers to USIA and assumed that those physicians whose waivers were requested in 1994 or 1995 were either already practicing at the facility listed in agency data on December 31, 1995, or had their waivers in process to begin practicing shortly thereafter. We compared this number with (1) the number of physicians needed to remove the primary care Health Professional Shortage Area designations in the state on December 31, 1995, and (2) the number of NHSC physicians (who received NHSC scholarships or NHSC federal loan repayment in return for practicing in an underserved area) who were practicing on December 31, 1995. We obtained these data from HHS’ Health Resources and Services Administration. We also compared the number of waivers with the number of NHSC physicians practicing in underserved areas on September 30, 1995, including NHSC physicians who did not have NHSC scholarship or federal loan repayment obligations. 
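To illustrate the comparison described above, the short sketch below adds the waiver and NHSC physician counts for each state and compares the total with the number of full-time-equivalent physicians needed to remove the state's shortage area designations. This is an illustrative calculation only; the state names and counts are hypothetical placeholders rather than figures from agency or HHS data.

    # Illustrative sketch of the state-level comparison described in the text.
    # The counts below are hypothetical placeholders, not GAO, agency, or HHS data.
    state_data = {
        # state: (waiver physicians requested in 1994-95, NHSC physicians, FTE physicians needed)
        "State A": (120, 40, 150),
        "State B": (35, 10, 200),
        "State C": (60, 25, 80),
    }

    for state, (waiver, nhsc, needed) in state_data.items():
        total = waiver + nhsc
        share_of_need = total / needed
        status = "exceeds" if total > needed else "is below"
        print(f"{state}: {total} waiver and NHSC physicians ({share_of_need:.0%} of need) "
              f"{status} the {needed} physicians needed to remove shortage designations.")

A comparison of this kind only flags states where the combined placements meet or exceed the identified need; it does not show whether the physicians are located in the specific areas or serving the specific populations that generated the shortage designations.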
To estimate the number of physicians who completed graduate medical education and training in 1995 who would be subject to the 2-year foreign residence requirement, we subtracted the number of exchange visitor physicians who were continuing applicants in the 1995-96 academic year from the number of physicians sponsored by ECFMG in the prior academic year. While this number is not exact because it may include a small number of physicians who were involved in research and some physicians who did not complete their training, it does represent a reasonable estimate of the number of exchange visitor physicians with J-1 visas who completed graduate medical education or training who would be required to return home without a waiver to remain in the United States. The number of waiver requests for physicians to practice in underserved areas that the agencies sent to USIA in 1995 (1,374) is about 64 percent of this figure. Therefore, we estimate that half of the physicians who were supposed to return home after completing their graduate medical education or training received waivers to practice in underserved areas in the United States instead. To identify characteristics of the physicians who received waivers of the J-1 visa foreign residence requirement and to measure the compliance and retention of these physicians, we selected a random sample of 40 from 355 physicians for whom ARC requested waivers between 1990 and 1992. Because most federal agencies only began requesting waivers in the past several years, we also selected a random sample of 211 of 1,994 physicians for whom ARC, DOT, HUD, and USDA received waiver requests in 1994 and 1995 (this was a stratified sample, including 40 of 362 ARC requests; 2 of 2 DOT requests; 49 of 477 HUD requests, and 120 of 1,153 USDA requests). We sent a questionnaire to the contact person at the facility that had requested the waivers, using the information provided by the federal agencies. For each physician, we asked the contact person to tell us (1) if the physician worked or planned to work at the facility; (2) if the physician was working at the facility as of January 1, 1996; (3) if the physician left and the date he or she stopped working at the facility; (4) whether or not the physician obtained permanent residency during his or her employment; and (5) the physician’s medical specialty, subspecialty, and practice setting. We received responses for 39 of the 40 physicians in our 1990 to 1992 ARC sample and for 200 of 211 physicians in our 1994 to 1995 samples (38 of 40 ARC physicians, 2 of 2 DOT physicians, 49 of 49 HUD physicians, and 111 of 120 USDA physicians). We used the survey results of the 1990 to 1992 ARC sample to estimate the rate of completion of ARC’s required 2-year contract among all waiver physicians whose waivers were requested between 1990 and 1992. We counted those physicians who worked for at least 1.75 years as meeting the ARC minimum contract period at that time, which was 2 years. We used 1.75 years of practice as our measure of compliance to allow for vacation and other leave. At a 95-percent confidence level, the rate of compliance among the 1990 to 1992 requests is at least 80 percent and the percent still at the requesting facility on January 1, 1996, is at least 19 percent. We used the survey results of the 1994 to 1995 samples to estimate the rate of compliance, to date, of physicians whose waiver requests were received by ARC, DOT, HUD, and USDA from 1994 to 1995. 
We counted those physicians who were working on January 1, 1996, for the facility listed in agency data as in compliance. At a 95-percent confidence level, the rate of compliance among the 1994 to 1995 requests (those practicing on January 1) is at least 93 percent. We also used the survey results of the 1994 to 1995 samples to estimate the practice specialties and practice settings of physicians who were practicing on January 1, 1996. For this analysis, we included those 150 physicians who were practicing on January 1, 1996, for the facilities listed by the agency. The estimates at the 95-percent confidence intervals are shown in tables VI.2 and VI.3. We also obtained comments on the use of waivers for physicians from the survey respondents. To identify the states’ participation in requesting waivers for physicians, we used a questionnaire for information on (1) whether or not the state had requested or planned to request waivers for physicians with J-1 visas in fiscal years 1995 and 1996, and (2) the state’s involvement in waivers for these physicians. We sent a questionnaire to the contact person provided by USIA or the official responsible for public health issues in all 54 eligible jurisdictions, including the 50 states, the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands. Each state reported on the number of waivers requested by the state, if any; factors considered in state requests; monitoring activities; and state involvement in requests for waivers made by federal agencies. The respondents also commented on the use of waivers for physicians to address medical underservice and provided a copy of their state’s written policies, if any, regarding these waivers. In addition, to obtain information on the number of waivers for physicians requested by the states in 1995, we telephoned officials at those states that indicated they had requested waivers in fiscal years 1995 or 1996. To determine the conditions attached to the waivers, we interviewed state and federal agency officials, reviewed their written waiver policies, and analyzed the results of our state survey. To look at coordination of physician placements, we cross-tabulated the agency data on waiver requests received by the agencies between 1994 and 1995 by state and selected those physicians whose waiver requests were sent to USIA between 1994 and 1995. We obtained the number of physicians who were NHSC scholarship or federal loan repayment recipients who were practicing in each state as of December 31, 1995, from HHS’ Bureau of Primary Health Care. We added the number of waiver physicians and NHSC physicians and compared them with the number of full-time-equivalent physicians needed to remove primary care Health Professional Shortage Area designations in that state as of December 31, 1995. We used the shortage area dedesignation level because it is the primary measurement used by HHS and the requesting agencies to establish the need for physicians. We used USIA’s data file to identify those locations for which more than one agency requested waivers for physicians and checked the requesting agencies’ data to see if they showed a request for that practice location. 
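The compliance estimates and confidence statements above reflect standard estimation for a stratified random sample. As an illustration only, the sketch below computes a stratified estimate of a compliance rate and a one-sided 95-percent lower confidence bound using a normal approximation with a finite population correction. The stratum population and sample sizes follow the sample design described above, but the per-stratum compliance counts are hypothetical, and this calculation is not necessarily the estimation procedure used for the figures reported in this appendix.

    # Illustrative sketch: stratified estimate of a compliance rate with a
    # one-sided 95-percent lower confidence bound (normal approximation with
    # a finite population correction). Compliance counts are hypothetical.
    from math import sqrt

    # stratum: (population size N_h, respondents n_h, respondents in compliance x_h)
    strata = {
        "ARC":  (362,  38,  36),
        "DOT":  (2,     2,   2),
        "HUD":  (477,  49,  47),
        "USDA": (1153, 111, 107),
    }

    N = sum(N_h for N_h, _, _ in strata.values())

    # Stratified point estimate: weight each stratum's rate by its population share.
    p_hat = sum((N_h / N) * (x_h / n_h) for N_h, n_h, x_h in strata.values())

    # Variance of the stratified estimator, with a finite population correction
    # applied to each stratum.
    variance = sum(
        (N_h / N) ** 2
        * (1 - n_h / N_h)
        * (x_h / n_h) * (1 - x_h / n_h) / (n_h - 1)
        for N_h, n_h, x_h in strata.values()
    )

    lower_bound = p_hat - 1.645 * sqrt(variance)  # one-sided 95-percent bound
    print(f"Estimated compliance rate: {p_hat:.1%}")
    print(f"Rate is at least (95-percent confidence): {lower_bound:.1%}")

Weighting each stratum by its population share prevents the heavily sampled strata from dominating the estimate, and the finite population correction narrows the bound where a large share of a stratum was sampled, as with the DOT requests.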
We identified instances where physicians did not comply with the terms of the waiver through (1) discussions with the ARC Inspector General and a review of reports from that office, (2) our survey of facilities where requesting agencies believed that the physicians were practicing, (3) site visits to facilities where the physicians were supposed to be practicing, and (4) discussions with USIA and other agency officials. If a facility indicated that the physician never worked there, we contacted the facility, INS, or both to obtain information on the reason the physician never worked there and to confirm that a waiver had been granted. We also reviewed case files at the requesting agencies and USIA to check for documentation, if any, of the physician’s departure from the facility or noncompliance. For physicians who never worked at the facilities or who had left them, we tried to locate the physicians through AMA data; the unique provider identification number database, which is maintained by the Medicare program; telephone listings; state licensing boards; and other sources. Exchange visitors are only a portion of physicians in graduate medical education programs. As shown in table II.1, about 1 in 10 physicians in programs accredited by ACGME was an exchange visitor in August 1995. Of those who were international medical graduates—physicians who did not graduate from U.S. or Canadian medical schools—about 1 in 3 was an exchange visitor. Only these exchange visitor physicians are subject to the J-visa 2-year foreign residence requirement. Hence, while policy changes regarding waivers for exchange visitors will affect more than one-third of the international medical graduates in graduate medical education or training, most international medical graduates will not be affected. (Table II.1 presents the number of residents by visa status, exchange visitor (J-visa) and nonimmigrant (H-visa); its notes state that the international medical graduate figures do not include graduates of Canadian medical schools and that medical school type was not indicated for 454 residents, or 0.5 percent of all residents.) Under the Mutual Educational and Cultural Exchange Act of 1961, the Director of USIA establishes programs intended to promote mutual understanding between the people of the United States and other countries by means of educational and cultural exchanges. Under these exchange visitor (J-1 visa) programs, designated organizations sponsor nonimmigrant aliens’ temporary visits to the United States for the purposes of teaching, instructing or lecturing, studying, observing, conducting research, consulting, demonstrating special skills, or receiving training. ECFMG is the designated sponsor for exchange visitors participating in graduate medical education. After completing their programs, participants are expected to return to their home countries and impart what they have learned and experienced to the people of their countries. Section 212(e) of the Immigration and Nationality Act requires that certain J-1 visa program participants, including participants in graduate medical education, reside for at least 2 years in the countries of their nationalities or last residences after leaving the United States. They must meet this requirement before they are eligible to apply for nonimmigrant visas (H and L) as temporary workers, for permanent residencies in the United States, or as immigrants. There was no 2-year foreign residence requirement or waiver provision in the exchange visitor program authorized with the passage of the U.S. Information and Educational Exchange Act of 1948. 
The act required participants to depart the United States after completing their programs. The 2-year foreign residence requirement and its related waiver provision evolved through a number of legislative changes after the exchange visitor program was authorized in 1948. According to the legislative history of a 1956 amendment, “the amendment would make perfectly clear to all concerned...and, above all, the foreign nationals themselves—that the exchange program is not an immigration program and should not be used to circumvent the operation of the immigration laws.” The 1956 amendment also provided for a waiver of the foreign residence requirement on the basis of a request from an interested U.S. government agency showing the waiver to be in the public interest. The legislative history explains the reason for this provision: “To make available the services of exchangees who possess talents desired by our universities, foundations and other institutions, the language of the House bill was modified to permit the waiver of the foreign residence requirement on the request of an interested U.S. Government agency.”

An amendment to the foreign residence provision in 1970 removed the blanket application of the foreign residence requirement for exchange visitors and imposed it only on participants (1) whose participation was financed in some way by the United States or their home countries or (2) whose home countries clearly needed their services. Also, participants could no longer meet the 2-year foreign residence requirement by residing in other foreign countries but had to reside in the countries of their nationalities or their last foreign residences before coming to the United States. This requirement still applies. The 1970 act also established two additional bases for waivers: persecution because of race, religion, or political opinion and statements by the participants’ home countries that they had no objections to the waivers. These bases still apply except that the statement of no-objection waiver is no longer available to participants in graduate medical education or training.

The Congress subsequently found “that there is no longer an insufficient number of physicians and surgeons in the United States such that there is no further need for affording preference to alien physicians in admission to the United States under the Immigration and Nationality Act.” In light of this finding, the Congress tightened immigration laws for foreign doctors and strengthened requirements affecting J-1 visa program participants who were coming to the United States for graduate medical education or training. The latter were made subject to the 2-year foreign residence requirement whether or not their programs were financed by a government, made ineligible to apply for waivers on the basis of no-objection statements from their home countries, limited to 3-year stays in the United States, required to make a commitment to return to their home countries after completing their training, and required to provide written assurances from their home countries that after completing their training and returning home, they would be appointed to positions in which they would fully use the skills acquired in their education or training.

In 1981, USIA asked the Congress to extend the limit up to 7 years for medical doctors to encourage them to study in the United States rather than in a Communist country. The House Committee on the Judiciary questioned USIA officials regarding the likelihood that physicians would be willing to return home after 7 years, during which time they may have raised families in the United States.
The Congress increased the usual permissible duration of stay to 7 years, but it imposed additional requirements: Graduate medical education or training participants were required, as a continuing reminder, to furnish annual affidavits to INS attesting that they would return to their home countries upon completion of the education or training for which they came to the United States. U.S. officials were required to issue an annual report to the Congress on participants who had submitted affidavits, including their names and addresses, the programs in which they were participating, and their status in the programs. In reporting on this legislation, the House Committee on the Judiciary “notes the flagrant abuse of the exchange program during the past decade and seeks to alleviate possible ‘brain drain’ from various countries.” It said that the affidavits were to ensure that the physicians comply with the terms of their agreement.

Amendment of the Immigration and Nationality Act in 1994 established another basis for physicians to obtain waivers of the J-1 visa foreign residence requirement. Under the amendment, up to 20 waivers for physicians with J-1 visas may be granted at the request of a state department of public health or its equivalent each fiscal year. The law imposed several conditions for state-requested waivers: The alien physician must (1) demonstrate a bona fide offer of full-time employment at a health facility, (2) agree to begin employment at that facility within 90 days of receiving the waiver, and (3) agree to work there for at least 3 years while maintaining a nonimmigrant work status (H-1B visa). (The physician’s status as a nonimmigrant may not be changed until the employment contract is fulfilled.) The alien physician must agree to practice medicine for at least 3 years in a geographic area or areas designated by the Secretary of HHS as having a shortage of health care professionals. If the alien physician is otherwise contractually obligated to return to a foreign country, that country’s government must furnish a statement to the Director of USIA that it has no objection to a waiver. If the physician fails to fulfill the contract, he or she must reside and be physically present in the country of his or her nationality or last residence for at least 2 years after departing the United States before becoming eligible to apply for an immigrant visa, for permanent residence, or for any other change of nonimmigrant status. The 1994 amendments apply only to exchange visitors who were admitted to the United States under a J-visa or acquired J-visa status before June 1, 1996.

Other amendments to the Immigration and Nationality Act regarding waivers for physicians with J-1 visas were passed in the 104th Congress. The amendments were included in the Omnibus Consolidated Appropriations Act, 1997, and (1) impose additional requirements for waivers requested by interested U.S. government agencies, and (2) extend authorization for waivers for aliens entering the United States with a J-visa or acquiring such status through May 31, 2002. The amendments subject physicians seeking waivers through interested U.S. government agencies to some of the same requirements as those sponsored by state agencies.
For example, the amendments require such physicians to (1) agree to work for at least 3 years for the health facility named in the application, (2) work in an area designated by the Secretary of HHS as having a shortage of health care professionals, (3) begin work within 90 days of receipt of the waiver, and (4) maintain a nonimmigrant status until their 3-year commitment is completed. Physicians who do not fulfill this commitment become subject to the 2-year foreign residence requirement.

The U.S. General Accounting Office is conducting a review of J-1 visa waivers for physicians to practice in medically underserved communities. As part of this study, we are collecting information on state J-1 visa waiver programs as well as on states’ roles in requests for J-1 visa waivers made by U.S. government agencies. We are sending this questionnaire to all 50 states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands of the United States. Please complete the questionnaire and return it, along with a copy of your state’s policies on J-1 visa waivers for physicians, if any, within ONE WEEK of receipt. You can use the enclosed pre-addressed envelope or send it via FAX on (206) 287-4872. The questionnaire should take about 15 minutes to complete. Please provide the name, title, and telephone number of the individual who completed this questionnaire so that we may consult him or her, if necessary, for clarification of your responses or additional information.

(The percentages shown below reflect the distribution of the states’ responses.)

...J-1 visa waivers did your state receive from employers between October 1, 1995 and March 31, 1996? (enter 0 if no applications were received during that time)
Response: 112 applications (total)

5. Based on your current situation, how adequate is the annual limit of 20 state J-1 visa waivers to meet your state’s needs for physicians under this program? (check one)
18% much more than adequate
29% more than adequate
27% adequate
6% less than adequate
0% much less than adequate
18% too early to tell

6. Listed below are some of the factors you might consider when reviewing state J-1 visa waiver applications. Which of the following factors, if any, does your state consider in deciding whether to request state J-1 visa waivers? (check all that apply)
97% Whether the practice location is in a health professional shortage area (HPSA)
65% Whether the practice location is in a medically underserved area/population (MUA/MUP)
29% Whether other state J-1 visa waiver physicians are practicing in the area
24% Whether other J-1 visa waiver physicians who received waivers through U.S. government agencies are practicing in the area
27% Whether physicians under National Health Service Corps (NHSC) obligations are practicing in the area
85% Whether the facility has tried recruiting in the past without success
0% None of the above factors

7. We are also interested in any activities your state conducts to monitor compliance with the conditions of the J-1 visa waiver. Which of the following activities, if any, does your state conduct or intend to conduct this year for state J-1 visa waiver physicians? (check all that apply)
35% We require periodic reports by the ...
50% We require periodic reports by the ...
21% We conduct periodic site visits
82% We rely on the employers to enforce ...
15% We rely on other federal or state ...
38% We act in response to reports from other federal or state agencies
32% Other (please specify)

J-1 VISA WAIVERS REQUESTED BY U.S. GOVERNMENT AGENCIES
8. J-1 visa waivers may also be requested by interested U.S. government agencies (e.g., Dept. of Agriculture, Appalachian Regional Commission, Dept. of Housing and Urban Development). Which of the following activities does your state conduct for interested government agencies’ requests for J-1 visa waivers for physicians in your state? (check all that apply)
72% Prepare a letter from a state official when supporting the waiver
78% Verify that the request is for a ...
37% Track practice locations of the agencies’ J-1 visa waiver physicians
15% Monitor physician compliance with the conditions of the waiver
48% Provide assistance to facilities applying for U.S. government agencies’ J-1 visa waivers
7% Other (please explain)

9. If you have any comments on the J-1 visa waiver program, such as the reasons for participation or nonparticipation, please enter them in the space below. (Attach separately if additional space is needed)

10. Please include a copy of your state’s written policies, if any, regarding J-1 visa waivers for physicians, with your completed questionnaire. (33 states provided written policies.)

Thank you for your assistance. When our study is complete, we will send you a copy of our report.

This appendix contains the responses to questions we asked facilities that requested waivers of the J-1 visa foreign residence requirement for physicians. We sent the questionnaire to the facilities for 211 physicians who had their waivers requested through ARC, DOT, HUD, and USDA from 1994 to 1995 (determined on the basis of the date the agency received the waiver request). We analyzed the practice specialties and practice settings for the 150 physicians who were practicing on January 1, 1996, for the facilities listed on agency data. (Table VI.2, Practice Settings of Waiver Physicians Practicing on January 1, 1996, and a companion table, each broken out by requesting agency (DOT, n=2; ARC, n=38; HUD, n=29; USDA, n=81), are not reproduced here. A note to the tables states that practice settings include rural health clinics, mental health clinics, health department clinics, and other practice settings.)

This appendix contains information for each state showing (1) the identified physician need in the state, (2) the number of physicians with waivers granted or in process, and (3) the number of physicians who received NHSC scholarships or federal loan repayment who were practicing in the state. We used the number of full-time-equivalent physicians identified by HHS as needed to remove primary care Health Professional Shortage Area designations in the state on December 31, 1995, because it is the primary measurement used by HHS and the requesting agencies to establish the need for physicians. Although physicians with waivers may also practice in designated Medically Underserved Areas, HHS does not remove this designation and, as a result, there is no dedesignation level to measure the need for physicians in the Medically Underserved Area. To measure the number of waiver physicians who would be practicing on December 31, 1995, or shortly thereafter, we used data from the requesting agencies and states on the number of waiver applications sent to USIA from 1994 to 1995. For the number of NHSC physicians in a state, we used the number of NHSC scholarship and loan repayment recipients who were practicing on December 31, 1995. This is a conservative count of NHSC physicians, however, because it does not include the number of physicians who were NHSC state loan repayment recipients practicing in shortage areas, which was not available by state from NHSC at the time of our review.
As shown in table VII.1, the degree to which the identified physician shortage can be offset by the waiver physicians practicing in a state varied between the states. In addition, when combined with the NHSC physicians practicing there, the number exceeded the number of physicians needed to remove some states’ primary care Health Professional Shortage Area designations, while other states’ identified physician needs were not met by these two physician sources.

In addition to those named above, the following individuals made important contributions to this report: Sarah F. Jaggar, Issue Area Director; Susan Lawes, Senior Evaluator; Susie Anschell, Evaluator; Julie Rachiele, Technical Information Specialist; Evan Stoll, Computer Specialist; Jerry Aiken, Computer Specialist; Julian Klazkin, Senior Attorney; Stan Stenersen, Evaluator; Lisa DeCora, Intern; Kathleen Belfi, Support Services Technician; and William J. Carter-Woodbridge, Communications Analyst.
GAO reviewed the extent to which state and federal agencies used waivers to meet physician shortages in medically underserved areas, focusing on: (1) how many foreign physicians with J-1 visas receive waivers, where they practice, and their medical specialties; (2) whether federal agencies and states effectively coordinate policies and procedures for granting these waivers; and (3) the extent to which foreign physicians who receive waivers comply with waiver requirements to practice in underserved areas. GAO found that: (1) the number of waivers for physicians with J-1 visas to work in underserved areas has risen from 70 in 1990 to over 1,300 in 1995; (2) requesting waivers for physicians with J-1 visas has become a major means of providing physicians for underserved areas; (3) in 1994 and 1995, the number of waivers processed for these physicians equaled about one-third of the total identified need for physicians in the country; (4) almost all of these waiver physicians have primary care medical specialties and they are practicing in 49 states and the District of Columbia; (5) nearly 30 federal and state agencies were processing waiver requests for physicians from hospitals, health centers, and other health care facilities by 1995; (6) among them, no agency has clear responsibility for ensuring that placement efforts are coordinated; (7) although the federal agencies are now working together informally, they still have differing policies, overlapping jurisdictions, and varying communication with the states; (8) the Department of Health and Human Services (HHS) believes that the physicians should return home after completing their training to meet the intent of the exchange visitor program, and the other agencies view the waiver provision as a means to secure physicians to meet the health care needs of their constituents; (9) while more than 9 of every 10 physicians whose waivers were processed between 1994 and 1995 were practicing at their locations in January 1996, controls are somewhat weak for ensuring that physicians continue to meet the terms of their agreements; (10) even when the physicians and facilities follow the agencies' rules, the rules do not restrict physicians from working with those segments of the population that already are adequately served; and (11) proposed regulations published by the United States Information Agency and developed in working with the informal interagency group, coupled with recent amendments to the Immigration and Nationality Act would address many of the coordination and compliance problems, but not all of them.
DIA was built to replace Stapleton International Airport (SIA), which in 1994 was the eighth busiest airport in the world. A great deal of controversy was generated by DIA’s construction. Proponents pointed to various inadequacies related to SIA’s facilities, limits on expansion, and noise pollution. Opponents raised objections related to DIA’s construction and operating costs, levels of future passenger demand, and long-term financial viability. The airport, which opened for business on February 28, 1995, experienced numerous construction delays and cost overruns. Allegations of inadequate disclosures in bond offerings to the public have resulted in an SEC investigation and several lawsuits. About 65 percent of DIA’s revenues are collected from the airlines for space rental and landing fees. The remaining 35 percent of revenues come from concessions, passenger facility charges (PFCs), interest income, and other sources. To help ensure that revenues will cover costs, DIA has a rate maintenance covenant with bondholders. This covenant requires DIA to set annual rates and fees to result in an amount that, when combined with funds held in reserve in the coverage account, is equal to (1) all costs of operating the airport plus (2) 125 percent of the debt service requirements on senior bonds for that year. Senior bonds comprise about $3.5 billion of DIA’s total $3.8 billion bond debt. DIA’s revenue bonds were issued under the 1984 General Bond Ordinance, which promises bondholders that the rate maintenance covenant will be honored in setting billing rates for airlines. Under the airlines’ use and lease agreements, each airline is required to pay rates and charges sufficient to meet the rate maintenance covenant after taking into consideration all airport revenues. Because there are no limits on costs built into the rate maintenance cost recovery model, DIA has agreed to share 80 percent of net receipts with airlines for 5 years from February 28, 1995, and lower percentages thereafter. After sharing net receipts with the airlines, DIA estimates that it will retain an estimated $6.3 million to $7.6 million a year for fiscal years 1996 through 2000, which will be transferred into the capital fund. Many airports calculate the airlines’ cost per enplaned passenger as a benchmark. This cost is based on the airlines’ share of airport costs, divided by the actual number of enplaned passengers. DIA’s lease contract with United Airlines includes a provision for nullifying the contract if the cost per enplaned passenger rises beyond a predetermined level. To identify risks that could affect DIA’s financial performance, we read and evaluated risk disclosures in DIA’s Official Statements; interviewed DIA, Colorado Springs Airport, and SEC officials; obtained financial information on United Airlines; interviewed airline industry experts, including airline executives, aviation forecasters, and airline financial consultants; and obtained data from American Express on ticket prices at DIA. 
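The rate maintenance covenant and the cost-per-enplaned-passenger benchmark described above are simple to compute. The minimal sketch below, in Python, illustrates the arithmetic; every dollar figure, the passenger count, and the lease threshold are hypothetical assumptions for illustration, not DIA’s actual budget or lease numbers.

    # Illustrative sketch (all figures are hypothetical assumptions). The covenant
    # requires that rates and fees, combined with the coverage account, equal all
    # operating costs plus 125 percent of senior-bond debt service; the airlines
    # are billed whatever remains after all other airport revenues are considered.
    operating_costs = 160_000_000       # annual airport operating costs (assumed)
    senior_debt_service = 280_000_000   # senior-bond debt service for the year (assumed)
    coverage_account = 58_000_000       # funds held in the coverage account (assumed)
    other_revenues = 175_000_000        # concessions, PFCs, interest, etc. (assumed)
    enplaned_passengers = 15_900_000    # passengers boarding during the year (assumed)
    lease_threshold = 20.00             # per-passenger cost trigger in a lease (assumed)

    required_from_rates = operating_costs + 1.25 * senior_debt_service - coverage_account
    airline_share = required_from_rates - other_revenues
    cost_per_enplaned = airline_share / enplaned_passengers

    # Because the airlines' share of costs is roughly fixed, the benchmark rises as
    # enplanements fall; the break-even level for a given lease trigger is:
    break_even_enplanements = airline_share / lease_threshold

    print(f"Airline share of costs:      ${airline_share:,.0f}")
    print(f"Cost per enplaned passenger: ${cost_per_enplaned:.2f}")
    print(f"Enplanements at the ${lease_threshold:.0f} trigger: {break_even_enplanements:,.0f}")

Because the covenant recovers a fixed pool of costs from the airlines, any shortfall in passengers or nonairline revenue flows directly into the per-passenger benchmark.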
To review DIA’s revenues, we (1) sampled DIA’s daily revenue transactions for March through May 1995 and examined supporting documentation regarding collections of airline rents and landing fees, (2) tested supporting documentation for revenues from concessions such as parking, fees from rental car companies, food and beverage concessions, and retailers, (3) extracted data from reports on City of Denver investment income and journal vouchers on receipts of passenger facility charges from airlines, (4) compared actual revenues for March through May 1995 to the monthly estimates of cash flows DIA prepared for 1995, (5) analyzed and studied for consistency DIA’s long-term estimates covering 1996 through 2000 for revenue and other financial information, and (6) reviewed the terms of lease agreements with airlines and cargo carriers, obtained and analyzed passenger data from airline landing reports for March through August 1995, and became familiar with the rates and charges methodology DIA used to set rental rates and landing fees for airlines. To review DIA’s debt service requirements, we examined DIA’s plan of finance, which summarized details on all outstanding revenue bonds at DIA and contained detailed amortization schedules for paying off revenue bonds. We compared selected payments on this schedule to bond documents. We also inspected documentation for actual transfers of operating funds to DIA’s bond fund for March through May 1995. To review DIA’s operating costs, we obtained DIA’s weekly cash flow statements for March through May 1995 and operating expense data files for that period and traced samples from those files to supporting documentation. We also reviewed DIA’s operations and maintenance cost budgets by studying supporting documentation, such as contracts and other DIA budgetary analysis, for all budgetary line items exceeding $1 million. We compared DIA’s budgets to those of other operating airports. Finally, we interviewed DIA and City of Denver officials to gain an understanding of the accounting system for DIA expenses and to obtain further information about transactions tested. We used information from our tests of revenues, bond debt, and expenses to prepare a statement of actual cash flows for March through May 1995. We also analyzed the cash balances the City of Denver maintained in DIA operating and cash reserve accounts. To review DIA’s actual cash reserves and cash flows, we obtained cash reserve balances from City of Denver accounting records and reviewed the audit work papers of DIA’s auditors, identified restrictions on the use of reserve funds, interviewed bond analysts, and performed detailed analyses of DIA documentation supporting cash receipts and disbursements. We also interviewed DIA managers and airline officials and reviewed testimony before a congressional subcommittee by proponents and opponents of DIA. We performed our work between March 1995 and November 1995 in accordance with generally accepted government auditing standards for performance audits. This report is not intended to be a financial projection under the American Institute of Certified Public Accountants’ standards for such reporting. There are certain risks inherent in any projection of financial data to future periods. Specifically, differences between expected and actual results of operations may arise because events and circumstances frequently do not occur as expected, and those differences may be material. 
In addition, DIA’s future financial performance could be threatened by a number of factors specific to the airport’s operations, most notably the volatility of the airline industry in general and any future deterioration in the financial health of its major tenant, United Airlines. Also, because DIA’s revenues are primarily driven by passenger volume, increased ticket prices may be a concern if they result in significant passenger declines. Other risks include the possibility of (1) unknown construction defects resulting in major unexpected costs or (2) adverse actions arising from a current Securities and Exchange Commission investigation and/or lawsuits filed by bondholders against DIA. The potential severity of the effect on DIA’s future financial condition varies with each of these risk elements.

Financial results of the airline industry, a key risk factor, have been volatile since deregulation in 1978. Most airlines have reported substantial net losses since 1990, with total losses of about $13 billion from 1990 through 1994. For example, one of the airlines that used DIA, MarkAir, filed for bankruptcy in April 1995 and went out of business in October 1995. In addition to the condition of the airline industry in general, an important factor affecting DIA’s financial viability is the financial health of its major tenant, United Airlines. United accounted for over 70 percent of passenger enplanements during the first 4 months of 1995, as discussed later. Also, DIA has projected that 43.1 percent of enplanements for 1995 will be passenger transfers as a result of United’s hubbing operation. United Airlines reported annual losses in 1993, 1992, and 1991 of $50 million, $957 million, and $332 million, respectively. United reported profits in 1994 for the first time since 1990, with net earnings of $51 million shown on its audited financial statements for the calendar year 1994. United Airlines is thinly capitalized, with net equity of about $76 million and debt of about $12 billion, reported as of March 31, 1995. In late October 1995, United announced record profits of $243 million for the quarter ended September 30, 1995. In commenting on a draft of this report, DIA’s Director of Aviation acknowledged that there are risks inherent in any business venture and related financial projections and that the volatility of the airline industry could affect the financial performance of DIA. The comments reflect DIA’s view that the risks associated with the financial health of United Airlines are offset by several factors, including DIA’s strong market in both origination and destination travel as well as regional connecting traffic.

Risk to passenger volume is another key consideration in DIA’s future financial health. One factor influencing passenger volume, in turn, is ticket prices. Ticket prices at DIA increased 20 percent to 38 percent compared to those charged a year earlier at SIA. American Express recently reported that the average fare paid at DIA for March 1995 was 20 percent higher than fares at SIA in March 1994, with an average fare of $290 at DIA compared to $241 at SIA. American Express also reported that the average fare nationally, based on 215 domestic city pairs, showed no change during that period. In addition, the American Express review for the second quarter of 1995 reported that the average fare paid at DIA in June 1995 was 38 percent higher than the average fare at SIA in June 1994, while the average fare was up 7 percent nationally during that period.
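The fare comparisons above are straightforward percentage changes; as a quick check, the March figures reported by American Express work out as follows.

    # Quick check of the reported March comparison: the average DIA fare for
    # March 1995 against the average SIA fare for March 1994.
    dia_fare_march_1995 = 290
    sia_fare_march_1994 = 241
    increase = (dia_fare_march_1995 - sia_fare_march_1994) / sia_fare_march_1994
    print(f"Average fare increase: {increase:.0%}")   # roughly 20 percent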
We also reviewed the Department of Transportation’s (DOT) airfare statistics, which are based on a broader 10 percent sample of all domestic airline travel. DOT’s data showed that the average fare for Denver travel for the second quarter of 1995—DIA’s first full quarter of operation—was 9 percent higher than the SIA fare for the same period in 1994. According to DOT’s statistics, the average fare nationwide for the second quarter of 1995 was 2.4 percent higher compared to the average fare 1 year earlier. According to airline industry representatives we interviewed, airport charges to airlines for rental costs and landing fees represent a small fraction of airlines’ total costs, which also include, for example, aircraft fuel and maintenance costs and personnel and benefits expenses. Thus, the industry officials indicated that the lack of competition for United Airlines in the Denver market, rather than DIA airline charges, is probably the most important factor affecting the price of tickets.

United Airlines dominates the market at DIA, carrying about 70 percent of all passengers enplaned in Denver during the first 4 months of 1995. Historically, Continental Airlines was United’s major competition in the Denver market; however, as discussed later, Continental has eliminated its hubbing operation from Denver. Airlines that have a reputation for low fares, such as Southwest, have stated in media reports that they have chosen not to use DIA because its rates are too high. The airport at Colorado Springs, which is located about 70 miles south of Denver, has attracted a low-fare airline, Western Pacific Airlines, that is offering competition to DIA. Colorado Springs expects to enplane 1.4 million passengers in 1995 compared to 791,000 in 1994, a 72-percent growth rate. Colorado Springs Airport officials told us that some of the growth is fueled by Denver passengers, although they have not performed any studies to verify this. Future growth at Colorado Springs, however, will be limited by its size; it is currently operating at full capacity with only about 7 percent of DIA’s passenger volume.

Our analysis of landing reports generated by the airlines for the first 6 months of operations at DIA showed that DIA enplaned 100.3 percent of forecasted passengers for March, April, and May 1995. However, volumes declined through the summer of 1995 as compared to forecasts, with 94.5 percent in June, 90.6 percent in July, and 89.0 percent in August. DIA officials attributed the decline in passenger volume in the summer of 1995 primarily to higher ticket prices, as well as to the loss of Continental’s hubbing operation. Passenger volume has improved in recent months, with 90.3 percent of forecasted passengers enplaned in September, 94.8 percent in October, and 99.1 percent in November.

Another critical risk factor that we identified is the many allegations that have been made about improper construction practices at DIA, involving the main terminal, concourses, and runways. Although investigations to date have not disclosed major deficiencies that would result in significant repair costs, if undisclosed defects are present that eventually cause expensive repairs, DIA’s cost structure could be materially affected.
It should be noted, however, that the City of Denver’s contracts with its DIA building contractors included a standard “Latent Defect Clause.” This clause states that any hidden defects that develop as a result of materials and equipment incorporated into the project will be remedied by the contractor at no extra cost to the city.

The City of Denver has advised us that the Securities and Exchange Commission (SEC) is conducting a formal investigation regarding the adequacy of the city’s disclosure of information in bond offering documents with respect to the automated baggage system and related delays in opening the airport. The question of whether the city will be able to repay investors does not appear to be within the scope of that investigation. Generally, when the SEC finds a violation of federal securities law, it has the discretion to pursue a range of enforcement mechanisms and penalties. The SEC may, for example, require correction of public filings, direct future compliance, or, in some circumstances, ask a court to impose monetary penalties. The City of Denver provided us with a copy of a letter dated October 11, 1995, in which SEC regional staff advised the city that as a result of its investigation, the staff planned to recommend that the Commission institute an administrative action, the next step in the SEC’s enforcement process. The city was given an opportunity to submit a written statement (known as a “Wells Submission”) to the SEC to counter the staff’s recommendation. The city advised us that it issued its Wells Submission on December 7, 1995, and denied violating federal securities laws in connection with the financing of DIA.

Also, in February 1995 and March 1995, four class action lawsuits were filed in the United States District Court for the District of Colorado by DIA bondholders seeking damages from the City and County of Denver. The four lawsuits allege that the city misrepresented the design and construction status of the automated baggage system and the opening date of DIA. In addition, two of the lawsuits make allegations that the city and other defendants engaged in a conspiracy to conceal adverse facts from the investing public in order to artificially inflate the market price of the bonds. On May 1, 1995, a class action complaint was filed in Denver District Court by the four plaintiffs in the federal court cases, making substantially similar allegations. If the SEC were to determine as a result of its investigation that disclosures were not fair or complete, such a finding could aid litigants claiming losses from improper disclosures.

In its Official Statement published in June 1995 to promote bond sales, DIA noted several investment risk factors that could potentially affect the security of DIA bonds, including the ongoing SEC investigation and bondholder litigation discussed above. In addition, we have summarized the following risk factors from that statement as items that must be noted as part of any analysis of DIA’s long-term financial condition. DIA estimates operating revenues of about $500 million per year for the period 1995 to 2000, and anticipates receiving federal grants in amounts adequate to retire $118 million in subordinate bonds over the 5-year period. Grants require congressional action that cannot be assured. Many of the airlines operating at DIA, including United, Continental, Delta, Northwest, TWA, and others, have sent letters objecting to various aspects of the rates and charges for the airport.
DIA officials stated that only TWA has filed a complaint with DOT, and DOT resolved TWA’s complaint in favor of the City of Denver. Other factors that will affect aviation activity at DIA include (1) the growth of the economy in the Denver metropolitan area, (2) airline service and route networks, (3) national and international economic and political conditions, (4) the price of aviation fuel, (5) levels of airfares, and (6) the capacity of the national air traffic control system.

Based on our review of DIA’s long-term budgets and the data available on actual operations from its opening on February 28, 1995, through August 31, 1995, we found no significant issues that would lead us to believe that DIA will be unable to meet its financial obligations. However, the risks we identified in the previous section must be carefully considered by users of our report. Passenger enplanements are a key measure primarily because United Airlines, which accounts for over 70 percent of DIA passengers, has an agreement with DIA that it will honor its lease as long as costs per enplaned passenger do not exceed a specified level. DIA’s leases also include a rate maintenance agreement that allows it to charge rates and fees sufficient to cover DIA’s debt service and operating costs. Thus, the effectiveness of this agreement in supporting DIA’s ability to meet its obligations is based upon maintaining the level of enplanements and costs per enplaned passenger within limits specified by the United lease agreement. During its initial 6 months of operations, DIA’s volume of enplaned passengers averaged 95 percent of estimates. Both DIA and the Federal Aviation Administration (FAA) expect enplanement levels to increase over the next 5 years. Although leases were below anticipated levels due to Continental Airlines’ removal of its hub from Denver and MarkAir’s bankruptcy, DIA estimates that it will have positive net revenues of $19.5 million for 1995. Debt service requirements have been spread relatively evenly over the next 30 years. DIA’s current budgeted operating costs were based on contractual agreements and detailed budgets. DIA expects these operating expenses to increase with the levels of inflation over the next 30 years. DIA posted positive cash flows during the period under review and has adequate cash reserves to draw on in case of emergency in the immediate future.

DIA’s ability to generate sufficient revenues to cover its operating costs and debt service requirements ultimately depends upon the number of passengers that choose to use the airport. Passenger volume dictates airline demand for space at DIA and is directly linked to the financial success or failure of DIA concessions. We analyzed airline landing reports for the first 6 months of operations at DIA and found that its volume of enplaned passengers was about 95 percent of its estimates. DIA and FAA both expect enplanement levels to increase in future years. Provided DIA does not suffer a significant decline in passenger levels, a risk we previously discussed, or incur unanticipated costs, it should be able to keep its cost per enplaned passenger within the limits specified by its lease agreement with United Airlines. In October 1995, DIA estimated that passenger enplanements for 1995 would be 15.9 million, while FAA estimated that they would be 15.1 million. Both estimated that enplanements would rise from 1995 to 2000, reaching 18.2 million in 2000.
DIA estimated an annual growth rate of about 2.6 percent in passenger volume from 1995 through 2000, while FAA estimated an annual growth rate of about 4 percent from 1995 through 2010. United Airlines has an agreement with DIA that it will honor its 30-year lease as long as costs per enplaned passenger do not exceed $20, measured in 1990 dollars. In June 1995, DIA estimated that United’s cost per enplaned passenger in 1995 would be $16.31 in 1990 dollars and, if enplanement levels approximate estimates and unanticipated costs are not incurred, would drop to $13.22 by the year 2000. In our October 1994 report, we estimated that, with all other factors remaining constant, passenger traffic would have to drop to between 12 million and 12.5 million enplaned passengers in 1995 to drive costs above $20 per enplaned passenger. DIA has three concourses containing a total of 90 jet gates; however, as of September 1, 1995, only 76 of the gates were being used by airlines, with 69 of them covered by lease agreements. DIA is operating substantially below capacity due to Continental Airlines’ decision to remove its hub from Denver and, to a lesser extent, MarkAir’s bankruptcy and failure. Although this reduced the level of operations, DIA’s reports show that it has covered its costs and achieved positive cash flows for its first 6 months. Following DIA’s April 1995 agreement allowing Continental to reduce its lease commitment from 20 gates to 10, DIA raised its rental rates to airlines, effective May 1, 1995, by 6.8 percent. Other airlines, primarily United, have increased passenger volume due to Continental’s pullout. In addition, reported operating costs have been below budget. All these factors have contributed to DIA’s positive financial results to date. Furthermore, because DIA is operating below capacity, it is positioned to meet the expected increase in passenger volumes in future years without constructing new facilities. DIA’s 14 idle gates were all on concourse A, which was planned to support Continental Airlines’ hubbing operation. Continental entered into an agreement with DIA in August 1992 to lease 20 of the 26 gates on concourse A but had eliminated most of its Denver operations by the time DIA opened in 1995. In April 1995, Continental’s lease commitment was reduced to 10 gates for 5 years. Further, Continental was allowed to sublease up to 7 of these gates. As of September 1, 1995, Frontier was subleasing 4 gates and America West was subleasing 1 gate from Continental. Two other gates on concourse A were used by Mexicana Airlines and Martinair Holland. All 44 gates on concourse B were leased by United Airlines for 30 years. The 20 gates on concourse C were used by various airlines, with 13 gates leased as of September 1, 1995, generally under 5-year leases. The remaining seven gates were used by non-signatory airlines. Airlines operating on a non-signatory basis pay 20 percent higher rates for space rent and landing fees and do not share in the year-end dividend based on 80 percent of DIA’s net receipts. Five of those unleased gates on concourse C were used by MarkAir, which filed for bankruptcy in April 1995. In October 1995, MarkAir went out of business, owing DIA about $2.9 million. DIA also hosts a substantial air cargo operation. It has lease agreements with several major cargo carriers, including Federal Express, United Parcel Service, and Emery Worldwide. 
According to DIA’s estimate, which we reviewed and found reasonable, this operation was to produce $3.3 million in space rent plus about $5 million in landing fees for fiscal year 1995. Debt service requirements and operations and maintenance are DIA’s two major cost components. Debt service costs are expected to remain relatively stable over the next 30 years. Operating costs are expected to rise with inflation over that time frame. Debt service payments constitute over 60 percent of DIA’s estimated annual costs. DIA’s bonds are scheduled to be paid off in relatively equal installments over the next 30 years. After a bond sale in June 1995, DIA had bonds payable of about $3.8 billion. DIA’s June 22, 1995, estimates included two future bond sales to finance capital improvements. The first of these sales, held on November 15, 1995, after the end of our review, yielded $107,585,000 in bond principal. The second sale was scheduled for January 1, 1997, for $40,835,000 in bond principal. Based on its current contractual agreements with bondholders and estimated servicing requirements on the two additional bond sales, DIA’s cash requirements for servicing the debt on its bonds will be spread relatively evenly over the next 30 years. Annual bond payments will rise from about $288 million in fiscal year 1996 to about $327 million in fiscal year 2005. From fiscal years 2006 through 2024, the payments are to range from $307 million to $329 million, with a final bond payment in fiscal year 2025 totaling $267 million. In addition to debt service payments, operations and maintenance and other expenses of the Denver Airport System (including upkeep of Stapleton International Airport) comprise DIA’s other major cost element. DIA estimated that these costs would be about $159 million in fiscal year 1996 and would increase by about 3 percent a year as a result of inflation. Table 1 lists DIA’s estimated operations and maintenance costs for fiscal year 1996 by cost category. We reviewed DIA’s budgets for operations and maintenance costs by category and found the estimated amounts to be reasonable and supported by adequate documentation. Many cost categories were supported by contracts for services, including cleaning services, parking system management, and operation and maintenance of the underground train. Other categories were based on detailed, documented budgets that were developed using data such as number of employees, utility costs per square foot of building space, and other standard estimating methods. Estimates beyond the current year are based on 1996 estimates that were adjusted for a reasonable inflation factor. Estimates and analyses of short- and long-term cash flows are valuable financial management tools, especially when cash flows are volatile or uncertain—for example, when an operation is just getting underway or during periods when significant construction and capital improvement programs are being carried out. Used in conjunction with an entity’s other important financial reports, cash flow estimates and statements provide useful analytical information. For example, comparing cash flows with accrual-based accounting information can yield valuable management information. In response to our request, DIA prepared estimates of cash flows for fiscal years 1996 through 2000. In April 1995, DIA officials also provided estimates of cash flows by month for 1995. 
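A cash flow estimate or statement of the kind discussed here reduces to simple arithmetic: sum receipts, subtract disbursements, carry the balance forward, and compare the result with the corresponding estimate. The minimal sketch below, in Python, shows the general form; the monthly figures are hypothetical assumptions and are not DIA’s actual receipts, disbursements, or estimates.

    # Illustrative sketch (hypothetical monthly figures, in $ millions): compile net
    # cash flow from receipts and disbursements and compare it with estimates.
    months        = ["March", "April", "May"]
    receipts      = [42.0, 45.5, 44.0]   # assumed
    disbursements = [41.5, 44.8, 43.6]   # assumed
    estimated_net = [0.3, 0.5, 0.4]      # assumed

    balance = 36.0                       # assumed opening cash balance
    for month, cash_in, cash_out, est in zip(months, receipts, disbursements, estimated_net):
        net = cash_in - cash_out
        balance += net
        print(f"{month}: net cash flow {net:+.1f}M (estimate {est:+.1f}M), ending balance {balance:.1f}M")

    print(f"Three-month net cash flow: {sum(receipts) - sum(disbursements):+.1f}M")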
We compiled DIA’s actual cash flows for March through May 1995 and found that DIA produced a positive cash flow of $1.5 million in its first 3 months of operations. In September 1995, DIA’s finance office provided us with cash flow statements it prepared for March through August 1995. The statements showed a positive cash flow of $1.8 million for March through May, which approximates the results of our analysis, and $12.1 million for June through August 1995. We confirmed that the statement’s $49.9 million ending cash balance as of August 31, 1995, matched the balance on DIA’s general ledger.

At the time of our review, DIA officials said they were not required to prepare long-term cash flow estimates or statements. DIA’s Finance Director told us that DIA did not use long-term cash flow estimates and analysis to assist in managing DIA operations. She stated that financial information available on the accrual basis of accounting was not materially different from information available on the cash basis and, in DIA’s view, was sufficient for long-term planning. Finally, she stated that DIA’s rate maintenance covenant ensures that DIA will generate adequate receipts to cover all disbursements. We surveyed seven airports about their use of cash flow estimates as a management tool. Two of the seven stated that they use cash flow estimates. For example, an Atlanta airport official stated that cash flow estimates were particularly valuable in its new concourse construction program. The five airports that did not use cash flow analyses had stable operations that experienced minimal fluctuations from year to year in receipts and disbursements.

In commenting on a draft of this report, DIA’s Director of Aviation reiterated DIA’s position that cash flow estimates beyond the current fiscal year are not useful for several reasons and that the airport’s 5-year feasibility study is an adequate long-term planning tool. We believe, however, that cash flow estimates would have been a valuable management tool during the period of our review as DIA completed construction. Also, in conjunction with DIA’s other financial data, such estimates could continue to provide useful analytical data as the airport’s operations stabilize during its initial years of operations. DIA’s comments also stated that weekly cash flow estimates had been prepared since January 1994 and that weekly estimates were rolled up into monthly and quarterly reports. During the course of our work, we made repeated requests for such estimates, including a written request on January 27, 1995. In a letter dated February 2, 1995, DIA’s Assistant Director of Aviation for Finance advised us that the monthly cash flow estimates for 1995 had not been completed. As stated earlier in this section, we did not receive DIA’s estimates of cash flows for fiscal year 1995 by month until April 1995.

As of September 25, 1995, the date of DIA’s latest available reserve fund statement, DIA had an operating cash balance of $57 million and held $420 million in reserve funds. In the event of a temporary financial crisis, about $260 million of these reserve funds could be used, subject to certain restrictions. Table 2 presents DIA’s reported reserve fund balances as of September 25, 1995. The following restrictions apply to the use of the reserve funds: Bond Reserve Fund. Under terms of the bond ordinance, money can be withdrawn from this fund only to meet debt service requirements. Withdrawn funds must be paid back at the rate of 1/60th of the amount owed each month.
Our analysis showed that about $200 million could be withdrawn from this fund before the payback requirements would exceed the remaining balance. However, according to bond analysts to whom we spoke, drawing on this fund could have a negative effect on DIA’s bond ratings if DIA seeks future bond financing. As previously discussed, only one additional bond sale is currently being planned. Capital Fund. This fund can be used without restriction to pay for capital improvement costs, extraordinary costs, or debt service requirements. DIA anticipates that in the ordinary course of business, it will draw upon this fund for capital improvements. Coverage Fund. DIA’s rate maintenance covenant requires that net revenues of the airport, combined with the coverage fund, equal no less than 125 percent of the debt service requirement on senior bonds for the upcoming year. The coverage fund amount is calculated at the end of each year and must be fully funded at that time. In June 1995, DIA reported that the December 31, 1996, coverage fund requirement will be $58.4 million. Any amounts withdrawn from the coverage fund must be replenished by December 31 of each year, which effectively limits the use of this fund in a financial crisis. Operations and Maintenance Reserve Fund. This fund must be fully funded by January 1, 1997. Full funding requires that 2 months of operations and maintenance expenses be on deposit in the fund, a requirement of about $27 million. This fund can be used to cover operations and maintenance expenses if net cash from operations is inadequate. We requested written comments on a draft of this report from the Secretary of Transportation and the Director of Aviation, DIA, of the City of Denver. A representative of the Secretary advised us that the Department of Transportation had no comments on the report. DIA’s Director of Aviation provided us with written comments, which are incorporated in the report as appropriate and reprinted in appendix I. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Transportation; the Director, Office of Management and Budget; officials of the City of Denver; and interested congressional committees. We will also make copies available to others upon request. Please contact me at (202) 512-9542 if you or your staff have any questions. Major contributors to this report are listed in appendix II. The following are GAO’s comments on the letter from Denver International Airport’s Director of Aviation dated January 22, 1996. 1. See the “Health of the Airline Industry and United Airlines” section of the report. Also, we did not reprint the referenced article. 2. See the “DIA Cash Flows” section of the report. Thomas H. Armstrong, Assistant General Counsel
Pursuant to a congressional request, GAO reviewed the Denver International Airport's (DIA) financial condition, focusing on DIA: (1) cash reserves and estimated cash flows; and (2) ability to meet its financial obligations. GAO found that: (1) predicting the future financial performance of DIA is difficult, since it has been operating for less than 1 year; (2) the difficulties in projecting DIA financial performance relate to the volatility of the airline industry, unexpected construction delays and costs, and the city of Denver's ability to repay airport investors; (3) the Securities and Exchange Commission is formally investigating the adequacy of the city's disclosure of information in bond documents with respect to delays in opening the airport; (4) there is no evidence that DIA will be unable to meet its financial obligations, since DIA has generated positive cash flows in its first 6 months of operation despite operating at well below capacity; (5) DIA debt service costs are expected to remain stable over the next 30 years, while operating and maintenance costs are expected to rise with inflation; and (6) as of September 1995, DIA had an operating cash balance of $57 million and held $420 million in reserve funds, of which $260 million could be used in the event of a financial crisis.
OSHA was established after the passage of the Occupational Safety and Health Act in 1970. In the broadest sense, OSHA was mandated to ensure safe and healthful working conditions for working men and women. The act authorizes OSHA to conduct “reasonable” inspections of any workplace or environment where work is performed by an employee of an employer. The act also requires that OSHA conduct investigations in response to written and signed complaints of employees alleging that a violation of health or safety standards exists that threatens physical harm, or that an imminent danger exists at their worksites, unless OSHA determines that there are no reasonable grounds for the allegations. OSHA inspections fall into two broad categories: those that are “programmed” and those that are “unprogrammed.” Programmed inspections are those the agency plans to conduct because it has targeted certain worksites due to their potential hazards. Unprogrammed inspections are not planned; instead, they are prompted by things such as accidents or complaints.

How OSHA responds to complaints has changed over time. In the wake of the Kepone case, OSHA started to inspect virtually any complaint, which led to a backlog of complaint-driven inspections, according to the officials we interviewed. In its early response to the backlog, OSHA adopted a complaint process whereby each complaint was categorized based on whether or not it was written and signed by complainants. “Formal” complaints met both conditions, while “nonformal” complaints were oral or unsigned. OSHA further categorized complaints by the seriousness of the hazard alleged. Formal complaints were inspected regardless of whether the hazard alleged was serious, although offices were given longer time frames for responding to those that were other than serious. The agency generally handled nonformal complaints by sending the employer a letter. Agency officials said that as a result of these distinctions, the agency was able to reduce some of its backlog.

A new effort to reform the complaint procedures was made through the Complaint Process Improvement Project, which was part of the Department of Labor’s overall reinvention effort from 1994 to 1996. In January 1994, two area offices were selected as pilot sites to develop and test new procedures for handling complaints. Their work focused on an effort to (1) reduce the time needed for handling complaints, (2) speed the abatement of hazards, (3) allow OSHA to focus its inspection resources on workplaces where they were needed most, and (4) ensure consistency. The new procedures placed a greater emphasis on the seriousness of the alleged hazard as a factor for determining how the office would respond to a complaint. In addition, they introduced the use of telephones and fax machines as the means to notify employers of an alleged hazard instead of regular mail and provided specific procedures for following up with employers to make sure hazards were abated. These new policies were adopted and outlined in an OSHA directive dated June 1996.

Policies regarding complaints are established by the Office of Enforcement Directorate in Washington, D.C. Regional administrators in each of OSHA’s 10 regional offices oversee the enforcement of these policies within their own regions (see fig. 1). Each region is composed of area offices—there are 80 in total—each under an area director. The area directors oversee compliance officers—there can be as many as 16 in an office—some of whom play a supervisory role.
Compliance officers play a key role in carrying out the directive. At almost all area offices, compliance officers take turns answering the phones, and taking and processing complaints, a collateral responsibility in addition to their duties in the field. OSHA primarily responds to complaints based on the seriousness of the alleged hazard using a priority system that the agency credits with having improved its efficiency. However, its determinations can be affected by inadequate or inaccurate information. OSHA officials usually conduct an on-site inspection if an allegation is of a serious nature. Agency policy also requires on-site inspections in cases where a written and signed complaint from a current employee or their authorized representative provides reasonable grounds to believe that the employer is violating a safety or health standard. In general, OSHA officials conduct an inquiry by phone and fax—referred to as a phone/fax investigation—for complaints of a less serious nature. Many OSHA officials, especially compliance officers, told us this priority-driven system has been more effective in conserving their time and resources. Nevertheless, many of the compliance officers also said that some inspections may occur that are not necessarily warranted because complainants have inadequately or inaccurately characterized the nature of the hazard. On the other hand, almost everyone with whom we spoke said the agency prefers to err on the side of caution so as not to overlook a potential hazard. Many of the OSHA officials we interviewed, as well as officials from states that run their own safety and health programs, suggested approaches to improve the validity of the information accompanying the complaints. According to policy, OSHA initially evaluates all incoming complaints (whether received by fax, e-mail, phone, letter, or in person) to decide whether to conduct an on-site inspection or a phone/fax investigation (see fig. 2). OSHA conducts on-site inspections for alleged serious violations or hazards and makes phone/fax inquiries for allegations of a less serious nature. OSHA considers serious violations or hazards to be those that allege conditions that could result in death or serious physical harm. Specifically, OSHA initiates on-site inspections when the alleged conditions could result in permanent disabilities or illnesses that are chronic or irreversible, such as amputations, blindness, or third-degree burns. As seen in figure 2, though, OSHA will also go on-site when a current employee or his representative provides a written and signed complaint that provides reasonable grounds for believing that a violation of a specific safety and health standard exists. While immediate risks to any employee’s health or safety are the primary factors driving OSHA’s complaint inspections, additional criteria can also prompt an on-site inspection. For example, if an employer fails to provide an adequate response to a phone/fax investigation, OSHA’s policy is to follow up with an on-site inspection. Area office supervisors or compliance officers may call the complainant, if needed, to help understand the nature of the hazard. OSHA officials told us they might ask complainants to estimate the extent of exposure to the hazard and report how long the hazard has existed. If an area office supervisor decides that an on-site inspection will be conducted, OSHA’s policy is to limit the inspection to the specific complaint. 
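The routing rule described above can be expressed as a short decision procedure. The sketch below is a minimal, hypothetical illustration in Python of the policy as the report describes it; the Complaint fields, the triage function, and the example values are assumptions for illustration, not OSHA's actual data model or criteria.

```python
# Minimal sketch of the complaint triage rule described above (illustrative only;
# field names and the decision function are hypothetical, not OSHA's systems).
from dataclasses import dataclass

@dataclass
class Complaint:
    alleges_serious_hazard: bool        # condition could cause death or serious harm
    written_and_signed: bool
    from_current_employee_or_rep: bool
    reasonable_grounds: bool            # supervisor's professional judgment

def triage(c: Complaint) -> str:
    """Route a complaint to an on-site inspection or a phone/fax investigation."""
    if c.alleges_serious_hazard:
        return "on-site inspection"
    if c.written_and_signed and c.from_current_employee_or_rep and c.reasonable_grounds:
        return "on-site inspection"
    return "phone/fax investigation"

# Example: an unsigned phone complaint alleging a less serious hazard
print(triage(Complaint(False, False, False, True)))  # phone/fax investigation
```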
A violation or another hazard that is in clear sight may be considered, but compliance officers cannot expand the scope of their inspection to look for other violations—a specification that underscores the importance of the complaint’s accuracy. Phone/fax investigations, meanwhile, afford an opportunity to resolve a complaint without requiring a compliance officer to visit the worksite. Instead, the compliance officer contacts the employer by telephone and notifies him or her of the complaint and each allegation. The employer is also advised that he or she must investigate each allegation to determine whether the complaint is valid. The employer can resolve the complaint, without penalty, by providing OSHA with documentation such as invoices, sampling results, photos, or videotape to show that the hazard has been abated. Upon receiving documentation from the employer, the area office supervisor is required to review it and determine whether the response from the employer is adequate. For both on-site inspections and phone/fax investigations, OSHA’s policy is to keep the complainants informed of events by notifying them by letter that an on-site inspection has been scheduled, the outcome of either the inspection or the phone/fax investigation, and the employer’s response. In the case of a phone/fax investigation, the complainant has the right to dispute the employer’s response and request an on-site inspection if the hazard still exists. OSHA can also determine that the employer’s response is inadequate and follow with an on-site inspection. Of the 15 officials who told us they worked for OSHA prior to 1996, and whom we asked about past practices, nearly half said the agency’s current complaint policy has allowed them to better conserve their resources. For example, one 26-year veteran said phone/fax investigations have relieved his compliance officers of traveling to every complaint site for inspections that once averaged as many as 400 per year. Because the employer investigates the allegation first, the phone/fax inquiry is an efficient use of time, according to this supervisor. Of the 20 compliance officers that we asked about this topic, 18 said phone/fax investigations took less time to conduct than on-site inspections. Nearly one-half of these compliance officers told us the phone/fax investigation procedures reduced travel time or eliminated time spent writing inspection reports. The agency handled about two-thirds of all complaints it received in fiscal years 2000 through 2002 through phone/fax investigations. Several OSHA officials we interviewed said OSHA’s phone/fax investigation procedures ease the burden on employers because the employers have an opportunity to resolve the problem. As a result, these officials told us that their interaction with employers has improved. While few of the employers we interviewed had the complaints against them resolved through phone/fax investigations, the three that did expressed satisfaction with the way the allegation was handled. These employers reported that responding to phone/fax investigations required 3 hours, 5 hours, and 2 to 3 days respectively. Only the employer reporting the greatest amount of time believed that the time he invested was inappropriate given the nature of the alleged hazard. 
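The resolution path for a phone/fax investigation lends itself to a similarly small rule. The following is a hedged sketch that assumes only two judgments drive the outcome, whether the supervisor finds the employer's documentation adequate and whether the complainant disputes the response; the function name and inputs are illustrative, not OSHA procedure verbatim.

```python
# Hypothetical sketch of how a phone/fax investigation is closed out, per the
# sequence described above; inputs and outcomes are simplified for illustration.
def resolve_phone_fax(response_adequate: bool, complainant_disputes: bool) -> str:
    if not response_adequate:
        return "follow up with on-site inspection"   # inadequate employer response
    if complainant_disputes:
        return "follow up with on-site inspection"   # complainant requests inspection
    return "close case and notify complainant by letter"

print(resolve_phone_fax(True, False))   # close case and notify complainant by letter
print(resolve_phone_fax(False, False))  # follow up with on-site inspection
```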
A 1995 internal OSHA report, which reviewed the new complaint procedures implemented in two area offices as part of a pilot project, also credited phone/fax investigations with improving efficiency, specifically by reducing the time it took to notify employers of alleged hazards and to correct them, as well as with reducing the offices’ complaint backlog. The report found that using phone/fax investigations reduced notification time by at least a week, reduced the average number of days to correct hazards by almost a month in the two offices, and eliminated one office’s backlog and reduced the other’s backlog by almost half during its involvement in the pilot project. The report attributed these gains to compliance officers being able to phone and fax employers to inform them of the allegations instead of relying on mail, promptly contacting employers to clarify allegations and to offer feasible methods for correcting hazardous conditions, and more employees choosing to have their complaints resolved with phone/fax investigations. More than half of the 20 nonsupervisory compliance officers we interviewed told us that complainants’ limited knowledge of workplace hazards and their reasons for filing complaints can affect the quality of the information they provide, which, in turn, can affect OSHA’s determination of the hazard’s severity. They said complainants generally have a limited knowledge of OSHA’s health and safety standards or may not completely understand what constitutes a violation; consequently, they file complaints without knowing whether a violation exists. As a result, the level of hazard can be overstated. For example, one nonsupervisory compliance officer said he received a complaint that alleged a construction company was violating the standards for protecting workers from a potential fall, but found upon arriving at the site that the scaffolding in question was well within OSHA’s safety standard. Over half of the nonsupervisory compliance officers (13 of 20) said that there were “some or great” differences between what complainants allege and what is ultimately found during inspections or investigations, because complainants may not completely understand what constitutes an OSHA violation or they have a limited knowledge of OSHA’s standards. Complainants’ limited knowledge of OSHA’s health and safety standards can also result in compliance officers not knowing which potential hazards to look for when conducting on-site inspections. For example, one compliance officer noted that employees might complain about an insufficient number of toilets but not about machinery on the premises that could potentially cause serious injury. In addition, another compliance officer noted that many times complainants’ descriptions of hazards are too vague, a circumstance that prevents her from locating the equipment that was alleged in the complaint, such as a drill press, and OSHA’s rules preclude her from expanding the scope of the inspection in order to locate the hazard. The quality of the information complainants provide to OSHA can also be influenced by their motives for filing a complaint. For example, half (27 of 52) of the area office directors and compliance officers we interviewed said they have received complaints from employees who filed them as retribution because they were recently terminated from their jobs or were angry with their employers. 
Although this practice was described as infrequent, OSHA officials said that in some instances complainants intentionally exaggerated the seriousness of the hazard or reported they were current employees when in fact they had been fired from their jobs. One official asserted that disgruntled ex-employees have taken advantage of OSHA’s complaint process to harass employers by having OSHA conduct an on-site inspection. Several of the employers we interviewed (4 of the 15) also claimed that disgruntled employees have used the complaint process to harass them. They expressed the view that OSHA should improve its procedures for evaluating the validity of complaints. Some of the compliance officers we interviewed said it is not unusual to experience an increase in the number of complaints during contract negotiations. One official told us that in a region where he once worked, union workers filed multiple complaints in order to gain leverage over the employer. A union official acknowledged that this occurred but noted that it was infrequent. Other OSHA officials told us that competitors of companies sometimes file complaints when they lose a competitive bid for a work contract. One official said that while company representatives do file complaints against each other to disrupt the other company’s work schedule, such tactics are not typical in his region. Despite these problems, several of the OSHA officials we interviewed said OSHA’s obligation is to evaluate whether there are reasonable grounds to believe that a violation or hazard exists, rather than trying to determine a complainant’s motives for filing the complaint. In fact, 34 of the 52 officials we interviewed told us that almost all of the complaints they see warrant an inspection or an investigation, and as a result, many of the area offices inspect or investigate most of the complaints that are filed. One official said he would prefer to conduct an inspection or do a phone/fax investigation for an alleged hazard, rather than not address the complaint and have it result in a fatality. When asked during interviews about ways OSHA could improve its process for handling complaints, officials from OSHA and from states that run their own health and safety programs suggested approaches the agency could take to improve the information they receive from complainants. Although some offices were actively engaging in these practices, others reported that they were being used only to some or little extent. Their recommendations were of three types; the first was in regard to strategies for improving the validity of complaints that OSHA considers. Many OSHA area directors and compliance officers said the agency could warn complainants more explicitly of penalties for providing false information, which could be as much as $10,000 or imprisonment for as long as 6 months, or both. This warning is printed as part of the instructions on the complaint form available on OSHA’s Web site. However, OSHA’s complaint policies and procedures directive states that area offices will not mail the form to complainants; consequently, complainants primarily receive the penalty warning only if they access the Web-based form. In contrast, an official from one of the state programs reported that his state’s program requires complainants to sign a form with penalty information printed in bold above the signature line. According to the state official, this policy has reduced by half the number of invalid complaints. 
Several OSHA supervisors and directors expressed reservations about having compliance officers make verbal warnings to complainants about providing false information while taking their complaints, saying it could prevent some complainants who are already fearful from reporting hazards. Of the 52 OSHA officials we interviewed, 23 said the extent to which they remind complainants of the penalty for providing false information is “little or none at all.” Furthermore, several officials said complainants report hazards based on a perceived violation; therefore, they doubted a hazard that turned out to be invalid would result in a penalty. To further improve the validity of complaints, one official pointed to his state’s practice of generally conducting on-site inspections only for a current employee or an employee’s representative. According to the state health and safety official, this policy improves the validity of information because current employees can more accurately describe the hazard than an ex-employee who has been removed from the environment for some time and whose relationship with the employer may be strained. Another state’s health and safety official said her state has a policy that allows its managers to decline any complaint they determine is intended to willfully harass an employer, which also helps improve the reliability of complaints. According to this official, however, managers seldom find that a complaint was filed to willfully harass an employer. The state also has a policy that allows managers to dismiss any complaint they determine is without any reasonable basis. A second approach suggested by many OSHA officials was to improve complainants’ ability to describe hazards accurately. Of the 52 officials that we interviewed, 14 said OSHA could, for example, conduct more outreach to educate both employees and employers about OSHA’s health and safety standards. Although OSHA area offices already participate in outreach activities, such as conducting speeches at conferences or making presentations at worksites, several of the officials we interviewed said the agency could do more. For example, one compliance officer suggested developing public service announcements to describe potential hazards, such as trenches without escape ladders, and to provide local OSHA contact information for reporting such hazards. One official expressed the opinion that if OSHA were to conduct more outreach to employees, the quality of complaints would likely improve. Another compliance officer suggested that OSHA engage in more preconstruction meetings with employers to discuss OSHA’s regulations and requirements and share ideas for providing safer working environments. One interviewee said if employers were more knowledgeable about hazards, there would be less need for workers to file complaints. Finally, OSHA officials said the agency could take steps to improve the ability of employers and employees to resolve complaints among themselves before going to OSHA. Many of the officials that we interviewed said their offices could encourage employers to form safety committees or other internal mechanisms to address safety concerns. Ten of the 52 officials we interviewed told us the extent to which their offices promote or encourage safety committees was “little to none at all.” Only some of these officials said that this lack of promotion stemmed from the requirements of the National Labor Relations Act (NLRA), which some believe may prohibit or hinder the establishment of safety committees. 
OSHA’s policy for responding to complaints requires compliance officers to address complaints in a systematic and timely manner; however, we found that the practices area offices used to respond to complaints varied considerably. While some of these practices involved departures from OSHA policy, others varied to such a degree that they could result in inconsistent treatment of complainants and employers. In particular, we found several instances where area offices departed from the directive by persuading complainants to choose either an on-site inspection or a phone/fax investigation, and by having nonsupervisory compliance officers evaluate complaints. We also found several instances where practices were inconsistent. Among the 42 offices we contacted, we found that some conducted follow-up inspections on a sample of closed investigation cases to verify employer compliance, and others did not. Since issuing its new directive for handling complaints in 1996, however, OSHA has issued no guidance to reinforce, clarify, or update those procedures. In addition, while OSHA requires its regional administrators to annually audit their area office operations, some administrators do not, and further, for those who do, OSHA does not have a mechanism in place to review the results and address problems on an agencywide level. In our interviews with 52 randomly selected supervisory and nonsupervisory officials in 42 of the 80 area offices, we found practices that appeared to depart from OSHA’s official policies. In particular, agency policy calls for supervisors to evaluate each complaint. However, 22 of the 52 officials with whom we talked said nonsupervisory compliance officers in their offices are sometimes the decision makers for whether complaints are inspected or pursued through phone/fax investigations. In some of these offices, compliance officers make the decision if the complaint is less than serious. In addition, some officials told us that if the case was earmarked for an inspection or was challenging, the supervisor would then review it. While OSHA’s directive addresses supervisory review within the context of inspections, an OSHA national director informed us that it is agency policy to have supervisors review each and every complaint. In addition, agency policy prescribes that compliance officers explain to complainants the relative advantages of both phone/fax investigations and inspections, if appropriate. However, 16 of the 52 officials to whom we spoke said they encourage complainants, in certain circumstances, to seek either an inspection or an investigation. For example, one official said that his office “sells” phone/fax investigations because they are faster to conduct and lead to quicker abatement than on-site inspections. However, an OSHA national director stressed to us that duty officers should not attempt to persuade complainants. Another practice that appeared inconsistent with policy was the treatment of written, signed complaints. Current employees and their representatives have the right to request an inspection by writing and signing a complaint, but before an inspection may take place, OSHA must determine that there are reasonable grounds for believing that a violation of a safety or health standard or a real danger exists. Area office supervisors are to exercise professional judgment in making this determination. Of the 52 officials with whom we spoke, 33 said their offices exercise professional judgment by evaluating written and signed complaints.
However, most of the remainder were about equally split in reporting that they evaluate these complaints “sometimes” (7 of 52) or forgo evaluation altogether and automatically conduct on-site inspections (8 of 52). Finally, while we found that complaint policy was generally followed at the three OSHA offices where we reviewed case files, we did find that one office had not been sending a letter to complainants to notify them of a scheduled inspection. According to the OSHA directive, complainants should be notified of inspections. During telephone interviews, officials described practices that, while they did not depart from agency policy, varied significantly from office to office. For example, offices differed in whether they treated e-mails as phone calls or as written and signed complaints. Of the 52 officials with whom we spoke, 12 said they treated complaints received via e-mail as written and signed complaints, while 34 said they treated them as phone complaints. While agency policy is silent on how to classify e-mail complaints, this inconsistency is important because written and signed complaints are more likely to result in on-site inspections. Offices also differed in whether or not they performed random follow-up inspections for phone/fax investigations. While 10 of the 52 officials said they did not know if their offices conducted follow-up inspections, most of the remainder were about equally split in reporting that either they did (18 of 52) or did not (20 of 52) do them. Although the directive does not require follow-up inspections, the OSHA letters sent to employers say they may be randomly selected for such inspections. This inconsistency in practice across offices is significant insofar as follow-up inspections can be seen either as an added burden to employers or as an important safeguard for ensuring abatement. We also found variation in how offices determined whether a complainant was a current employee. The employment status of a complainant is important, as it is often a factor in evaluating the complaint. Of the 52 OSHA officials with whom we spoke, 30 said their offices determine whether a complainant is a current employee simply by asking the complainant; 11 said they asked probing questions of the complainant; and 5 said they asked the complainant for some type of documentation, such as a pay stub. While the directive does not specify how compliance officers are to verify employment status, the methods used to obtain this information can affect its accuracy. Finally, we found that some area offices differ significantly in how they respond to complaints for which OSHA has no standard, specifically those involving substance abuse in the workplace. For example, during a site visit to one area office, an official explained that his office would not do a phone/fax investigation in response to complaints alleging drug use at a workplace, but would refer them to the police instead. However, another area office conducted a phone/fax investigation for a complaint about workers drinking alcoholic beverages while operating forklifts and mechanical equipment. An official in a third area office told us that his office has sometimes referred complaints about drug use at a workplace to the local police and at other times has responded to similar complaints with a phone/fax investigation. An OSHA national director told us that area offices are obligated to do phone/fax investigations for alleged drug use in the workplace.
OSHA policy requires that regional administrators annually audit their area offices and that audit results be passed on to the Assistant Secretary. However, this is not current practice. Regional administrators are required to focus the audits on programs, policies, and practices that have been identified as vulnerabilities, including the agency’s complaint-processing procedures. However, according to OSHA’s regional administrators, only 5 of the agency’s 10 regions conduct these audits annually, while 3 conduct the audits, but only for a proportion of their area offices each year, and 2 do not conduct the annual audits at all. In addition, according to one national director, all of the regional administrators are to submit the results of their audits to a Program Analyst in the Atlanta area office for review. The results of this review are to be reported to the Deputy Assistant Secretary for Enforcement, as well as to the responsible directorate, and they are responsible for addressing issues of noncompliance and determining what, if any, policy changes are needed. However, the Program Analyst in Atlanta said he does not receive all of the audits from each region as required, and an official from one of OSHA’s directorates told us his office does not receive such reports. The findings from the seven audits we reviewed underscore their value for monitoring consistency. These audits showed that most of the audited offices were (1) not correctly following procedures for meeting the time frames for initiating on-site inspections, (2) closing phone/fax investigation cases without obtaining adequate evidence that hazards had been corrected, and (3) not including all required documentation in the case files. To some extent, complaints have drawn OSHA compliance officers to sites with serious hazards. According to OSHA’s data for fiscal years 2000 and 2001, compliance officers found serious violations at half the worksites inspected in response to complaints, a figure comparable to that for inspections conducted at worksites targeted for their high injury and illness rates. However, in one of our earlier reports, we expressed concern that for targeted inspections a 50 percent success rate may raise questions about whether inspection resources are being directed at sites with no serious hazards. Complaint-driven inspections shared other similarities with planned inspections; specifically, compliance officers cited similar standards during both types of inspections. On the other hand, complaint inspections often required more time to complete. Finally, we found a correlation between hazardous industries and complaint inspections. Specifically, those industries that, according to BLS data, had more injuries and illnesses also generally had a larger number of complaint inspections, according to OSHA data. OSHA compliance officers found serious violations in half of the worksites they inspected when responding to complaints alleging serious hazards, according to OSHA’s data for fiscal years 2000 and 2001 combined. These are hazards that pose a substantial probability of injury or death. During some planned inspections—those conducted at worksites targeted for their high injury and illness rates—OSHA compliance officers found serious violations, such as those involving respiratory protection and control of hazardous energy, in a similar percentage of worksites.
Specifically, as shown in table 1, OSHA compliance officers found serious violations in 50 percent of the 17,478 worksites they inspected during complaint-driven inspections. Likewise, they found serious violations in 46 percent of the 41,932 worksites they targeted during planned inspections. In a previous report, we noted that this percentage might indicate that inspection resources are being directed to worksites without serious hazards. According to OSHA, many complaints come from the construction industry, where the work is often dangerous and of a short duration. As a result, even if an inspection begins immediately, “citable” circumstances may no longer exist, a fact that, according to the agency, might explain why the number of serious violations that result from complaints is not higher. We found that, in contrast to planned inspections, complaint-driven inspections require, on average, more hours per case to complete. Table 2 shows that OSHA compliance officers have required about 65 percent more time for complaint-driven inspections in comparison to planned inspections—29.7 hours on average compared with 18.1 hours—suggesting that while outcomes are similar, complaint-driven inspections are more labor intensive than planned inspections. Compared with planned inspections, complaint-driven inspections have a higher rate of health inspections, which, according to an OSHA national director, place extra time demands on compliance officers to obtain samples, test them, and document the results. Phone/fax investigations, by contrast, require, on average, far less time than either complaint-driven or planned inspections. In terms of the types of hazards they uncover, complaint-driven inspections shared some similarities with planned inspections that target the most hazardous sites. Of the 10 standards OSHA compliance officers cited most frequently for violations during complaint-driven inspections, 7 were also among the 10 most frequently cited during planned inspections. Table 3 shows the rank ordering of hazards cited most frequently during planned inspections and complaint-driven inspections. However, table 3 also shows that there were some differences in the frequency with which compliance officers cited particular hazards during planned inspections, compared with complaint-driven inspections. For example, the standard most frequently cited during planned inspections, general requirements for scaffolds, is the 18th most frequently cited standard during complaint-driven inspections. Likewise, the standard cited with the second highest frequency in planned inspections, “fall protection,” is not within the 10 standards most frequently cited for complaint-driven inspections. Such examples indicate that some differences exist in the type of hazards compliance officers found at worksites about which workers have complained and at those OSHA targeted for inspection. Our analysis found a correlation between injuries and illnesses reported in industries and the rate at which complaints were inspected. As shown in figure 3, industries associated with higher rates of injuries and illnesses also tended to have a higher rate of complaint inspections than did industries with lower injury and illness rates, according to OSHA’s data. For example, one industry, transportation equipment, had 12.6 injuries and illnesses per 100 full-time workers in 2001 and had a relatively high rate of complaint inspections, .016 per 100 full-time workers.
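The time comparison above is simple arithmetic on the reported averages. The sketch below, using the figures from tables 1 and 2 as given in the report, is only an illustrative check; the roughly 64 percent it computes is consistent with the report's rounded figure of about 65 percent, and the worksite counts it derives are implied approximations rather than figures taken from the tables.

```python
# Back-of-the-envelope check of the averages and percentages cited above.
complaint_hours, planned_hours = 29.7, 18.1
extra_time = (complaint_hours - planned_hours) / planned_hours
print(f"Complaint-driven inspections take about {extra_time:.0%} more time")  # ~64%

# Implied (approximate) counts of worksites with serious violations, derived from
# the reported percentages; these are not figures printed in the tables themselves.
complaint_sites, planned_sites = 17_478, 41_932
print(f"Complaint-driven: about {0.50 * complaint_sites:,.0f} of {complaint_sites:,} worksites")
print(f"Planned:          about {0.46 * planned_sites:,.0f} of {planned_sites:,} worksites")
```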
Conversely, the motion picture industry, which had only 2.5 injuries and illnesses per 100 full-time workers in 2001, had a relatively low incidence rate for complaint inspections, .0015 complaint inspections per 100 full-time workers. For a handful of industries, the pattern of high injury and illness rates associated with high complaint inspection rates did not apply. For these industries, the number of complaint inspections per 100 full-time workers was either far higher or far lower than might have been expected given the number of injuries and illnesses per 100 full-time workers. For example, the air transport industry had the highest injury and illness rate for 2001, but its complaint inspection rate was lower than those for all but 1 of the 10 industries with the highest injury and illness rates. In another example, while the general building contractors industry had the highest complaint inspection rate of any industry, over a third of all industries had higher injury and illness rates. Table 4 shows industries that were highest or lowest in terms of injuries and illnesses and their corresponding rates of complaint inspections. Since 1975, OSHA has had to balance two competing demands: the need to use its inspection resources efficiently and the need to respond to complaints about alleged hazards that could seriously threaten workers’ safety and health. In light of this ongoing challenge, OSHA has adopted complaint procedures that, according to agency officials, have helped OSHA conserve its resources and promptly inspect complaints about serious hazards. Nonetheless, in deciding which complaints to inspect, OSHA officials must depend on information provided by complainants whose motives and knowledge of hazards vary. Many OSHA officials do not see the quality of this information as a serious problem. However, considering that serious violations were found in only half of the workplaces OSHA officials inspected when responding to complaints, it seems likely that the agency, employers, and workers could all be better served if OSHA improved the quality of information it receives from complainants. When OSHA conducts inspections of complaints based on incomplete or erroneous information, it potentially depletes inspection resources that could have been used to inspect or investigate other worksites. In addition, employers may be forced to expend resources proving that their worksites are safe when no hazard exists. OSHA should certainly not discourage workers from making complaints or pursuing a request for an OSHA inspection. Indeed, the correlation we found between those industries designated as hazardous and those that generate complaint inspections suggests that using complaints to locate hazardous worksites is a reasonable strategy for the agency to pursue. However, to the extent that OSHA officials could glean more accurate information from complainants, such as by deterring disgruntled employees from misrepresenting hazards or their employment status, the agency could benefit in several ways. With better information, OSHA could better conserve its inspection resources, minimize the burden on employers, and further enhance the agency’s credibility in the eyes of employers. In addition, if the strategies described by OSHA officials as effective means to improve the quality of complaints are not being fully utilized, OSHA may miss opportunities to maximize the efficiency of its complaint process.
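The correlation discussed above can be illustrated with a small calculation across industries. In the sketch below, only the transportation equipment and motion picture values come from the report; the remaining rows are made-up placeholders, and the computation is an example of the general approach rather than GAO's actual analysis.

```python
# Illustrative Pearson correlation between industry injury/illness rates and
# complaint inspection rates (both per 100 full-time workers).
from math import sqrt

rates = {
    "transportation equipment": (12.6, 0.016),   # values cited in the report
    "motion pictures":          (2.5, 0.0015),   # values cited in the report
    "hypothetical industry A":  (8.0, 0.009),    # placeholder
    "hypothetical industry B":  (5.5, 0.004),    # placeholder
}

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

injury, inspection = zip(*rates.values())
print(f"Pearson r = {pearson(injury, inspection):.2f}")  # close to 1 for these values
```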
Some variation in how OSHA officials respond to complaints is inevitable, particularly considering that there are 80 area offices with as many as 16 compliance officers in each office. Nevertheless, the inconsistencies that we found have ramifications when considering the size of the agency and the judgment that comes into play when handling complaints. Moreover, OSHA has much to gain by upholding a reputation for fairness among employers. When employers buy into OSHA’s standards and comply voluntarily, the agency can better use its 1,200 compliance officers to ensure worker safety at the more than 7 million worksites nationwide. However, OSHA’s credibility could be damaged by procedural inconsistencies if, for example, they resulted in different treatment and disposition of similar complaints. While OSHA requires regional audits for monitoring consistency, the failure to make full use of the audit results limits the agency’s ability to ensure consistency, one of the underlying principles of its complaint policy. We are making recommendations that the Secretary of Labor direct the Assistant Secretary for Occupational Safety and Health to instruct area offices to pursue practices to improve the quality of information they receive from complainants, such as reminding complainants of the penalties for providing false information, conducting outreach to employees regarding hazards, and encouraging employers to have safety committees that could initially address complaints. We are also recommending that the Secretary direct the Assistant Secretary for Occupational Safety and Health to take steps to ensure that area offices are consistently implementing the agency’s policies and procedures for handling complaints. As a first step, the agency should update and revise the 1996 directive. In revising the directive, the agency should update and clarify how complainants are advised of the process, how written and signed complaints are evaluated, how to verify the employment status of complainants, how to treat e-mail complaints, and how to address complaints involving hazards for which the agency has no specific standard. In addition, we are recommending that the Secretary direct the Assistant Secretary for Occupational Safety and Health to develop a system for ensuring that the regions complete audits and develop a system for using the audit results to improve the consistency of the complaint process. We received comments on a draft of this report from Labor. These comments are reproduced in appendix II. Labor also provided technical clarifications, which we incorporated where appropriate. Although Labor recognized in its comments that most complaints are anonymous and unsigned—a fact that makes it difficult to find employees to obtain their views about the complaint process—the agency recommended that we acknowledge in the report the limited number of employees we interviewed. At the beginning of the report and again at the end, we acknowledged that we interviewed 6 employees. Further, Labor questioned whether the number of employees we interviewed was an adequate number on which to base the conclusions reached in this report. Our conclusions about OSHA’s complaint process were not based solely on employee interviews but were based on a variety of data, including interviews with 52 OSHA officials.
In determining which OSHA officials to interview, we deliberately included area directors, assistant area directors, and compliance officers, which resulted in our obtaining information from officials at various levels in 42 of OSHA’s 80 area offices. Labor also noted that our findings from OSHA’s database, which showed that only half of complaint inspections result in citations for serious violations, do not recognize that many complaints come from the construction industry, where the work is often dangerous and of a short duration, so that even if an inspection begins immediately, “citable” circumstances may no longer exist. We added language to the body of the report to reflect this information. In responding to our first recommendation about improving the quality of information received through complaints, Labor stated that OSHA has taken many steps, both in its online and office-based complaint-taking procedures, to provide guidance to employees to ensure that all complaints are valid and accurate. We maintain, however, that OSHA can do more to improve the validity and accuracy of the complaints it receives. Labor did not comment on our recommendations that OSHA develop a system for ensuring that the regions complete audits of the complaint process and for using the results of these audits to improve the consistency of the process. We will make copies of this report available upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or any of your staff has any questions about this report, please contact me at (202) 512-7215 or Revae Moran, Assistant Director, at (202) 512-3863. Our criteria for selecting sites to visit were geographical diversity and volume of complaints. We received data from the Occupational Safety and Health Administration (OSHA) regarding the number of complaints each of its area offices processed in 2000, 2001, and 2002. On the basis of these data, we selected the three sites with the largest number of complaints processed in their respective regions and that roughly approximated the eastern, southern, and western regions of the country. Those sites were Pittsburgh, Pennsylvania; Austin, Texas; and Denver, Colorado. In each of these offices, we examined a statistical sample of case files. We used a standard set of questions, pretested on case files in the Philadelphia, Pennsylvania office, to conduct the case file reviews. In addition, we interviewed compliance officers—both supervisory and nonsupervisory. We randomly selected 38 cases in Denver, 30 cases in Austin, and 34 cases in Pittsburgh from the available list of complaint files processed by these offices in 2000, 2001, and 2002. Austin and Pittsburgh had disposed of their case files for phone/fax investigations for 2000, according to area directors there, who said this was allowed by agency rules for how long files must be kept. As a result, our random selections for Austin and Pittsburgh were drawn from lists that did not include phone/fax investigations for 2000. In addition to our site visits, using standard sets of questions, we interviewed by telephone randomly selected area directors, assistant area directors, and compliance officers in 42 area offices. We obtained from OSHA a list of area directors, assistant area directors (who are supervisory compliance officers), compliance officers, and regional administrators.
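The case-file sampling described above amounts to drawing a simple random sample from each office's list of complaint files. The sketch below mirrors the sample sizes reported (38, 30, and 34); the case identifiers and list lengths are invented placeholders, and the fixed seed is used only so the example run is reproducible.

```python
# Hypothetical sketch of simple random sampling of complaint case files.
import random

case_lists = {
    "Denver":     [f"DEN-{i:04d}" for i in range(1, 301)],  # placeholder IDs
    "Austin":     [f"AUS-{i:04d}" for i in range(1, 251)],
    "Pittsburgh": [f"PIT-{i:04d}" for i in range(1, 281)],
}
sample_sizes = {"Denver": 38, "Austin": 30, "Pittsburgh": 34}

rng = random.Random(2004)  # fixed seed for a reproducible illustration
samples = {office: rng.sample(files, sample_sizes[office])
           for office, files in case_lists.items()}

for office, picked in samples.items():
    print(office, len(picked), picked[:3], "...")
```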
We randomly selected 20 of the agency’s 80 area directors and 32 of its 1,200 compliance officers (12 assistant area directors and 20 nonsupervisory compliance officers). We also interviewed officials in all 10 regional offices. Additionally, we conducted telephone interviews with health and safety officials from 13 states that operate health and safety programs apart from OSHA. We selected these 13 states, in part, based on discussions with OSHA. In addition to OSHA officials, we also interviewed employers whose worksites were the subject of a complaint and employees who had filed complaints. OSHA provided us with a database of all employers who in 2000, 2001, or 2002 had worksites that were the subject of complaints and employees who had filed complaints in the same year. From the database, we randomly selected 90 employers and 90 employees. We took steps to make sure that employers’ and employees’ contact information was kept separate from their identity and any information collected from them during their interviews. We also obtained a guarantee of confidentiality from the report’s requester. Of the 90 employers randomly selected, we succeeded in interviewing 15. Of the 90 employees, we succeeded in interviewing 6. Some of the employee complaints randomly selected had been filed anonymously, so contact information was not available. In most cases, those selected could not be reached. Finally, we examined data for fiscal years 2000 through 2002 related to complaints in OSHA’s Integrated Management Information System (IMIS) and looked at data on injuries and illnesses collected and published by the Bureau of Labor Statistics (BLS) for calendar year 2001 as they related to complaints. In addition, for the IMIS data we obtained and reviewed documentation of internal controls and manually tested the data. We interviewed both OSHA and BLS officials to establish the reliability of the data. We found the data to be reliable for our purposes. The following are GAO comments on Labor’s letter dated May 21, 2004. 1. We rephrased our recommendations to reflect Labor’s administrative procedures. 2. Our conclusions are based on site visits to 3 area offices processing large numbers of complaints, reviews of case files in those offices, interviews with 52 OSHA officials—area directors, assistant area directors, and compliance officers—who represented 42 of OSHA’s 80 area offices, interviews with officials in all 10 of OSHA’s regional offices, interviews with the director of the Office of Enforcement, interviews with officials in 13 states that have their own safety and health programs, analysis of data on complaints from OSHA’s Integrated Management Information System, analysis of BLS data on injuries and illnesses, interviews with 15 employers whose companies were the subject of complaints, interviews with 6 employees who filed complaints, and the review of agency documents related to the complaint process. In the appendix on scope and methodology, we corrected the number of employee interviews, changing it to 6 from 8. 3. We have included the agency’s explanation in the final version of the report. 4. We added a note to table 4 acknowledging that OSHA’s jurisdiction is limited in the transportation area and corrected the source of the data in the table. 5. 
On the basis of our interviews with OSHA officials who said the agency could do more to improve the quality of information received from complainants, we continue to believe that adopting our recommendation would help the agency better manage its inspection resources. Moreover, we believe that the agency could take such actions without discouraging employees from filing legitimate complaints. Carl Barden, Sue Bernstein, Karen Brown, Amy Buck, Patrick di Battista, Barbara Hills, Mikki Holmes, Cathy Hurley, Julian Klazkin, Jim Lawrence, Luann Moy, Corinna Nicolaou, Sid Schwartz, and Michelle Zapata made key contributions to this report.

Workplace Safety and Health: OSHA's Voluntary Compliance Strategies Show Promising Results, but Should Be Fully Evaluated Before They Are Expanded. GAO-04-378, March 19, 2004.
Workplace Safety and Health: OSHA Can Strengthen Enforcement through Improved Program Management. GAO-03-45, November 22, 2002.
Worker Protection: Labor's Efforts to Enforce Protections for Day Laborers Could Benefit from Better Data and Guidance. GAO-02-925, September 26, 2002.
Workplace Safety and Health: OSHA Should Strengthen the Management of Its Consultation Program. GAO-02-60, October 12, 2001.
Worker Protection: OSHA Inspections at Establishments Experiencing Labor Unrest. HEHS-00-144, August 31, 2000.
Occupational Safety and Health: Federal Agencies Identified as Promoting Workplace Safety and Health. HEHS-00-45R, January 31, 2000.
Each year, OSHA receives thousands of complaints from employees alleging hazardous conditions at their worksites. How OSHA responds to these complaints--either by inspecting the worksite or through some other means--has important implications for both the agency's resources and worker safety and health. Responding to invalid or erroneous complaints would deplete inspection resources that could be used to inspect or investigate other worksites. Not responding to complaints that warrant action runs counter to the agency's mission to protect worker safety and health. Considering OSHA's limited resources, and the importance of worker safety, GAO was asked: (1) What is OSHA's current policy for responding to complaints in a way that conserves its resources, (2) how consistently is OSHA responding to complaints, and (3) to what extent have complaints led OSHA to identify serious hazards? In general, the Occupational Safety and Health Administration (OSHA) responds to complaints according to the seriousness of the alleged hazard, a practice that agency officials say conserves inspection resources. OSHA officials usually conduct on-site inspections for alleged hazards that could result in death or serious injury. For less serious hazards, OSHA officials generally investigate by phoning employers and faxing them a description of the alleged hazard. Employers are directed to provide the agency with proof of the complaint's resolution. OSHA officials said the availability of both options allows them to manage resources more effectively when responding to complaints. However, many agency officials we interviewed said some complainants provide erroneous information about the alleged hazard, which can affect the agency's determination of the hazard's severity. For example, some complainants lack the expertise to know what is truly hazardous and, as a result, file complaints that overstate the nature of the hazard. Others, particularly disgruntled ex-employees, may have ulterior motives when filing complaints and misrepresent the nature of the hazard. In the 42 area offices where we conducted interviews (there are 80 area offices), OSHA officials described practices for responding to complaints that varied considerably. For example, the degree to which supervisors participated in decisions about which complaints would result in inspections and which would not varied across offices. While OSHA requires annual audits that would identify the extent to which its area offices are correctly employing the complaint policies, some regions are not conducting these audits, and agency officials have told us that OSHA does not have a mechanism in place to address agencywide problems. To some extent complaints direct inspection resources where there are serious hazards. At half the worksites OSHA inspected in response to complaints, compliance officers found serious violations--those that posed a substantial probability of injury or death, according to OSHA's own data for fiscal years 2000-2001.
General aviation encompasses a wide variety of activities, aircraft types, and airports. About 85 percent of all general aviation hours flown fall into one of five categories of flying activity, as defined by FAA and described in figure 1. The largest of these categories is recreational flying, which is defined as flying for pleasure or personal transportation and not for business purposes. In 2002, recreational flying accounted for about 41 percent of all general aviation hours flown. The remaining categories include activities such as medical services, aerial advertising, aerial mapping and photography, and aerial application of seeds or chemicals. Various types of aircraft can be used in general aviation operations, including single-engine and multi-engine piston aircraft, turboprops, turbojets, helicopters, gliders, and experimental aircraft. The general aviation fleet in the United States consists of about 211,000 active aircraft. While this fleet is diverse, certain activities are generally associated with specific types of general aviation aircraft. For example, corporate flying generally involves the use of turboprop and turbojet aircraft, while personal and instructional flying generally involves the use of single-engine propeller-driven aircraft. The largest category of general aviation aircraft is single-engine propeller, which in 2002 made up 68 percent of the general aviation fleet. Types of general aviation aircraft and their uses are described in figure 2. There are approximately 14,000 private-use and 4,800 public-use general aviation airports in the United States, and about 550,000 active general aviation pilots and instructors. Non-U.S. citizens can also possess active student pilot certificates in the United States, according to FAA. Although general aviation aircraft can take off and land at almost any airport, including most of the nation’s commercial service airports, there is an extensive system of general aviation airports nationwide. Figure 3 identifies the categories of airports in the United States. Public-use general aviation airports can range in size and complexity from the short, grass landing strip in rural areas to the very busy urban airports with multiple paved runways of differing lengths that can accommodate large jet aircraft. Figure 4 illustrates examples of a rural general aviation airport with a grass landing strip and a more complex urban general aviation airport. General aviation industry interests are represented by a variety of national organizations. One of the functions of these organizations is disseminating information from federal agencies to their members. These associations also provide their members with security best practices and recommendations tailored to their members’ specific needs. Table 1 provides an overview of some of the largest industry associations and their role in general aviation. Prior to the passage of the Aviation and Transportation Security Act in November 2001, FAA had primary responsibility for securing all civil aviation, including general aviation. Although the act transferred much of that responsibility from FAA to TSA, FAA maintains a security role because of its regulatory authority over the imposition of temporary flight restrictions (TFR) and its disbursement of grants to fund safety and security enhancements at commercial and general aviation airports.
Most of the civil aviation security regulations TSA assumed from FAA did not apply to general aviation, but rather to commercial passenger air carriers and commercial airports. Although the security of general aviation airports remains largely unregulated, the Aviation and Transportation Security Act and subsequent laws required TSA to develop additional regulations that affect specific segments of general aviation—flight training schools and certain charter flight operations. Among other things, with regard to all modes of transportation, the Aviation and Transportation Security Act also required TSA to receive, assess, and distribute intelligence information related to transportation security; assess threats to transportation security and develop policies, strategies, and plans for dealing with those threats, including coordinating countermeasures with other federal organizations; enforce security-related regulations and requirements; and oversee the implementation, and ensure the adequacy, of security measures at airports and other transportation facilities. TSA and other federal agencies have not conducted an overall, systematic assessment of threats to, or vulnerabilities of, general aviation to determine how to better prepare against terrorist threats. However, in July 2003, TSA issued a limited assessment of threats associated with general aviation activities. In addition, the FBI stated that intelligence indicates that terrorists have considered using general aviation aircraft in the past to conduct attacks. To determine vulnerabilities, TSA conducted vulnerability assessments at some general aviation airports based on specific security concerns or requests by airport officials, and has conducted less intensive security surveys at selected general aviation airports. To better focus its efforts and resources, TSA intends to implement a risk management approach to assess the threats and vulnerabilities of general aviation aircraft and airports, and conduct on-site vulnerability assessments only at those airports the agency determines to be nationally critical. However, TSA has not yet developed a plan with specific milestones for implementing these tools and assessments. While TSA has partnered with industry associations to develop security guidelines for general aviation airports and communicate threat information to airport operators, we found limitations in the communication of threat information. Industry and state aviation officials we spoke with stated that security advisories distributed by TSA were general in nature and were not consistently received. Risk communication principles provide that specific information on potential threats include—to the extent possible—the nature of the threat, when and where it is likely to occur, over what time period it is likely to occur, and guidance on actions to be taken. Applying these principles presents problems for TSA because, among other things, the agency receives threat information from other federal agencies and that information is often classified. Neither TSA nor FBI has conducted an overall systematic assessment of threats to, or vulnerabilities of, general aviation to determine how to better prepare against terrorist threats. In July 2003, TSA issued a brief summary assessment of the threats associated with general aviation. However, the assessment was not widely distributed or made available to general aviation airports or other stakeholders.
In 2004, the Secretary of the Department of Homeland Security acknowledged that the department, along with the Central Intelligence Agency (CIA), FBI, and other agencies, lacked precise knowledge about the time, place, and methods of potential terrorist attacks related to general aviation. Additionally, industry and TSA officials stated that the small size, lack of fuel capacity, and minimal destructive power of most general aviation aircraft make them unattractive to terrorists and, thereby, reduce the threat associated with their misuse. Historical intelligence indicates that terrorists have expressed interest in using general aviation aircraft to conduct attacks. The following are examples of intelligence information indicating terrorist interest in general aviation: CIA reported that terrorists associated with the September 11 attacks expressed interest in the use of crop-dusting aircraft (a type of general aviation aircraft) for large area dissemination of biological warfare agents such as anthrax. CIA reported that one of the masterminds of the September 11 attacks originally proposed using small aircraft filled with explosives to carry out the attacks. In May 2003, the Department of Homeland Security issued a security advisory indicating that al Qaeda was in the late stages of planning an attack, using general aviation aircraft, on the U.S. Consulate in Karachi, Pakistan, and had also planned to use general aviation aircraft to attack warships in the Persian Gulf. TSA and industry stakeholders we spoke with stated that general aviation airports are vulnerable to terrorist attack. TSA officials also stated that it would be difficult for the agency to systematically conduct on-site assessments of the vulnerabilities of individual general aviation airports to terrorist activities because of the diversity and large number of airports. Officials cited the nearly 19,000 general aviation airports nationwide, noting that each has distinct characteristics that may make it more or less attractive to potential terrorists. TSA’s efforts to assess vulnerabilities at specific general aviation airports have been limited. At the time of our review, TSA had conducted vulnerability assessments at selected general aviation airports based on specific security concerns or requests by airport officials. TSA officials stated that the resources associated with conducting vulnerability assessments, and the diverse nature of general aviation airports, make it impractical to conduct assessments at the approximately 19,000 general aviation airports nationwide, or even the approximately 4,800 public-use general aviation airports. TSA officials said, however, that they had conducted a less intensive security survey at additional general aviation airports. TSA selected these airports in preparation for, among other things, special security events such as the G-8 summit and the national Republican and Democratic political conventions. In response to industry requests for federally endorsed security protocols, TSA issued security guidelines in May 2004 meant to enable individual general aviation airport managers to assess their own facility’s vulnerability to terrorist attack and suggest security enhancements. Although these guidelines were issued after we conducted our survey of general aviation airport managers, we found that the majority of airport managers surveyed stated that they would use a security review/vulnerability assessment tool if it were provided.
To produce these security guidelines, TSA partnered with industry associations participating in the Aviation Security Advisory Committee's Working Group on General Aviation Airports Security. The guidelines include an airport characteristic measurement tool that allows airport operators to assess the level of risk associated with their airport to determine which security enhancements are most appropriate for their facility. The guidelines also contain security guidance based on industry best practices. TSA officials emphasized that, because security at general aviation airports is not currently regulated by TSA, the security enhancements suggested by the guidelines are voluntary and are to be implemented at the discretion of the airport manager.

While TSA's and general aviation airport managers' assessments at specific general aviation airports have been limited, TSA has identified a number of factors that could make general aviation aircraft and airports vulnerable to exploitation by terrorists. To address challenges in assessing threats and vulnerabilities across all modes of transportation—including general aviation—and to focus scarce resources, TSA plans to implement a risk management approach based on assessments of criticality, threat, and vulnerability (a simplified illustration appears at the end of this discussion). TSA's risk management approach, as it relates to general aviation security, is summarized below:

- TSA plans to use a criticality tool to provide the basis for prioritizing which transportation assets and facilities require additional or special protection. On the basis of a criticality assessment, TSA intends to provide greater security scrutiny to general aviation airports that require special protection.
- TSA plans to apply threat scenarios of how terrorists might conduct attacks in specific situations in airport environments to assess threats faced by individual general aviation airports.
- TSA is developing an online self-assessment tool intended to help general aviation airport managers develop a comprehensive security baseline for their facility.
- TSA is developing a Transportation Risk Assessment and Vulnerability Evaluation tool for conducting on-site assessments of general aviation airports that are deemed to be nationally critical.
- TSA intends to compile baseline data on security vulnerabilities from these tools and use the data to conduct a systematic analysis of security vulnerabilities at general aviation airports nationwide.

TSA officials stated that such an analysis will allow the agency to establish the need, if any, for minimum security standards; determine the adequacy of current security regulations; and help the agency and airports better direct limited resources. They noted that because airports will not be required to use the tool, the usefulness of the data gathered will be dependent on the number of airports voluntarily submitting assessment results to TSA. Despite these plans, however, TSA has not developed an implementation plan with specific milestones for conducting its risk management efforts. These efforts have been under development for over a year and were originally scheduled to have been completed between June and August of 2004. Without a plan that establishes specific time frames for implementation of the tools and assessments, it will be difficult for TSA to monitor the progress of its efforts and hold responsible officials accountable for achieving desired results.
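To make the relationship among the three types of assessments concrete, the following is a minimal, hypothetical sketch of how criticality, threat, and vulnerability ratings might be combined into a single score used to prioritize airports for on-site review. It is not TSA's actual tool or methodology: the 1-to-5 scales, the multiplicative scoring rule, and the threshold below are assumptions made solely for illustration.

```python
# Hypothetical sketch of a risk management prioritization step.
# The 1-5 rating scales, multiplicative scoring rule, and threshold are
# illustrative assumptions, not TSA's actual methodology.

def risk_score(criticality: int, threat: int, vulnerability: int) -> int:
    """Combine three 1-5 ratings into a single score between 1 and 125."""
    for rating in (criticality, threat, vulnerability):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be on a 1-5 scale")
    return criticality * threat * vulnerability

def candidate_for_onsite_assessment(score: int, threshold: int = 60) -> bool:
    """Flag airports whose combined score meets an assumed threshold."""
    return score >= threshold

# Example: a hypothetical airport rated highly critical (5), with a moderate
# threat rating (3) and significant self-assessed vulnerabilities (4).
score = risk_score(criticality=5, threat=3, vulnerability=4)
print(score, candidate_for_onsite_assessment(score))  # 60 True
```

Under this sketch, the online self-assessment tool described above would supply the vulnerability rating, while TSA's criticality and threat assessments would supply the other two inputs; the agency's actual approach may weight or combine these factors differently.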
Similarly, without a plan that includes estimates of the resources needed to effectively implement the agency's risk management approach, TSA's ability to allocate its resources to areas of greatest need could be impaired. A plan could also address alternative approaches that could be implemented if the extent of voluntary participation by general aviation airport managers does not provide the data needed to establish the desired security baseline of vulnerabilities.

TSA faces challenges in ensuring that threat information is effectively communicated to the general aviation community due to the generality of available intelligence information and the lack of a current, reliable, and complete list of airport contacts. In addition, intelligence information may be classified or sensitive, thus limiting with whom it can be shared. TSA partners with industry associations that are part of a General Aviation Coalition as a primary means of communicating threat information and developing security guidelines for general aviation airport managers. Specifically, rather than notifying general aviation airport operators directly, TSA communicates threat advisories to these industry associations, which in turn are to provide them to their members. A majority of general aviation airport managers we surveyed reported that they had at least some contact with nonfederal entities such as state aviation officials or industry associations such as the American Association of Airport Executives or the National Business Aviation Association. Additionally, a majority indicated that they had established procedures for disseminating security-related information to airport employees and tenants. TSA issued threat advisories for dissemination by general aviation associations to general aviation airports. However, industry association representatives and state aviation officials we spoke with stated that these security advisories were general in nature and were not consistently received. An example of one of TSA's threat advisories is shown in figure 5 below.

Three key principles of effective risk communication are that information be timely, specific, and actionable. However, TSA faces inherent challenges in applying risk communication principles because of (1) the generality of intelligence information received from the intelligence community, (2) a limited capability to identify appropriate officials and airports to receive threat information, and (3) potential restrictions placed on communicating classified or sensitive security information to general aviation stakeholders. Providing threat information to the public or those with a need to know in accordance with these principles is challenging and extends beyond threat communications related to general aviation. The first challenge TSA, along with other federal agencies, faces in applying risk communication principles is the generality of intelligence information and the difficulties the government faces in developing such information. According to TSA, gathering specific threat information is difficult because the threat posed by a particular person or group varies over time with changes in the terrorist organization's structure, objectives, methodologies, and capabilities. Targets also change depending on the security of the target in question; likelihood of success; mission complexity; and potential psychological, emotional, and financial impact of the attack.
These variations in groups and targets make it difficult to predict how and when a terrorist event could occur. Nonetheless, we have reported that public warning systems should, to the extent possible, include specific, consistent, accurate, and clear information on the threat at hand, including the nature of the threat, location, and threat time frames, along with guidance on actions to be taken in response to the threat. According to risk communication principles, without adequate threat information, the public may ignore the threat or engage in inappropriate actions, some of which may compromise rather than promote the public's safety.

A second challenge faced by TSA in communicating threat information to general aviation airports is the lack of current, reliable, and complete information about whom to contact to facilitate communication. General aviation airport operators are widely spread among a diverse range of airports that have historically been subject to little or no federal regulation or contact. As a result, contact information for the owners or operators of individual airports may not be complete, current, or readily available. Neither FAA nor TSA maintains a current database with contact information for all general aviation airports. Thus, identifying who should receive threat information at the nearly 19,000 airports poses a significant challenge. While general aviation industry associations typically maintain contact information on their members, association officials stated that when they need contact information on general aviation airports they generally use data from FAA.

A third challenge TSA faces in providing classified threat information to general aviation airport operators is determining which airport officials have a need and clearance to receive classified or sensitive intelligence information. In general, the more detailed and specific the threat information, the more likely the information is classified and, therefore, not available to those without appropriate security clearances. TSA officials said they had sanitized threat information in order to issue the five security advisories to general aviation industry associations in an unclassified format. TSA officials said they had also granted security clearances to individuals at certain industry associations who were willing to undergo the required background check process. However, although TSA has developed the ability to communicate classified threat information to some general aviation industry representatives, the agency still faces limitations on its ability to ensure that airport operators with a need to know have access to classified threat information and have the appropriate clearances.

According to TSA officials, the agency's approach to risk management should improve its ability to communicate threat information to the general aviation community by addressing the three challenges mentioned above. Specifically, once TSA completes threat and criticality assessments and—in coordination with general aviation airport managers—vulnerability assessments, the agency will have a greater sense of the threats that individual general aviation airport managers should be aware of and therefore be able to communicate more useful and specific threat information. Conducting vulnerability and criticality assessments should also help TSA identify airports for which current and reliable contact information is needed, and identify airport officials with a need to know classified threat information.
TSA and FAA have taken steps to address security risks associated with general aviation through regulation, guidance, and funding. However, in response to the September 11 attacks, TSA has primarily focused on strengthening the security of commercial aviation and meeting associated congressional mandates. As a result, TSA has dedicated fewer resources to strengthening general aviation security, and both TSA and FAA continue to face challenges in their efforts to further enhance security. For example, TSA has developed a regulation governing background checks of foreign candidates for flight training at U.S. flight schools and issued security guidelines for general aviation airports. However, TSA has not yet developed a schedule for conducting inspections or determined the resources needed for monitoring compliance with new regulations. In addition, should TSA establish security requirements for general aviation airports, it may be difficult for airport operators to finance security enhancements independently, and federal funding will also be a challenge, since general aviation airports' needs must compete with the needs of commercial airports for security funding. FAA, in coordination with TSA and other federal agencies, has implemented airspace restrictions over certain landmarks and events, among other things, to guard against potential terrorist threats. FAA officials said that they intermittently reviewed the continuing need for flight restrictions limiting access to airspace for indefinite periods of time—those established at the request of the Department of Defense and for the defense of the national capital region. However, they had not established written procedures or criteria for revalidating the need for restrictions to ensure such reviews were consistently conducted. In addition, we found limitations in the process used by TSA to review and make recommendations regarding waivers to allow general aviation pilots to fly through security-related flight restrictions.

Recognizing the threat posed by larger aircraft, whether carrying passengers or cargo, the Department of Justice, in February 2003, issued a requirement that all non-U.S. citizens seeking flight training in aircraft weighing 12,500 pounds or more must undergo a comprehensive background check. Both TSA and FAA subsequently issued regulations intended to limit access to aircraft for certain segments of the general aviation community by increasing requirements for background checks of pilots. As table 2 shows, TSA and FAA promulgated new regulations governing the screening and validation of pilot and student pilot identities. Prior to September 11, FAA did not require background checks of anyone seeking a pilot license, also referred to as a pilot certificate. In November 2001, the Aviation and Transportation Security Act required that foreign student pilots seeking training in aircraft weighing 12,500 pounds or more undergo a background check by the Department of Justice. Under regulations issued by the Department of Justice, flight training providers are responsible for ensuring that aliens applying for flight training in aircraft weighing 12,500 pounds or more fill out and submit a Department of Justice Flight Training Candidate Checks Program form and are fingerprinted. The Foreign Terrorist Tracking Task Force is to perform a criminal history background check of the foreign candidate and notify the flight training provider whether or not the foreign candidate is cleared to receive flight training.
According to officials from the Foreign Terrorist Tracking Task Force, a number of foreign student pilot candidates were denied enrollment in flight training programs between March 17, 2003, and August 18, 2004. FAA officials said that in February 2002 they took additional steps to make sure that foreign student pilots who already had student pilot certificates when the new requirements went into effect were checked. In December 2003, the Vision 100—Century of Aviation Reauthorization Act (Vision 100) transferred responsibility for conducting background checks from the Department of Justice to TSA and expanded the background check requirement to include all foreign student pilots, regardless of the size of the aircraft in which they train. TSA has developed a regulation implementing the mandates of Vision 100 and, at the time of our review, planned to publish the final regulation and assume the background check responsibilities from the Department of Justice by September 30, 2004. According to TSA officials, TSA's Alien Flight Student program will be similar to the Department of Justice's Flight Training Candidate Checks Program. A key challenge for TSA in fulfilling its responsibility to enforce security-related regulations will be monitoring the compliance of flight training programs in the United States and Puerto Rico with this new requirement. We found limitations in the monitoring of these flight training programs.

In addition to the Department of Justice regulations governing foreign student pilots, FAA, in July 2002, implemented changes to the process of issuing a U.S. pilot certificate to foreign nationals already holding a pilot certificate from a foreign country. Historically, FAA issued pilot certificates to pilots who held licenses issued by nations that are members of the International Civil Aviation Organization based on their foreign license. Members of the organization, including the United States and 187 other nations (including nations known to sponsor terrorism), agreed to issue private pilot certificates to those holding pilot licenses from other organization member nations without requiring them to undergo skills testing.

Because of the destructive potential of larger aircraft, the Aviation and Transportation Security Act directed TSA to promulgate new rules governing security requirements for certain public and private charter operations. Generally, the "twelve-five rule" requires nonscheduled or on-demand charter services (for passengers or cargo) using aircraft weighing 12,500 pounds or more to implement a specific program of security procedures similar to those required of scheduled commercial airlines and public charters. Similarly, the "private charter rule" requires private charter services using aircraft weighing 100,309.3 pounds (45,500 kilograms) or more, or that have 61 or more passenger seats, to implement many of the same security procedures required of the major airlines. (The weight and seat thresholds of the two rules are compared in the simplified sketch below.) However, we found that TSA faces challenges in monitoring compliance with these new security regulations. Figure 6 shows that selected existing security requirements have been expanded from commercial air carriers to public and private charter aircraft.
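To make the distinction between the two charter rules easier to see, the following is a simplified sketch that encodes only the weight and seat thresholds described above. It is an illustration, not the regulatory text: the actual rules contain additional conditions and definitions, and the function and parameter names are ours.

```python
# Simplified sketch of the charter security rule thresholds described in
# this report. The actual regulations contain conditions not modeled here.

def charter_rules_meeting_thresholds(max_takeoff_weight_lbs: float,
                                      passenger_seats: int,
                                      is_private_charter: bool) -> list[str]:
    """Return the charter rules whose stated thresholds a charter operation meets."""
    rules = []
    # "Twelve-five rule": nonscheduled or on-demand charter services
    # (passengers or cargo) using aircraft weighing 12,500 pounds or more.
    if max_takeoff_weight_lbs >= 12_500:
        rules.append("twelve-five rule")
    # "Private charter rule": private charters using aircraft weighing
    # 100,309.3 pounds (45,500 kg) or more, or with 61 or more seats.
    if is_private_charter and (max_takeoff_weight_lbs >= 100_309.3
                               or passenger_seats >= 61):
        rules.append("private charter rule")
    return rules

# Example: a hypothetical on-demand cargo charter flying a 15,000-pound
# aircraft would meet the twelve-five threshold but not the private
# charter thresholds.
print(charter_rules_meeting_thresholds(15_000, 0, is_private_charter=False))
```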
Since September 11, 2001, FAA has issued temporary flight restrictions (TFRs) for some Department of Defense facilities and for the protection of the national capital region for indefinite periods without a documented process to justify their continuance. FAA imposes TFRs to temporarily restrict aircraft operations within designated areas. Prior to September 11, FAA issued TFRs primarily to safely manage airspace operations during events of limited duration. Since then, however, FAA, in coordination with TSA, the Department of Defense, and the Secret Service, among others, has increasingly used TFRs for the purposes of national security over specific events and critical infrastructure. FAA has authority over the U.S. National Airspace System and is the agency responsible for implementing TFRs via the Notice to Airmen system. For security-related TFRs, FAA generally requests that TSA's Office of Operations Policy evaluate requests received from federal and nonfederal entities—such as the FBI, the Department of the Interior, and state or local government entities—associated with National Special Security Events and selected sporting events. TSA evaluates such requests using security-related criteria. Based on their evaluation of requests for selected security-related TFRs, TSA officials will make recommendations to FAA regarding whether the TFR should be issued. On the basis of this information, FAA will make a determination whether to issue the TFR through the Notice to Airmen system.

According to FAA officials, prior to September 11, 2001, TFRs were rarely issued for security purposes. Since then, however, FAA has issued numerous TFRs for the purpose of national security as a result of increased focus on aviation security. FAA officials stated that Notices to Airmen and other records of TFRs were historically not kept after the restrictions were removed; thus, they were unable to provide accurate information on the number of TFRs issued for national security purposes prior to September 11, 2001. Since that time, however, FAA officials said the agency had issued approximately 220 Notices to Airmen and associated TFRs. The size—that is, the amount of airspace restricted both vertically and laterally—of some TFRs has increased. For example, prior to September 11, TFRs for presidential visits had a radius of 3 nautical miles with a ceiling of 3,000 feet. Since then, presidential TFRs have had a radius of 30 nautical miles, with a ceiling of 18,000 feet (see the rough comparison below). The rationale for increasing the size of presidential TFRs, according to FAA, was based on the difficulty the military might have in preventing an airborne attack on the President once an aircraft was within the 3-nautical-mile zone. Figure 7 illustrates the area now covered by a presidential TFR over the Crawford Ranch in Texas when the President is in residence.

In the case of the national capital region and selected military installations, TFRs implemented for national security reasons have been put in place and subsequently extended for indefinite periods of time. For example, temporary flight restrictions in and around the national capital region were established shortly after September 11, and, according to FAA officials, no set date has been established for their removal. These restrictions in and around Washington, D.C., are the flight-restricted zone and the Washington, D.C. Metropolitan Air Defense Identification Zone, as shown in figure 8. In addition, FAA issued 21 TFRs around various military facilities throughout the country because of security concerns at these facilities after the terrorist attacks of September 11.
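To put the growth in presidential TFRs in perspective, the following back-of-the-envelope calculation, which is ours and treats the restriction as a simple cylinder with no cut-outs, uses the radii and ceilings cited above.

```python
import math

# Ground footprint of a circular TFR, using the presidential TFR radii
# cited above (3 nautical miles before September 11, 30 afterward).
old_area = math.pi * 3 ** 2    # about 28 square nautical miles
new_area = math.pi * 30 ** 2   # about 2,827 square nautical miles
print(f"ground footprint grew {new_area / old_area:.0f}-fold")  # 100-fold

# Treating the TFR as a cylinder, the ceiling also rose from 3,000 to
# 18,000 feet, so the restricted volume grew roughly 100 x 6 = 600-fold.
print(f"restricted volume grew roughly {100 * 18_000 // 3_000}-fold")
```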
While 8 of these 21 TFRs over military installations have since been canceled, 13 were still in effect as of July 27, 2004, with no scheduled date for removal or documented analysis to justify their continued need. According to FAA officials, the agency plans to convert 11 of these areas to national security areas. Once FAA publishes revised aeronautical charts reflecting the new, permanent advisories recommending that pilots avoid the airspace, FAA officials said they plan to cancel the TFRs. In January 2004, FAA issued proposals for converting the remaining two TFRs to permanently prohibited airspace (where no flights are permitted). At the time of our review, FAA was still reviewing comments on the proposal to permanently restrict the surrounding airspace. Figure 9 shows the status of security-related TFRs FAA established over military installations since September 11.

TSA, FAA, and general aviation industry stakeholders we spoke with stated that the negative effects of TFRs fall primarily on general aviation operators and airports. According to aviation industry representatives we contacted and FAA, the increase since September 11 in the number, size, and duration of TFRs and, at times, the limited notice given prior to their establishment have resulted in numerous inadvertent violations of restricted airspace. For example, the Washington, D.C. Air Defense Identification Zone has been violated over 1,000 times, constituting over 40 percent of all TFR violations since September 11, 2001. As figure 10 shows, since September 2001, the number of violations of all TFRs has increased dramatically. General aviation has accounted for most TFR violations committed within U.S. airspace. Further, about 95 percent of all TFR violations occurred in airspace secured for either presidential security or other national security purposes. Although no TFR violations have been shown to be terrorist related, violators are subject to disciplinary action. According to FAA officials, violations of a TFR typically result in a suspension of the pilot's certificate ranging anywhere from 15 to 90 days. They said that the most common reason for TFR violations is pilots not reading the Notices to Airmen for the flight area, a required preflight procedure. Other reasons for violations included weather problems, mechanical failures, and pilot in-flight disorientation (i.e., getting lost). FAA officials stated that the number and severity of disciplinary actions imposed on pilots violating TFRs have increased since September 11. However, FAA officials were unable to provide statistical information on the number and severity of disciplinary actions for pilots violating TFRs before or since September 11.

The imposition of TFRs can also have an economic impact on general aviation operations. TSA, FAA, and industry associations we spoke with stated that the costs associated with restricting airspace can be significant. The National Business Aviation Association commissioned a study to estimate the economic impact TFRs have had on general aviation since September 11. While we did not independently assess the validity of the association's assumptions or calculations, the study estimated that general aviation passengers and firms lost over $1 billion because of increased costs to passengers and lost revenues and additional operating costs for general aviation firms. We visited St. Mary's Airport in Brunswick, Georgia, to discuss the economic impact of TFRs with an affected general aviation airport operator.
Mary’s is located approximately 3 miles south of the Kings Bay Naval Base, where FAA issued a security-related TFR shortly after September 11. The airport operator stated that the loss of much of the general aviation traffic through his airport resulting from the TFR had significantly reduced his ability to generate revenue to sustain operations. According to the operator, the airport’s proximity to the TFR around the base significantly deters pilots from using the airport. Other airport operators we visited that were affected by TFRs also cited their negative economic impacts. A sign warning pilots to avoid restricted airspace near the St. Mary’s Airport is pictured in figure 11. Although TFRs may have economic and other negative impacts on the general aviation industry, FAA did not establish a systematic process for periodically reviewing the continuing need for TFRs over the national capital region and the 13 TFRs over military installation, or determine the long-term economic or other impacts on general aviation operations of these restrictions. While FAA officials said they frequently reviewed TFRs on an informal basis, they did not conduct routine assessments of the continuing need for indefinite TFRs based on a consistent, documented set of criteria or determine the impact of these restrictions on general aviation. In June 2004, FAA officials, in reporting to Congress on the Air Defense Identification Zone, did not cite specific criteria or the process used to determine the continuing need for the restrictions. Instead, FAA based its report primarily on unspecified security reasons submitted by TSA. TSA officials cited the continuing threat posed to the national capital region by organizations such as al Qaeda. While the air defense identification zone around the national capital region is unique, it is possible that future circumstances may warrant the issuance of other temporary flight restrictions of indefinite duration. Without documented procedures and criteria, FAA cannot ensure that future reviews of flight restrictions issued for indefinite periods are properly conducted, or consistently ensure that restrictions on airspace are still needed. We also found that TSA and FAA were limited in their ability to mitigate the threat of airborne attack. This is a result of limitations in airspace restrictions, and the practice of granting pilots waivers to enter temporarily restricted airspace. Enhancing general aviation security is difficult because of funding challenges faced by the federal government and general aviation airport operators. General aviation airports have received some federal funding for implementing security upgrades since September 11, but have funded most security enhancements on their own. General aviation stakeholders we contacted expressed concern that they may not be able to pay for any future security requirements that TSA may establish. In addition, TSA and FAA are unlikely to be able to allocate significant levels of funding for general aviation security enhancements, given competing priorities of commercial aviation and other modes of transportation. About 3,000 general aviation airports are eligible to receive FAA Airport Improvement Program grants. General aviation airports can use Airport Improvement Program grant funds for projects that provide safety and security benefits. 
For example, 6 of the 31 airport managers we interviewed, including the manager of one of the largest general aviation airports in the country, said they used Airport Improvement Program grants to pay for some of their security enhancements after September 11, 2001. In fiscal year 2002, general aviation airports received $561 million in Airport Improvement Program grants, of which $3.2 million (or about 0.6 percent) was awarded for security projects, and in fiscal year 2003, $680 million, of which $1.3 million (or about 0.2 percent) was awarded for security projects. Because general aviation airports are generally not subject to any federal regulations for security, in order to meet eligibility requirements for their grants, general aviation airport projects are generally limited to those that are related to safety but also have security benefits, such as lighting and fencing, as well as the acquisition and use of cameras, additional lighting, and motion sensors. FAA officials stated that if new security requirements were established for general aviation airports, security-related enhancement projects related to these requirements would be eligible and receive priority for Airport Improvement Program funding. However, given the competing demands of commercial airports, the large number of general aviation airports eligible for such funding, and the limitations of the Airport Improvement Program, funding for general aviation airport operators to meet any new security-related requirements could be uncertain.

The Office for Domestic Preparedness within the Department of Homeland Security administers two grant programs that could benefit general aviation airports—the State Homeland Security Grant Program and the Urban Areas Security Initiative. Under these programs, states may purchase equipment to protect critical infrastructure, including equipment for general aviation airports, if the state declares general aviation airports critical infrastructure. During the course of our review, we learned of one state that plans to spend a small amount of its Department of Homeland Security grant funds to improve the security of general aviation airports. According to officials in Wisconsin, the state plans to use at least $1.5 million of its $41 million Homeland Security Grant in 2004 to enhance security at general aviation airports located along the Great Lakes.

Vision 100 also authorized the Department of Homeland Security to establish a $250 million Aviation Security Capital Fund administered by TSA to alleviate some of the demand on the Airport Improvement Program for security enhancement grants. Of this amount, $125 million is discretionary, with priority given to the installation of baggage-screening equipment at commercial airports, while the balance is allocated by formula based on airport size and other security considerations. TSA officials noted that Congress did not provide an appropriation for fiscal year 2004 for the fund. If Congress decides to make appropriations in the future for these purposes, general aviation airports will still have to compete with commercial airports for this discretionary funding. Given the extent of unmet security funding needs at commercial airports, it seems unlikely that a significant proportion of funding would be available for general aviation. For example, estimates for installing explosive detection system machinery in commercial airport baggage systems range from $3 billion to $5 billion.
At the time of our review, $1.2 billion had been appropriated for this effort, and according to the House Committee on Appropriations, airports will be funded, at best, for about half of their installation needs. Even if funds were available, TSA would face a challenge in establishing and prioritizing security projects eligible for Aviation Security Capital Fund grants across a wide spectrum of general aviation airports with diverse characteristics. Although funding is limited for airport improvement, some airport managers we spoke with said they had expended thousands or hundreds of thousands of dollars for security in order to attract more tenants to their facility or to retain their existing tenants.

Nonfederal stakeholders with an interest in general aviation security—including industry associations, state governments, general aviation airport operators (owners and managers), and users of general aviation airports and aircraft—have taken steps to strengthen the security of general aviation airports and operations. Industry associations have developed and provided recommendations on best practices for enhancing security around general aviation airports, have partnered with the federal government to develop federally endorsed security guidelines, and have sponsored and provided training for their own voluntary security programs. Some states also have suggested best practices, established regulations, and provided funding to general aviation airports to reduce security vulnerabilities. General aviation airport operators and tenants, such as air charter services, have also implemented policy and procedural measures to restrict access to airport property and aircraft. Many airports we visited and surveyed had installed physical security enhancements, such as fencing, lighting, surveillance cameras, and electronic access control gates, and had hired additional security guards. General aviation aircraft owners have also taken steps to protect their aircraft from misuse.

Many of the general aviation industry associations we contacted had developed guidance to help enhance the security of general aviation operations and airports. For example, the following are some of the recommendations or best practices for strengthening security at general aviation airports made by members of the Aviation Security Advisory Committee's Working Group on General Aviation Airports Security:

- Posting signs at general aviation airports warning against unauthorized use of aircraft.
- Securing aircraft when unattended using existing mechanisms such as door locks, keyed ignitions, and locked hangars to protect aircraft from unauthorized use or tampering.
- Controlling vehicle access to areas where aircraft operate by using signs, fences, or gates.
- Installing effective outdoor lighting to help improve the security of aircraft parking, hangar, and fuel storage areas, as well as airport access points.
- Allowing local law enforcement operational space at the airport to provide a security presence that serves as a natural deterrent to terrorism.

Several general aviation industry associations, in partnership with TSA, have also initiated their own voluntary security programs to address the security of general aviation operations and airports. For example, the Aircraft Owners and Pilots Association, working with TSA, established and operates the Airport Watch program. The program was formed in March 2002—similar in concept to a neighborhood watch program—to improve general aviation airport community awareness.
Through the program, the association provides warning signs for airports, informational literature, and training videotapes to educate pilots and airport employees on how the security of their airports and aircraft can be enhanced. TSA operates a toll-free hotline (866-GA-SECURE) where airport operators, managers, and pilots can report suspicious activity to TSA. In May 2004 the hotline began receiving calls regarding a variety of airport users’ concerns of suspicious activities or individuals in and around general aviation airports. Figure 12 shows an example of the posters identifying the hotline TSA provides to general aviation airports. The National Business Aviation Association developed a set of security procedures that corporate aircraft operators can put into place to increase the security of their operations. In January 2003, the association, in partnership with TSA, initiated a pilot project, called the TSA Access Certificate program, at Teterboro Airport in New Jersey for operators who had established these procedures in a security program and had their security program reviewed and approved by TSA. TSA approval allows operators to operate internationally without the need of a waiver each time they enter the country. (In August 2003, TSA expanded the program to include corporate aircraft operators based at Morristown, New Jersey, and White Plains, New York.) According to association officials, the concept of a TSA-approved security program could be applied to other types of general aviation operations. Officials also stated that one operator of a single general aviation aircraft applied for and received a TSA access certificate to operate internationally. The National Agricultural Aircraft Association created a program to educate aerial application pilots on safety and security issues (the Professional Aerial Applicators Support System). According to association officials, the training program qualifies operators in most states to meet continuing education requirements needed to maintain state agricultural aviation licenses. In addition to providing security guidance and developing security programs, 10 general aviation industry associations worked together to make security recommendations to TSA to help prevent the unauthorized use of general aviation aircraft in a terrorist attack. The group met throughout the summer of 2003 to review and discuss numerous general aviation airport security recommendations and evaluated each recommendation for its appropriateness and effect on enhancing security at general aviation airports. On the basis of this review, the group issued a report to TSA on suggested security guidelines. We visited 10 states and found that their efforts to enhance general aviation security reflected a range of activities. Some states had implemented new requirements for security, funded security enhancements, or provided guidance on best practices. Specifically, 2 of the 10 states we visited had imposed requirements for general aviation airports and aircraft owners and operators since September 11, 2001. In July 2002, the Massachusetts Aeronautics Commission issued a requirement that all airport employees—including general aviation airport employees—wear special photo identification badges. According to state officials, the badges enable airport personnel to distinguish between those who are, and are not, authorized to be on airport property. 
In March 2003, the Governor of New Jersey issued an executive order that directed aircraft owners and operators who use the state's 486 licensed general aviation facilities to take steps to limit access to aircraft. Called the "two-lock rule," the executive order requires that all aircraft parked or stored at a general aviation facility in New Jersey for more than 24 hours be protected by a minimum of two locks that secure or disable the aircraft to prevent illegal or unlawful operations.

Four of the 10 states we contacted provided funding for security enhancements at general aviation airports. This funding, however, was generally limited to matching funds for federal grants used to install measures that had both a safety and a security benefit, such as airport perimeter fencing and lighting projects. Some states had grant programs that could be used strictly for security enhancements:

- For fiscal years 2002 through 2004, Georgia's Department of Transportation Aviation Programs provided a total of $1,174,000 in grants to general aviation airports for fencing, lighting, and electronic card-reader gates.
- In February 2002, Tennessee's Aeronautics Commission issued a policy that the state would provide 90 percent of the cost (not to exceed a total of $50 million annually) of security-related projects at general aviation airports. Eligible projects include security fencing and gates, signage, security lighting and motion sensors, and surveillance cameras and monitors.
- In 2003, the State of Washington established a $2 million annual matching grant program for general aviation airport security enhancements funded by proceeds from the state's aviation fuel tax.
- In 2004, Virginia appropriated $1.5 million to the state's Department of Aviation specifically for security upgrades at general aviation airports.
- California's Aviation Division established a grant program for research and development projects that could fund security enhancements at general aviation airports. However, the Aviation Division's budget has not been sufficient to provide any grants from the program over the past 3 years.

One of the 10 states we contacted provided guidance on security best practices, while 2 others provided guidance on preparing airport-specific security plans and self-assessments of vulnerabilities. In 3 of the 10 states, the incentive for airports to develop security plans is tied to funding eligibility. In March 2003, Virginia's Aviation Department Director issued a set of best practices and later established a voluntary security certification program, encouraging airports to assess their vulnerabilities and develop airport-specific security plans. In May 2002, Tennessee's Aeronautics Division issued guidance on developing an airport emergency and security plan. In April 2003, Washington's Aviation Division issued security guidelines for general aviation airports based on recommendations from a task force of pilots, general aviation associations, airports, law enforcement, and government agencies.

Unlike commercial service airports, general aviation airports are not subject to current federal security regulations, and, therefore, general aviation managers and aircraft owners determine what security measures they will use to protect their assets. To determine security measures undertaken since September 11, we judgmentally selected and visited 31 general aviation airports in 10 states that are open to the public and part of FAA's National Plan of Integrated Airport Systems.
Airport managers we contacted reported spending as little as $10 to provide forgery-proof identification badges for airport employees and as much as $3 million at one airport on voluntary measures including fencing and around-the-clock security guards. In our survey, about a third (36 percent) of managers reported that funds to pay for security improvements had come from airport revenues, while about a fifth reported receiving federal grants (21 percent) and a fifth reported receiving state grants (22 percent) to finance security improvements. According to 18 of the 31 airport managers and 3 of 5 tenants (e.g., fixed base operators) we visited, the security measures and practices they implemented following the September 11 attacks were self-initiated, commonsense measures that the public and their clients expected to help protect property from vandalism or theft. Many of these measures were no-cost or low-cost security enhancements based primarily on procedural changes. For example, for those airports that did not have formal written security plans, airport managers said they generally discussed security issues with their tenants on a regular basis through meetings and e-mails. Other airports that had formal written security plans or procedures updated those security plans and procedures based on recommendations from industry associations. Some of the 31 airport managers we visited said they had arranged for more frequent patrols by local law enforcement officers since September 11, some at no cost to the airports. Many of the airports we visited had implemented an "airport watch" program—similar to neighborhood watch programs—and displayed signs designed and provided by the Aircraft Owners and Pilots Association, as discussed above. Other airports absorbed the cost of installing new signs warning against trespassing. Our survey of airport managers identified an increase in the use of security awareness training since September 11.

For those aircraft owners who do not store their aircraft in a hangar, ways of securing their aircraft from unauthorized use include attaching devices to propellers, known as "prop locks," to prevent them from rotating, and attaching devices that cover throttle levers, known as "throttle locks," to prevent someone from being able to start the aircraft. Figure 13 shows two kinds of prop locks aircraft owners use. According to airport and state aviation officials, prop locks range in cost from about $150 to about $300.

Several of the airport managers we visited had invested in high-cost security measures to minimize access by potential criminals and terrorists to airport property and, thus, tenants' aircraft. Specifically, airport officials we visited had obtained federal or state grant assistance for purchasing additional fencing and lighting or high-tech surveillance cameras. However, several airport managers and tenants considered additional security a cost of conducting business in the post-September 11 environment. Airport officials generally said that they spent between $25,000 and $500,000 on security enhancements such as fencing, lighting, and electronic access gates. While airport officials said they would like to add more security enhancements, they were reluctant to spend much more on enhancing security until TSA issued guidance on what security measures, or combination of security measures, it considered appropriate.
(As noted previously, TSA issued security guidelines with recommended enhancements in May 2004, after the majority of our site visits.) Officials from the National Business Aviation Association said that corporate aviation departments are more likely to take high-cost measures to protect their aircraft. For example, some of the association's large member corporations provided information on the types of security measures they had used before September 11 to protect their aircraft from tampering, theft, or hijacking. According to the association, these included the types of security initiatives shown in table 3.

From its inception, TSA has primarily focused its efforts on enhancing commercial aviation security to prevent aircraft from again being used as weapons. The amount of TSA's resources and the vastness and diversity of the general aviation airport system mean that the bulk of the responsibility for determining vulnerabilities and instituting security enhancements has fallen, and will likely continue to fall, on airport operators. As the 9/11 Commission concluded, homeland security and national preparedness often begin with the private sector. While the federal government can provide guidance and some amount of funding for security enhancements, long-term success in securing general aviation depends on a partnership among the federal government, state governments, and the general aviation industry. Even with such a partnership, enhancing security at general aviation airports presents TSA and the general aviation community with challenges that will not be easily or quickly resolved. For example, TSA's planned risk management approach for general aviation could assist the agency in providing guidance and prioritizing funding for security enhancements by assessing vulnerabilities and threats to better target its efforts. However, without a documented implementation plan for assessing threats and vulnerabilities that sets forth time frames and goals and the resources needed to achieve these goals, there is limited assurance that TSA will focus its resources and efforts on areas of greatest need, monitor the progress of its efforts, and hold responsible officials accountable for achieving desired results. In addition, completing vulnerability and threat assessments in partnership with general aviation airports should help TSA better communicate threat information. However, because TSA must rely on other federal agencies to provide threat information and follow federal requirements governing disclosure of classified information, it is difficult for TSA to adhere to risk communication principles, particularly in providing specific and actionable information. Nevertheless, effective communication of threat information is important because misallocation of limited resources and disruption of operations are possible effects of communicating nonspecific or incorrect threat information.

While TSA and FAA have promulgated regulations to help reduce security risks associated with access to aircraft and airspace, the intended security benefit of these regulations may be limited for a variety of reasons. For example, we found limitations in TSA's process for monitoring flight training providers and operators of private charter aircraft, and in granting waivers to pilots to fly through security-related flight restrictions. In addition, FAA has not documented its process for reviewing and revalidating the need for continuing security-related flight restrictions on airspace that are established for indefinite periods.
Without plans for monitoring compliance or procedures to document agency processes, TSA and FAA cannot ensure that these regulations achieve their intended effect or minimize the negative impacts of the regulations on affected general aviation industry stakeholders.

To better assess the threat of terrorists' misuse of general aviation aircraft and to improve the quality of communicating terrorist threat information to the general aviation community, we recommend that the Secretary of the Department of Homeland Security direct the Assistant Secretary of Homeland Security for the Transportation Security Administration to take the following two actions:

- Develop an implementation plan for executing a risk management approach that will help identify threats and vulnerabilities. Such a plan should include milestones, specific time frames, and estimates of the funding and staffing needed to focus its resources and efforts on identified airports.
- After identifying the most critical threats and vulnerabilities, apply risk communication principles—including, to the extent possible, the nature of the threat, when and where it is likely to occur, over what time period, and guidance on actions to be taken—in developing and transmitting security advisories and threat notifications.

To help ensure that temporary flight restrictions issued for indefinite periods are reviewed and, if appropriate, revalidated and consistently applied, we recommend that the Secretary of Transportation direct the Administrator of the Federal Aviation Administration to establish a documented process to justify the initiation and continuance of flight restrictions for extended periods. In our restricted report, we also made two recommendations to the Secretary of the Department of Homeland Security regarding monitoring compliance with regulations governing the identification of student pilots, their training, and the operation of certain general aviation aircraft; and the process for granting pilots waivers to enter restricted airspace.

We provided draft copies of this report to the Department of Homeland Security, the Department of Transportation, the Transportation Security Administration, and the Federal Aviation Administration for their review and comment. TSA generally concurred with the findings and recommendations in the report and provided formal written comments that are presented in appendix II. TSA provided technical comments that we incorporated as appropriate. FAA also generally concurred with the findings and recommendations in the report and provided technical comments that we incorporated as appropriate.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to the Secretary of the Department of Homeland Security, the Secretary of the Department of Transportation, the Assistant Secretary of Homeland Security for the Transportation Security Administration, and the Administrator of the Federal Aviation Administration, and interested congressional committees. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report or wish to discuss it further, please contact me at (202) 512-8777 or at berrickc@gao.gov, or Chris Keisling, Assistant Director, at (404) 679-1917 or at keislingc@gao.gov. Key contributors to this report are listed in appendix III.
To determine what steps the federal government has taken to identify and assess threats to and vulnerabilities of general aviation, and communicate that information to stakeholders, we interviewed individuals in the Transportation Security Administration’s (TSA) Office of Transportation Security Policy, Office of Operations Policy, and General Aviation Operations and Inspections Office on TSA’s role in enhancing general aviation security. Individuals from these offices provided documentation on TSA’s threat assessment efforts as well as its past vulnerability assessment activities and future vulnerability assessment plans. We examined documentation on TSA’s means of obtaining intelligence information and disseminating that information to general aviation stakeholders. We also interviewed individuals from FAA’s Special Operations Division and Airspace and Rules Division on their roles in securing general aviation. We examined documentation from the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA) on intelligence regarding potential terrorist misuse of general aviation. In addition, we examined documentation from TSA and FBI on the reasons general aviation may be vulnerable to terrorist misuse. We also spoke to staff in and examined documentation from TSA’s Office of Threat Assessment and Risk Management to obtain information on plans to implement a risk management approach to further assess threats and vulnerabilities and to enable the agency to implement risk communication principles to communicate threat information. To determine what steps the federal government has taken to strengthen general aviation security, and what, if any, challenges the government faces in further enhancing security, we obtained and analyzed information from Federal Aviation Administration (FAA), including data on the number of flight restrictions that affect general aviation and the amount of federal funding that has been spent on enhancing general aviation security. We sought to determine the reliability of these data by, among other things, discussing methods of inputting and maintaining data with FAA officials. We spoke to TSA officials about, and examined related documentation on, security guidelines published by TSA, including documentation on TSA’s activities with the Aviation Security Advisory Committee’s Working Group on General Aviation Airports Security. We interviewed general aviation industry representatives, including those who provided input to the TSA-sponsored Aviation Security Advisory Committee’s Working Group on General Aviation Airports Security, to obtain their views on federal efforts to enhance general aviation security. We also interviewed individuals from TSA’s Office of Compliance on the promulgation of regulations as a result of the passage of the Aviation and Transportation Security Act, as well as TSA’s plans for ensuring operator compliance with these regulations. We interviewed personnel from FAA’s Special Operations Division regarding FAA’s issuance of temporary flight restrictions, including the criteria and internal controls FAA uses to examine requests for these restrictions from federal and nonfederal entities. As part of this analysis, we took steps to verify the reliability of data from FAA on the number of violations of temporary flight restrictions. We interviewed FAA and TSA officials on potential limitations of the effectiveness of these flight restrictions. 
We also contacted the Director of the Foreign Terrorist Tracking Task Force on efforts to screen foreign students applying for flight training in the United States. We examined potential sources of funding for additional security measures at general aviation airports, including challenges associated with limited funding. To determine the actions individual general aviation airport managers have taken to enhance security at their airports, we visited 31 general aviation airports in 10 states. We judgmentally selected these 31 airports to observe a cross section of general aviation airports. However, we limited our selection of general aviation airports to the 2,829 listed in FAA’s National Plan of Integrated Airport Systems, because these airports are eligible for FAA funding and are open to use by the general public. The remaining 16,000 general aviation airports are generally privately owned and not open to use by the public, and/or are small landing strips with fewer than 10 based aircraft, and are not eligible for federal funding. To ensure we selected a cross section of general aviation airports listed in the National Plan, we based our selection on: 1. Size, using the number of based aircraft as an indicator—100 or more aircraft we considered large, 25 to 99 medium, and 24 or fewer small. 2. Regional location—northeast, northwest, southeast, and southwest areas of the country. 3. Proximity to potential terrorist targets such as large population centers versus sparse population areas, as well as near to and far from other critical infrastructures and symbolic landmarks. 4. Airport characteristics, including number, length, and type (turf or paved) of runways, and primary types of general aviation operations such as recreational aviation, business and corporate aviation, charter services, and flight training. Because we judgmentally selected these general aviation airports, we cannot draw generalized conclusions based on airport managers’ interview responses. However, the anecdotal information provided is intended to complement the findings of our random survey of 500 general aviation airports. To obtain examples of what some states have done to enhance general aviation security, we judgmentally selected 10 states with efforts to enhance general aviation security ranging from issuing new security requirements to those in the early stages of determining how they would address general aviation security. To select this range of states, we conducted a literature search to determine which states had proposed or enacted new security laws, regulations, or requirements. We also requested recommendations from the National Association of State Aviation Officials and other industry associations such as the Aircraft Owners and Pilots Association, and noted which state aviation directors had participated in the National Association of State Aviation Officials’ Task Group on General Aviation Security. We also considered whether a state participated in FAA’s block grant program in which FAA provides airport improvement program grant money to a state in a lump sum and the state determines which airport projects to fund, rather than each airport applying directly to FAA for grant funds on a project-by-project basis. Finally, on the basis of our resources, we considered those states in which we also planned to visit general aviation airports. Because we did not randomly select the states in which we obtained information, we cannot draw generalized conclusions about all states. 
However, the information obtained from these 10 states serves to provide examples of what some states have done to enhance general aviation security. In addition to those named above, Leo Barbour, Grace Coleman, Chris Ferencik, Kara Finnegan-Irving, Dave Hooper, Stan Kostyla, Thomas Lombardi, Mark Ramage, Robert Rivas, Jerry Seigler, and Richard Swayze were key contributors to this report.
Federal intelligence agencies have reported that in the past, terrorists have considered using general aviation aircraft (all aviation other than commercial and military) for terrorist acts, and that the September 11th terrorists learned to fly at general aviation flight schools. GAO answered the following questions regarding the status of general aviation security: (1) What actions has the federal government taken to identify and assess threats to, and vulnerabilities of, general aviation, and to communicate that information to stakeholders? (2) What steps has the federal government taken to strengthen general aviation security, and what, if any, challenges does the government face? (3) What steps have non-federal stakeholders taken to enhance the security of general aviation? The federal and state governments and general aviation industry all play a role in securing general aviation operations. While the federal government provides guidance, enforces regulatory requirements, and provides some funding, the bulk of the responsibility for assessing and enhancing security falls on airport operators. Although TSA has issued a limited threat assessment of general aviation, and the FBI has reported that terrorists have considered using general aviation to conduct attacks, a systematic assessment of threats has not been conducted. In addition, to assess airport vulnerabilities, TSA plans to issue a self-assessment tool for airport operators' use, but it does not plan to conduct on-site vulnerability assessments at all general aviation airports due to the cost and vastness of the general aviation network. Instead, TSA intends to use a systematic and analytical risk management process, which is considered a best practice, to assess the threats and vulnerabilities of general aviation. However, TSA has not yet developed an implementation plan for its risk management efforts. TSA and the Federal Aviation Administration (FAA) have taken steps to address security risks to general aviation through regulation and guidance, but still face challenges in their efforts to further enhance security. For example, TSA has promulgated regulations requiring background checks of foreign candidates for U.S. flight training schools and has issued security guidelines for general aviation airports. However, we found limitations in the process used to conduct compliance inspections of flight training programs. In addition, FAA, in coordination with TSA and other federal agencies, has implemented airspace restrictions over certain landmarks and special events. However, FAA has not established written policies or procedures for reviewing and revalidating the need for flight restrictions that limit access to airspace for indefinite periods of time and could negatively affect the general aviation industry. Non-federal general aviation stakeholders have partnered with the federal government and have individually taken steps to enhance general aviation security. For example, industry associations developed best practices and recommendations for securing general aviation, and have partnered with TSA to develop security initiatives such as the Airport Watch Program, similar to a neighborhood watch program. Some state governments have also provided funding for enhancing security at general aviation airports, and many airport operators GAO surveyed took steps to enhance security such as installing fencing and increasing police patrols.
Before originating a residential mortgage loan, a lender assesses its risk through the underwriting process, in which the lender generally examines the borrower’s credit history and capacity to repay the mortgage and obtains a valuation of the property that will be the loan’s collateral. Lenders need to know the property’s market value, or the probable price that the property should bring in a competitive and open market, in order to provide information for assessing their potential loss exposure if the borrower defaults. Real estate can be valued using a number of methods, including appraisals, broker price opinions (BPO), and automated valuation models (AVM). Appraisals are opinions of value based on market research and analysis as of a specific date. Appraisals are performed by state-licensed or -certified appraisers who are required to follow the Uniform Standards of Professional Appraisal Practice (USPAP). A BPO is an estimate of the probable selling price of a particular property prepared by a real estate broker, agent, or salesperson rather than by an appraiser. An AVM is a computerized model that estimates property values using public record data, such as tax records and information kept by county recorders, multiple listing services, and other real estate records. In 1986, the House Committee on Government Operations issued a report concluding that problematic appraisals were an important contributor to the losses that the federal government suffered during the savings and loan crisis. The report stated that hundreds of savings and loans chartered or insured by the federal government were severely weakened or declared insolvent because faulty and fraudulent real estate appraisals provided documentation for loans larger than what the collateral’s real value justified. In response, Congress incorporated provisions in Title XI of FIRREA that were intended to ensure that appraisals performed for federally related transactions were done (1) in writing, in accordance with uniform professional standards, and (2) by individuals whose competency had been demonstrated and whose professional conduct was subject to effective supervision. Various private, state, and federal entities have roles in the Title XI regulatory structure:
The Appraisal Foundation. The Appraisal Foundation is a private not-for-profit corporation composed of groups from the real estate industry that works to foster professionalism in appraising. The foundation sponsors two independent boards with responsibilities under Title XI. The first of these, the Appraisal Standards Board, sets rules for developing an appraisal and reporting its results through USPAP. The second board, the Appraiser Qualifications Board, establishes the minimum qualification criteria for state certification and licensing of real property appraisers. The foundation is funded in part by sales of publications but also receives an annual grant from the Appraisal Subcommittee (ASC).
Federal banking regulators. The federal banking regulators are responsible for overseeing the appraisal and evaluation practices of the institutions they regulate, including assessing the completeness, adequacy, and appropriateness of these institutions’ appraisal and evaluation policies and procedures. (Evaluations are estimates of market value that do not have to be performed by a state-licensed or -certified appraiser. The federal banking regulators permit evaluations to be performed, consistent with safe and sound lending practices, in certain circumstances, such as mortgage transactions of $250,000 or less that are conducted by regulated institutions.)
Appraisal Subcommittee. ASC has responsibility for monitoring the implementation of Title XI by the private, state, and federal entities noted previously.
Among other things, ASC is responsible for (1) monitoring and reviewing the practices, procedures, activities, and organizational structure of the Appraisal Foundation—including making grants to the Foundation in amounts that it deems appropriate to help defray costs associated with its Title XI activities; (2) monitoring the requirements that states and their appraiser regulatory agencies establish for the certification and licensing of appraisers; (3) monitoring the requirements established by the federal banking regulators regarding appraisal standards for federally related transactions and determinations of which federally related transactions will require the services of state-licensed or -certified appraisers; and (4) maintaining a national registry of state-licensed and -certified appraisers who can perform appraisals for federally related transactions. Among other responsibilities and authorities, the Dodd-Frank Act requires ASC to implement a national appraisal complaint hotline and provides ASC with limited rulemaking authority. To carry out these tasks, ASC has 7 board member positions and 10 staff headed by an Executive Director hired by the board. Five of the board members are designated by the federal agencies that are part of FFIEC—the Bureau of Consumer Financial Protection (also known as the Consumer Financial Protection Bureau or CFPB), FDIC, the Federal Reserve, NCUA, and OCC. The other two board members are designated by the U.S. Department of Housing and Urban Development (HUD)—which includes the Federal Housing Administration (FHA)—and FHFA. ASC is funded by appraiser registration fees that totaled $2.6 million in fiscal year 2011. Available data and interviews with lenders and other mortgage industry participants indicate that appraisals are the most frequently used valuation method for home purchase and refinance mortgage originations. Appraisals provide an opinion of market value at a point in time and reflect prevailing economic and housing market conditions. Data provided to us by the five largest lenders (measured by dollar volume of mortgage originations in 2010) show that, for the first-lien residential mortgages for which data were available, these lenders obtained appraisals for about 90 percent of the mortgages they made in 2009 and 2010, including 98 percent of home purchase mortgages. The data we obtained from lenders included mortgages sold to the enterprises and mortgages insured by FHA, which together accounted for the bulk of the mortgages originated in 2009 and 2010. The enterprises and FHA require appraisals to be performed for a large majority of the mortgages they purchase or insure. For mortgages for which an appraisal was not done, the lenders we spoke with reported that they generally relied on validation of the sales price (or loan amounts in the case of refinances) against an AVM-generated value, in accordance with enterprise policies that permit this practice for some mortgages that have characteristics associated with a lower default risk. The enterprises, FHA, and lenders require and obtain appraisals for most mortgages because mortgage industry participants consider appraising to be the most credible and reliable valuation method, for a number of reasons. Most notably, appraisals and appraisers are subject to specific requirements and standards. In particular, USPAP outlines the steps appraisers must take in developing appraisals and the information appraisal reports must contain. 
It also requires that appraisers follow standards for ethical conduct and have the competence needed for a particular assignment. Furthermore, state licensing and certification requirements for appraisers include minimum education and experience criteria, and standardized report forms provide a way to report relevant appraisal information in a consistent format. In contrast, other valuation methods such as BPOs and AVMs are not permitted for most purchase and refinance mortgage originations. The enterprises do not permit lenders to use BPOs for mortgage originations and permit lenders to use AVMs for only a modest percentage of mortgages they purchase. Additionally, the federal banking regulators’ guidelines state that BPOs and AVMs cannot be used as the primary basis for determining property values for mortgages originated by regulated institutions. However, the enterprises and lenders use BPOs and AVMs in a number of circumstances other than purchase and refinance mortgage originations because these methods can provide a quicker, less expensive means of valuing properties in active markets. When performing appraisals, appraisers can use one or more of three approaches to value—sales comparison, cost, and income. The sales comparison approach compares and contrasts the property under appraisal with recent offerings and sales of similar properties. The cost approach is based on an estimate of the value of the land plus what it would cost to replace or reproduce the improvements minus depreciation. The income approach is an estimate of what a prudent investor would pay based upon the net income the property produces. USPAP requires appraisers to consider which approaches to value are applicable and necessary to perform a credible appraisal and provide an opinion of the market value of a particular property. Appraisers must then reconcile the values produced by the different approaches they use to reach a value conclusion. The enterprises and FHA require that, at a minimum, appraisers use the sales comparison approach for all appraisals because it is considered the most applicable for estimating market value in typical mortgage transactions. Consistent with these policies, our review of valuation data from a mortgage technology company—representing about 20 percent of mortgage originations in 2010—indicated that appraisers used the sales comparison approach for nearly all (more than 99 percent) of the mortgages covered by these data. The cost approach, which was generally used in conjunction with the sales comparison approach, was used somewhat less often—in approximately two-thirds of the transactions in 2009 and 2010, according to these data. The income approach was rarely used. Some mortgage industry stakeholders have argued that wider use of the cost approach in particular could help mitigate what they viewed as a limitation of the sales comparison approach. They told us that relying solely on the sales comparison approach could lead to market values rising to unsustainable levels and that using the cost approach as a check on the sales comparison approach could help lenders and appraisers identify when this is happening. For example, they pointed to a growing gap between average market values and average replacement costs of properties as the housing bubble developed in the early to mid-2000s. 
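To make that argument concrete, the following is a minimal sketch, using purely hypothetical figures, of how a cost approach estimate could be compared with a sales comparison value as a rough reasonableness check. The property values, adjustments, and the simple averaging logic are illustrative assumptions for this example only; they are not drawn from USPAP, the enterprises' requirements, or the data discussed in this statement.

# Illustrative sketch only: hypothetical numbers for comparing a
# cost approach value with a sales comparison value.

def cost_approach(land_value, replacement_cost_new, depreciation):
    # Land value plus the depreciated cost of the improvements.
    return land_value + (replacement_cost_new - depreciation)

def sales_comparison(comparable_prices, net_adjustments):
    # Average of comparable sale prices after adjustments for
    # differences in size, condition, location, and similar factors.
    adjusted = [p + a for p, a in zip(comparable_prices, net_adjustments)]
    return sum(adjusted) / len(adjusted)

cost_value = cost_approach(land_value=60_000,
                           replacement_cost_new=210_000,
                           depreciation=30_000)             # 240,000
market_value = sales_comparison([295_000, 310_000, 300_000],
                                [5_000, -10_000, 0])        # 300,000
gap = (market_value - cost_value) / cost_value
print(f"Sales comparison: {market_value:,.0f}; "
      f"cost approach: {cost_value:,.0f}; gap: {gap:.0%}")

Under these assumed numbers, the sales comparison value exceeds the cost approach value by 25 percent; whether a persistent gap of that kind signals unsustainable price growth is precisely the point that the industry participants cited next dispute.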
However, other mortgage industry participants noted that a rigorous application of the cost approach might not generate values much different from those generated using the sales comparison approach. They indicated, for example, that components of the cost approach—such as land value or profit margins of real estate developers—could grow rapidly in housing markets where sales prices are increasing. The data we obtained did not allow us to analyze the differences between the values appraisers generated using the different approaches. Recently issued policies reinforce long-standing requirements and guidance designed to address conflicts of interest that may arise when direct or indirect personal interests keep appraisers from exercising their independent professional judgment. In order to prevent appraisers from being pressured, the federal banking regulators, the enterprises, FHA, and other agencies have regulations and policies governing the selection of and communications with appraisers and prohibiting coercion of appraisers. Examples of recently issued policies that address appraiser independence include the now-defunct Home Valuation Code of Conduct (HVCC), which took effect in May 2009; the enterprises’ new appraiser independence requirements that replaced HVCC in October 2010; provisions in the Dodd-Frank Act; and revised Interagency Appraisal and Evaluation Guidelines from the federal banking regulators that were issued in December 2010. Provisions of these and other policies address (1) prohibitions against the involvement of loan production staff in appraiser selection and supervision; (2) prohibitions against third parties with an interest in the mortgage transaction, such as real estate agents or mortgage brokers, selecting appraisers; (3) limits on communications with appraisers; and (4) prohibitions against coercive behaviors. According to mortgage industry participants, HVCC and other factors have contributed to changes in appraiser selection processes—in particular, to lenders’ more frequent use of AMCs to select appraisers. AMCs are third parties that, among other things, select appraisers for appraisal assignments on behalf of lenders. Some appraisal industry participants said that HVCC, which required additional layers of separation between loan production staff and appraisers for mortgages sold to the enterprises, led some lenders to outsource appraisal functions to AMCs because they thought using AMCs would allow them to easily demonstrate compliance with these requirements. In addition, lenders and other mortgage industry participants told us that market conditions, including an increase in the number of mortgages originated during the mid-2000s and lenders’ geographic expansion over the years, put pressure on lenders’ capacity to manage appraisers and led to their reliance on AMCs. Greater use of AMCs has raised questions about oversight of these firms and their impact on appraisal quality. Direct federal oversight of AMCs is limited. Federal banking regulators’ guidelines for lenders’ own appraisal functions list standards for appraiser selection, appraisal review, and reviewer qualifications. The guidelines also require lenders to establish processes to help ensure that these standards are met when lenders outsource appraisal functions to third parties, such as AMCs. Officials from the federal banking regulators told us that they reviewed lenders’ policies and controls for overseeing AMCs, including the due diligence performed when selecting AMCs.
However, they told us that they generally did not review an AMC’s operations directly unless they had serious concerns about it that the lender was unable to address. In addition, a number of states began regulating AMCs in 2009, but the regulatory requirements vary and provide somewhat differing levels of oversight, according to officials from several state appraiser regulatory boards. Some appraiser groups and other appraisal industry participants have expressed concern that existing oversight may not provide adequate assurance that AMCs are complying with industry standards. These participants suggested that the practices of some AMCs for selecting appraisers, reviewing appraisal reports, and establishing qualifications for appraisal reviewers—key areas addressed in federal guidelines for lenders’ appraisal functions—may have led to a decline in appraisal quality. For example, appraiser groups said that some AMCs selected appraisers based on who would accept the lowest fee and complete the appraisal report the fastest rather than on who was the most qualified, had the appropriate experience, and was familiar with the relevant neighborhood. AMC officials we spoke with said that they had processes that addressed these areas of concern—for example, using an automated system that identified the most qualified appraiser based on the requirements for the assignment, proximity to the subject property, and performance metrics such as timeliness and appraisal quality. While the impact of the increased use of AMCs on appraisal quality is unclear, Congress recognized the importance of additional AMC oversight in enacting the Dodd-Frank Act by requiring state appraiser regulatory boards to supervise AMCs. The Dodd-Frank Act requires the federal banking regulators, CFPB, and FHFA to establish minimum standards for states to apply in registering AMCs, including requirements that appraisals coordinated by an AMC comply with USPAP and be conducted independently and free from inappropriate influence and coercion. This rulemaking provides a potential avenue for reinforcing existing federal requirements for key functions that may impact appraisal quality, such as selecting appraisers, reviewing appraisals, and establishing qualifications for appraisal reviewers. Such reinforcement could help to provide greater assurance to lenders, the enterprises, and federal agencies of the quality of the appraisals provided by AMCs. To help ensure more consistent and effective oversight of the appraisal industry, we recommended in our July 2011 report (GAO-11-653) that the heads of the federal banking regulators, CFPB, and FHFA—as part of their joint rulemaking required under the Dodd-Frank Act—consider including criteria for the selection of appraisers for appraisal orders, review of completed appraisals, and qualifications for appraisal reviewers when developing minimum standards for state registration of AMCs. The federal banking regulators and FHFA agreed with or indicated that they would consider our recommendation but as of June 2012 had not issued a rule setting minimum standards for state registration of AMCs. ASC has been performing its monitoring role under Title XI, but several weaknesses have potentially limited its effectiveness. In particular, ASC has not fully developed appropriate policies and procedures for monitoring state appraiser regulatory agencies, the federal banking regulators, and the Appraisal Foundation. In addition, ASC faces potential challenges in implementing some Dodd-Frank Act provisions.
ASC has issued policy statements that provide guidance to states on complying with Title XI in areas such as the national registry of appraisers, license reciprocity (which enables an appraiser certified or licensed in one state to perform appraisals in other states), and programs for enforcing appraiser qualifications and standards. ASC primarily uses on-site reviews conducted by ASC staff to monitor states’ compliance with the policy statements. ASC’s routine compliance reviews examine each state every 2 years or annually if ASC determines that a state needs closer monitoring. These reviews are designed to encourage adherence to Title XI requirements by identifying any instances of noncompliance or “areas of concern” and recommending corrective actions. ASC conveys its findings and recommendations to states through written reports. In 2010, ASC reported 34 findings of noncompliance, the majority of which concerned weaknesses in state enforcement efforts, such as a lack of timeliness in resolving complaints about appraiser misconduct or wrongdoing. At the completion of each review, ASC executive staff and board members deliberate on the findings and place the state into one of three broad compliance categories: “in substantial compliance,” “not in substantial compliance,” and “not in compliance.” According to ASC, in substantial compliance applies when there are no issues of noncompliance or no violations of Title XI; not in substantial compliance applies when there are one or more issues of noncompliance or violations of Title XI that do not rise to the level of not in compliance; and not in compliance applies when “the number, seriousness, and/or repetitiveness of the Title XI violations warrant this finding.” We found that ASC had been using the three compliance categories in its reports to states and annual reports to Congress (which provide aggregate statistics on the number of states in each category). However, it had not included the definitions of the categories in these reports or in its compliance review manual or policy and procedures manual, and its definition of “not in compliance” was not clear or specific. As previously noted, the definition states only that the category is to be used “when the number, seriousness, and/or repetitiveness of the violations warrant this finding” and does not elaborate on how these factors are weighed or provide examples of situations that would meet this definition. These shortcomings are inconsistent with our internal control standards, which state that federal agencies should have appropriate policies and procedures for each of their activities. Without clear, disclosed definitions, ASC limits the transparency of the state compliance review process and the usefulness of information Congress receives to assess states’ implementation of Title XI. Further, by not incorporating the definitions into its compliance review and policy and procedures manuals, ASC increases the risk that board members and staff may not interpret and apply the compliance categories in a consistent manner. To address these shortcomings, we recommended in our January 2012 report that ASC clarify the definitions it uses to categorize states’ overall compliance with Title XI and include these definitions in ASC’s compliance review and policy and procedures manuals, compliance review reports to states, and annual reports to Congress. In June 2012, ASC officials told us that they had developed a revised system for rating states that included five compliance categories (ranging from excellent to poor), each with specific criteria.
They said that they would soon be publishing the compliance categories in the Federal Register to obtain public comments and would include the final categories in appropriate manuals and reports. In addition to this procedural weakness, ASC has functioned without regulations and enforcement tools that could be useful in promoting state compliance with Title XI. Prior to the Dodd-Frank Act, Title XI did not give ASC rulemaking authority and provided it with only one enforcement option—“derecognition” of a state’s appraiser regulatory program. This action would prohibit all licensed or certified appraisers from that state from performing appraisals in conjunction with federally related transactions. ASC has never derecognized a state, and ASC officials told us that using this sanction would have a devastating effect on the real estate markets and financial institutions within the state. The Dodd-Frank Act provides ASC with limited rulemaking authority and authorizes ASC to impose (unspecified) interim actions and suspensions against a state agency as an alternative to, or in advance of, the derecognition of the agency. As of June 2012, ASC had not implemented this new enforcement authority. ASC officials said that determining the interim actions and suspensions they would take against state agencies would be done through future rulemaking. Although Title XI charges ASC with monitoring the appraisal requirements of the federal banking regulators, ASC has not developed policies and procedures for carrying out this responsibility. While ASC’s policy manual provides detailed guidance on monitoring state appraiser regulatory programs, it does not mention any activities associated with monitoring the appraisal requirements of the federal banking regulators. Further, ASC officials acknowledged the absence of a formal monitoring process. The absence of policies and procedures specifying monitoring tasks and responsibilities limits accountability for this function and is inconsistent with federal internal control standards designed to help ensure effectiveness and efficiency in agency operations. According to ASC officials, ASC performs this monitoring function through informal means, primarily through its board members who are employed by the federal banking regulators. However, minutes from ASC’s monthly board meetings and ASC’s annual reports to Congress indicate that the monitoring activities of ASC as a whole have been limited. For example, our review of board-meeting minutes from 2003 through 2010 found no instances of the board discussing the appraisal requirements of the federal financial regulators. Additionally, evidence of this monitoring function in ASC’s annual reports is limited to a summary of any new appraisal requirements issued by the federal financial regulators and HUD during the preceding year. Stakeholder views differ as to how to interpret the Title XI requirement that ASC monitor the requirements established by the federal banking regulators with respect to appraisal standards. Specifically, some ASC board members told us that they understand their monitoring role as maintaining an awareness of the federal financial regulators’ appraisal requirements.
Further, one ASC board member told us that ASC’s monitoring of the federal financial regulators was more limited than its monitoring of states because (1) board members from the federal financial regulatory agencies are knowledgeable about the appraisal requirements of their agencies, (2) the federal regulators’ interagency process for developing appraisal guidelines (in place since 1994) has reduced the need for monitoring the consistency of guidelines across agencies, and (3) monitoring the states’ appraiser requirements requires in-depth review of state processes for licensing, certification, and enforcement. Other stakeholders believed that ASC’s monitoring role should be more expansive and noted that ASC’s annual reports did not provide substantive analysis or critique of federal appraisal requirements. (ASC adopted some recommendations from an earlier review of its operations, such as creating a Deputy Executive Director position and allowing states to respond to preliminary compliance review findings prior to the issuance of final reports.) However, appraisal industry stakeholders also noted that implementing a more expansive interpretation of ASC’s monitoring role would pose challenges. For example, existing ASC staff may not have the capacity to take on additional monitoring responsibilities. Even if ASC staff were able to independently analyze the federal regulators’ appraisal requirements, the analysis would be subject to review by the ASC board, which, because of its composition, is not independent from the agencies that ASC is charged with monitoring. To better define the scope of its monitoring role and improve the transparency of its activities, we recommended in our January 2012 report that ASC develop specific policies and procedures for monitoring the appraisal requirements of the federal banking regulators. In June 2012, ASC officials told us that they recognized the need for ASC to perform this monitoring function, were deliberating on ways to carry it out, and expected to have policies and procedures in place later in the year. As previously noted, the Appraisal Foundation is a private not-for-profit corporation that sponsors independent boards that set standards for appraisals and minimum qualification criteria for appraisers. ASC approves an annual grant proposal and provides monthly grant reimbursements to the Appraisal Foundation to support the Title XI-related activities of the foundation and its Appraisal Standards Board and Appraiser Qualifications Board. The reimbursements cover the foundation’s incurred costs for activities under the grant. From fiscal years 2000 through 2010, ASC provided the foundation over $11 million in grant reimbursements, or about 40 percent of ASC’s expenditures over that period. Although ASC monitors the foundation in several ways, ASC lacks specific policies and procedures for determining whether grant activities are related to Title XI. ASC’s policies and procedures manual does not address how ASC monitors the Appraisal Foundation. Instead, ASC uses monitoring procedures contained in a memorandum prepared by a former Executive Director. The memorandum describes how the Executive Director reviewed the foundation’s grant activities but does not provide criteria for deciding what is Title XI-related.
When we asked current ASC officials for the criteria they used, they indicated only that ASC staff “review submissions from the Foundation and supporting cost spreadsheets to determine that activities proposed in the annual grant request or the monthly reimbursement processes meet the requirements of Title XI.” They said that once staff determine whether or not a submission falls within these parameters, they make a recommendation to the ASC board. However, determinations about what activities are Title XI-related are not always clear-cut. For example, in 2003, the Executive Director at the time recommended that the foundation be reimbursed for certain legal expenses in connection with a complaint filed with the foundation’s ethics committee. However, the ASC board rejected the reimbursement request because the expenses “were not sufficiently Title XI-related.” ASC’s records do not indicate what criteria either the Executive Director or the ASC board used as a basis for their decisions or why they disagreed. Similarly, our review of ASC documents for more recent grants found no supporting explanations for decisions about whether grant activities were Title XI-related. One ASC board member said the board had a common understanding of what activities were eligible for grants but acknowledged that the basis for funding decisions could be better documented. As previously noted, our internal control standards state that federal agencies should have appropriate policies for each of their activities. Without policies that contain specific criteria, ASC increases the risk that its grant decisions will be inconsistent, limits the transparency of its decisions, and lacks assurance that it is complying with federal internal control standards. To address this limitation, we recommended that ASC develop specific criteria for assessing whether the grant activities of the Appraisal Foundation were related to Title XI and include these criteria in ASC’s policy and procedures manual. In June 2012, ASC officials told us that they had been developing these criteria and planned to finalize them by August 2012. The Dodd-Frank Act contains 14 provisions that give ASC a number of new responsibilities and authorities. Some of the tasks associated with these provisions are complex and challenging, especially for a small agency with limited resources. One of the more complex tasks for ASC is to establish a national appraisal complaint hotline and refer hotline complaints to appropriate governmental bodies for further action. Appraisal industry stakeholders we spoke with noted that creating and maintaining a hotline could be costly because it will likely require investments in staff and information technology to fully ensure that calls are properly received, screened, tracked, and referred. Stakeholders indicated that screening calls would be a critical and challenging job because frivolous complaints could overwhelm the system and identifying valid complaints would require knowledge of USPAP. Another complex task for ASC is providing grants to state appraiser regulatory agencies to support these agencies’ compliance with Title XI. Appraisal industry stakeholders cited challenges that ASC could face in designing the grant program and making related decisions.
Some noted the challenge of designing grant eligibility and award criteria that (1) do not reward states that have weak appraiser regulatory programs because they use appraisal-related fee revenues (from state appraiser licensing and examination fees, for example) for purposes other than appraiser oversight and (2) will not create incentives for states to use less of their own resources for regulation of appraisers. In addition, ASC officials said they were unsure whether a January 2012 increase in the national registry fee—from $25 to $40 per appraiser credential—would be adequate to fund the grants and oversee them, especially in light of recent declines in the number of appraisers. As of June 2012, ASC had not implemented either the national hotline or the state grant program but had completed some initial steps. For example, ASC officials told us that they had developed initial protocols for handling hotline complaints and had begun work on a complaint form, website, and call center. In addition, ASC is in the process of hiring a grants manager. Chairman Biggert, Ranking Member Gutierrez, and Members of the Subcommittee, this concludes my prepared statement. I am happy to respond to any questions you may have at this time. For further information on this testimony, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Steve Westley (Assistant Director), Don Brown, Marquita Campbell, Emily Chalmers, Anar Ladhani, Yola Lewis, Alexandra Martin-Arseneau, John McGrail, Erika Navarro, Carl Ramirez, Kelly Rubin, Jerry Sandau, Jennifer Schwartz, Andrew Stavisky, and Jocelyn Yin. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Real estate valuations, which encompass appraisals and other estimation methods, have come under increased scrutiny in the wake of the recent mortgage crisis. The Dodd-Frank Act codified several independence requirements for appraisers and requires federal regulators to set standards for registering AMCs. Additionally, the act expanded the role of ASC, which oversees the appraisal regulatory structure established by Title XI of FIRREA. The act also directed GAO to conduct two studies on real estate appraisals. This testimony discusses information from those studies, including (1) the use of different real estate valuation methods, (2) policies on appraiser conflict-of-interest and selection and views on their impact, and (3) ASC’s performance of its Title XI functions. To address these objectives, GAO analyzed government and industry data; reviewed academic and industry literature; examined policies, regulations, and professional standards; and interviewed industry participants and stakeholders. Data GAO obtained from Fannie Mae and Freddie Mac (the enterprises) and five of the largest mortgage lenders indicate that appraisals—which provide an estimate of market value at a point in time—are the most commonly used valuation method for first-lien residential mortgage originations. Other methods, such as broker price opinions and automated valuation models, are quicker and less costly but are viewed as less reliable. As a result, they generally are not used for most purchase and refinance mortgage originations. Although the enterprises and lenders GAO spoke with did not capture data on the prevalence of approaches used to perform appraisals, the sales comparison approach—in which the value is based on recent sales of similar properties—is required by the enterprises and the Federal Housing Administration. This approach is reportedly used in nearly all appraisals. Conflict-of-interest policies have changed appraiser selection processes and the appraisal industry more broadly, raising concerns about the oversight of appraisal management companies (AMC), which often manage appraisals for lenders. Recent policies, including provisions in the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), reinforce prior requirements and guidance that restrict who can select appraisers and prohibit coercion. In response to market changes and these requirements, some lenders have turned to AMCs. Greater use of AMCs has raised questions about oversight of these firms and their impact on appraisal quality. Federal regulators and the enterprises said they hold lenders responsible for ensuring that AMCs’ policies and practices meet their requirements but that they generally do not directly examine AMCs’ operations. Some industry participants voiced concerns that some AMCs may prioritize low costs and speed over quality and competence. The Dodd-Frank Act requires state appraiser licensing boards to supervise AMCs and requires the federal banking regulators, the Federal Housing Finance Agency, and the Bureau of Consumer Financial Protection to establish minimum standards for states to apply in registering them. Setting minimum standards that address key functions AMCs perform on behalf of lenders could provide greater assurance of the quality of the appraisals that AMCs provide. As of June 2012, federal regulators had not completed rulemaking to set state standards. 
The Appraisal Subcommittee (ASC) has been performing its monitoring role under Title XI of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), but several weaknesses have potentially limited its effectiveness. For example, ASC has not clearly defined the criteria it uses to assess states’ overall compliance with Title XI. In addition, Title XI charges ASC with monitoring the appraisal requirements of the federal banking regulators, but ASC has not defined the scope of this function—for example, by developing policies and procedures—and its monitoring activities have been limited. ASC also lacks specific policies for determining whether activities of the Appraisal Foundation (a private nonprofit organization that sets criteria for appraisals and appraisers) that are funded by ASC grants are Title XI-related. Not having appropriate policies and procedures is inconsistent with federal internal control standards that are designed to promote the effectiveness and efficiency of federal activities. GAO previously recommended that federal regulators consider key AMC functions in rulemaking to set minimum standards for registering these firms. The regulators agreed with or said they would consider this recommendation. GAO also recommended that ASC clarify the criteria it uses to assess states’ compliance with Title XI and develop specific policies and procedures for monitoring the federal banking regulators and the Appraisal Foundation. ASC is taking steps to implement these recommendations. See GAO-11-653 and GAO-12-147.
FAA, airports, and aircraft manufacturers have worked to meet the demands of continued growth in passenger and cargo traffic in different ways. FAA has worked to improve the capacity and efficiency of the national airspace system to accommodate a greater number and variety of aircraft by, for example, improving air traffic management systems and implementing domestic reduced vertical separation minimums. FAA is also currently working on the transformation of the nation’s current air traffic control system to the next generation air transportation system—a system intended to accommodate the expected growth in air traffic. However, the full implementation of the next generation air transportation system is years away. To accommodate increased traffic, airports have expanded the number of available runways and gates to service additional aircraft, and in some cases new airports have been built. However, airports cannot always accommodate increased air traffic by expanding their infrastructure for a variety of reasons, including the lack of physical space to build additional runways or terminals. Aircraft manufacturers have developed larger and more efficient aircraft to meet growing passenger and freight demand. For example, Boeing introduced the first wide-body aircraft, the 747-100, in 1969; it significantly changed the aviation market and was much larger than the aircraft then in operation. According to Airbus, the 747-100 had roughly two and a half times more seating capacity than the largest aircraft operating at the time. Since then, other wide-bodied aircraft have been introduced to accommodate the increasing emphasis and demand placed on international service. The Airbus A380 represents another generational change in aircraft size and seating capacity. Specifically, the A380 is much larger than other aircraft, with a wingspan of about 262 feet, a tail fin reaching almost 80 feet high, a maximum takeoff weight in excess of 1.2 million pounds, and seating between 555 and 853 passengers. In comparison, the largest commercial aircraft in use today, the Boeing 747-400, has a wingspan of 211 feet, a tail fin about 64 feet high, a maximum takeoff weight of 875,000 pounds, and can seat between 416 and 660 passengers. Although the A380 will be the first in the new category of large passenger aircraft, it will likely not be the last. In December 2006, Boeing announced that it received orders for its 747-8 passenger aircraft. The Boeing 747-8 is anticipated to have a wingspan of about 225 feet, a tail fin about 64 feet high, a maximum takeoff weight of about 970,000 pounds, and typical seating for 467 passengers in a 3-class configuration. These dimensions place this aircraft in the same category as the A380. (Figure 2 shows the dimensions of the Boeing 747-400, Airbus A380, and Boeing 747-8 aircraft.) Airbus anticipates there will be a continued demand for larger aircraft that can connect busy and congested hubs in the future. According to its analysis, Airbus estimated that new large passenger and freight aircraft would make up about 10 percent of the overall fleet from 2004 to 2023. In contrast, Boeing, while acknowledging demand for a small number of very large aircraft, projects greater demand for smaller aircraft, such as the Boeing 787, which can provide point-to-point service, especially in long-distance markets. The air carriers that have ordered the A380 plan to operate at airports throughout the world, including certain U.S. airports.
As a result, the A380 must comply with aviation standards set by individual countries around the world. The International Civil Aviation Organization (ICAO) is the international body that seeks to harmonize global aviation standards so that worldwide civil aviation can benefit from a seamless air transportation network. Its members or contracting states, including the United States, are not legally bound to act in accordance with the ICAO standards and recommended practices. Rather, contracting states decide whether to transform the standards and recommended practices into national laws or regulations. In some cases, contracting states deviate from the ICAO standards and recommended practices, or do not implement them at all. Although ICAO has no enforcement powers and only establishes standards and recommended practices, air carriers that use airports that do not comply with them may be subject to increased insurance costs. The A380 falls under ICAO’s design standards for the largest aircraft (Code F), which require at least 60-meter-wide runways (about 200 feet) and 25-meter-wide taxiways (about 82 feet). In addition, ICAO has also established varying in-flight, landing, and takeoff separation standards for the different classes of aircraft. In the United States, FAA, an agency of the Department of Transportation (DOT), is responsible for regulating the safety of civil aviation and also establishes the standards and recommendations for the design and development of civil airports. FAA’s role as a regulator is to foster aviation safety by overseeing manufacturers and operators to enforce full compliance with safety requirements. To this end, FAA must certify any new aircraft design before that aircraft can be registered in the U.S. for operations by domestic airlines. This design certification is the foundation for many other FAA approvals, including operational approvals. When domestic aircraft manufacturers request approval of a new aircraft design, FAA uses the type certification process to ensure that the design complies with applicable requirements or airworthiness standards. Type validation is the type certification process that FAA uses for foreign or imported products, such as the A380, to ensure that the design complies with applicable FAA standards. The A380 was validated by FAA and issued a type certificate in December 2006. Also, in March 2007, Airbus completed a series of airline route proving and airport compatibility flights, which were designed to demonstrate the A380’s ability to operate at airports around the world. As part of these flights, the A380 visited four U.S. airports: New York John F. Kennedy, Chicago O’Hare, Los Angeles, and Washington Dulles International Airports. FAA also establishes standards and recommendations for airport planning and design. Due to its size, the A380 is subject to FAA’s design standards for the largest aircraft (Airplane Design Group VI standards). To be in compliance with these design standards, airports are required to have 200-foot-wide runways, 100-foot-wide taxiways, and appropriate separation distances. Table 1 shows the wingspan criteria for the airplane design groups and examples of aircraft that fall into each category. These design standards group aircraft by wingspan ranges; aircraft within a given group can operate without limitations at airports built to that group’s standards. According to FAA standards, the A380 could operate at U.S. airports built to Design Group VI standards without the imposition of operating restrictions to the airport or aircraft.
However, most U.S. airports that anticipate receiving A380 service are not built to Design Group VI standards. When airports do not or cannot meet the required FAA design standards to accommodate certain aircraft, airport officials can apply for Modifications to Standards through FAA. This would allow certain aircraft to be operated at airports under certain conditions as long as the airport can provide an acceptable level of safety comparable to that of an airport meeting Design Group VI standards. The use of Modifications to Standards is a process to provide U.S. airports flexibility when the required design group standards cannot be met to accommodate certain operations, as long as an acceptable level of safety can be maintained. After reviewing the design specifications of the A380, FAA issued interim guidance in 2003 that allows the A380 to operate at airports with runways and taxiways that do not fully meet Design Group VI standards. Under the interim guidance, airports that want to avoid costly or impractical upgrades of their runway and taxiway systems to Design Group VI standards must obtain FAA approval of a request for Modifications to Standards before A380 operations can be approved. These modifications may include A380-specific operational restrictions or special operating procedures to ensure that existing non-standard infrastructure is providing an acceptable level of safety. The A380 will be the first of a new category of large passenger aircraft introduced into the national airspace system in the coming years. The size of the aircraft poses a number of potential safety challenges for airports. Most U.S. airports were not designed to receive aircraft the size of the A380, and therefore the width of their runways and taxiways does not meet FAA safety standards. As a result, airports expecting A380 service need to modify their infrastructure or impose operating restrictions on the A380 and other aircraft to assure that safety is maintained. In addition, research data suggest that the wake turbulence created by the A380 is stronger than that of any aircraft in use today and would require greater separation from other aircraft during landing and takeoff. Although the A380 is equipped with some safety enhancements, such as new interior and exterior materials designed to reduce flammability and an external taxiing camera system to enhance pilot vision on the ground, the A380 poses safety challenges for fire and rescue officials due to its larger size, upper deck, fuel capacity, and number of passengers. The fire and rescue officials at the airports we visited were confident in their ability to respond to an A380 incident, but almost all of them identified some equipment, personnel, or training needs that would improve their ability to respond to emergencies involving the A380. Similar concerns were raised for the Boeing 747 aircraft when it was introduced to the market, and these potential safety challenges would likely be present for other similarly sized aircraft introduced in the future. FAA, ICAO, Airbus, and airports have taken a number of steps to mitigate potential safety challenges posed by the A380. The A380 offers air carriers and airports several safety enhancements over existing aircraft. For example, it has a cockpit with the latest advanced displays and avionics, and is equipped with an external taxiing camera system to assist flight crews in keeping the aircraft in the center of taxiways when moving on the airfield.
The cockpit was also designed to be much lower to the ground than that of other large aircraft to provide the flight crew better visibility. Other technical advances include the aircraft’s new external and internal materials that are designed to reduce flammability. A new material called Glare, which is highly resistant to fatigue, is used in the external panels of the upper fuselage and delays fire from penetrating into the passenger cabin—about 15 minutes compared to about a minute for standard aircraft aluminum. In addition, thermal acoustic insulation blankets, designed to extend the time before an external fire penetrates the fuselage, will be used inside the A380. Combined, these materials could provide additional time for evacuation by delaying the entry of fire into the cabin. The interior materials used in the A380 will also have decreased flammability properties, and the aircraft will be equipped with enhanced fire and smoke detection systems. However, the size of the A380 also presents several potential safety challenges. These challenges include accommodating the A380 at airports that were not designed for aircraft of its size, ensuring that the turbulence caused by the A380 does not affect the flight of other aircraft, evacuating large numbers of passengers from the A380, and ensuring that airports have the necessary fire and rescue capabilities available. These issues would likely be present for other similarly sized aircraft that may be introduced in the future. FAA, ICAO, Airbus, and airports have taken several steps to mitigate these challenges. The size of the A380 presents a safety challenge because most U.S. airports were not built to accommodate such large aircraft. FAA’s design standards are intended to ensure the safety of the aircraft and passengers at the airport. For example, FAA’s Design Group VI standards, which are applicable for the largest aircraft, including the A380, require that airports have 200-foot-wide runways. According to FAA officials, this standard helps ensure that pilots can safely operate large aircraft like the A380. Although the design standards do not govern aircraft operations, aircraft operators must seek FAA’s approval for certain aircraft to use facilities and infrastructure that do not meet standards and demonstrate to FAA that an acceptable level of safety is maintained. A few airports, such as Dallas-Fort Worth, Denver, and Washington Dulles International Airports, meet some design standards for A380-sized aircraft; however, no U.S. airport is completely built to those standards. To address this issue, airports have made or are making infrastructure changes to safely accommodate the A380. In May 2006, we reported that 18 U.S. airports were making preparations to receive the A380 and estimated that it would cost about $927 million to upgrade their infrastructure. About 83 percent of the costs reported by airports were identified for runway or taxiway projects. Most projects widened existing runways or taxiways and, in some cases, relocated taxiways to increase separation. The remaining costs were for changes at gates, terminals, or support services. Although these changes to airport infrastructure were driven by the introduction of the A380, they will also benefit current aircraft and other new large aircraft that may be introduced in the future.
Further, officials at some airports told us that the economic benefits from having A380 service at their airport will outweigh the costs associated with the infrastructure changes needed to accommodate the aircraft. To safely accommodate the A380, many of the U.S. airports we visited that expect to receive this aircraft have requested Modifications to Standards from FAA. The use of Modifications to Standards is an established process to provide U.S. airports flexibility when the required design group standards cannot be met to accommodate certain operations as long as an acceptable level of safety can be maintained. For example, if the separation between a runway and a taxiway at an airport is less than the established standards, FAA can grant a Modification to Standards when federal funds are being used for a planned improvement to that runway or taxiway and FAA determines that the operation is safe. According to FAA, the use of Modifications to Standards at airports does not compromise safety. This process has been used by U.S. airports that do not fully meet the design standards for aircraft of certain sizes. However, FAA officials said that the way the process is being applied to the A380 is a seldom-used application, because the process generally is not used to limit the operations of a particular aircraft at an airport. Of the 18 U.S. airports we visited, 11 have applied for Modifications to Standards that would allow them to operate the A380. Of the remaining seven airports, officials indicated they were unsure whether such modifications would be needed and would decide whether to request Modifications to Standards after FAA decides whether an A380 can safely operate on a 150-foot-wide runway or whether a 200-foot-wide runway will be required. According to FAA officials, a decision on runway width is expected in late summer of 2007. Finally, the airports also anticipate implementing some type of operating restrictions in order to safely accommodate the A380. Specifically, all 18 U.S. airports we visited anticipated imposing some type of operating restrictions on the A380 or on other aircraft that operate around the A380. The anticipated operating restrictions would generally affect runway and taxiway use. For example, officials at San Francisco Airport plan to restrict certain aircraft from using sections of parallel taxiways when an A380 is taxiing to and from the terminal because the taxiways are not far enough apart to meet the standards for taxiway separation required to safely operate the A380. FAA officials noted, however, that FAA is still conducting an operational evaluation for the A380, and therefore has not determined what, if any, operational restrictions for the A380 will be required. Thus, airports’ planned operating restrictions are subject to change when FAA completes its operational evaluation, which is expected this summer. FAA officials said that FAA will perform an operational evaluation similar to the one used for the A380 for the Boeing 747-8 and other large aircraft when they enter service. The wake turbulence of the A380 and other large aircraft can create safety issues if appropriate wake turbulence separations are not applied. Wake turbulence is created behind aircraft, and its strength depends on the aircraft’s wingspan, weight, and speed. In general, the bigger the aircraft, the greater the wake created.
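As a rough illustration of how weight, speed, and wingspan combine, the sketch below applies a commonly used textbook approximation for the initial strength (circulation) of a wake vortex, gamma = 4W / (pi * rho * V * b), to the maximum takeoff weights and wingspans cited earlier for the A380 and the Boeing 747-400. The shared approach speed and the sea-level air density are assumptions added for this example, and the simple elliptic-loading formula ignores flap configuration, actual landing weights, and vortex decay; it only sketches the scaling and is not how FAA and ICAO derive separation standards, which rest on measured vortex behavior and operational data.

# Rough scaling sketch only. Initial wake-vortex circulation under the
# classic elliptic-loading approximation: gamma = 4 * W / (pi * rho * V * b).
# Weights and wingspans are the maximums cited in this statement; the
# shared approach speed and air density are assumed for illustration.
import math

RHO = 1.225          # kg/m^3, sea-level air density (assumed)
V_APPROACH = 72.0    # m/s, roughly 140 knots, assumed for both aircraft
LB_TO_N = 4.448      # newtons per pound of weight
FT_TO_M = 0.3048     # meters per foot

def initial_circulation(weight_lb, wingspan_ft):
    weight_n = weight_lb * LB_TO_N
    span_m = wingspan_ft * FT_TO_M
    return 4.0 * weight_n / (math.pi * RHO * V_APPROACH * span_m)

a380_gamma = initial_circulation(1_200_000, 262)   # maximum weight and span from the text
b747_gamma = initial_circulation(875_000, 211)
print(f"A380 ~{a380_gamma:.0f} m^2/s, 747-400 ~{b747_gamma:.0f} m^2/s, "
      f"ratio ~{a380_gamma / b747_gamma:.2f}")

Even this simplified scaling places the A380 at the top of the range; the interim separation guidance discussed next, however, reflects measured wake behavior rather than back-of-the-envelope estimates of this kind.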
Wake turbulence can affect following aircraft during landing, takeoff, and in flight. Figure 3 illustrates how wake turbulence is created by an aircraft and the direction it travels. FAA and ICAO have adopted standards for keeping aircraft separated from each other during landing, takeoff, and in flight to avoid the adverse effects of wake turbulence. ICAO and FAA have studied whether the A380 needs greater separation than current standards require and determined that the A380 produces stronger wake turbulence than any aircraft in use today. On the basis of these data, ICAO issued new guidance in October 2006 on the separation required between the A380 and other aircraft during landing, takeoff, and in flight. ICAO officials acknowledged that the guidance could be more conservative than the final standards, noting that the initial flight separation standard for the Boeing 747-400 aircraft was also set conservatively but later reduced. The separations for the A380 could be changed in the future on the basis of operational experience with the aircraft. However, while this guidance is in effect, there will be somewhat longer intervals for departures following an A380 than currently exist and greater distances between aircraft following an A380 during landings. Figure 4 illustrates the interim flight separation standards for the A380 compared to other heavy category aircraft, such as the Boeing 747-400 aircraft. Another potential safety challenge is the large number of passengers to evacuate from an A380 during an emergency. The A380's maximum seating configuration can accommodate up to 853 passengers—193 more than the maximum seating configuration of the Boeing 747-400. To obtain type certification, aircraft manufacturers must demonstrate that the aircraft can be evacuated within 90 seconds. In March 2006, Airbus conducted the emergency evacuation demonstration for the A380. During the demonstration, 853 passengers and 20 crew members were successfully evacuated from the aircraft within 78 seconds. Airbus officials credited the design of the A380 for the successful evacuation demonstration. A related concern of FAA officials, airport fire and rescue officials, and some experts with whom we spoke is how to handle the large numbers of people around the aircraft after evacuation is complete. In particular, some fire and rescue officials were concerned about their ability to control the crowd and how to treat injured people on-site before they are moved to nearby hospitals. To address these concerns, airport fire and rescue officials are reexamining their equipment needs and emergency plans for treating a greater number of passengers. FAA guidance states that an airport's emergency plans should, to the extent practical, provide for medical services, including transportation and medical assistance, for the maximum number of people that can be carried on the largest aircraft that an airport reasonably can be expected to serve. However, in most cases, airport fire and rescue officials said that they plan for reasonable worst-case scenarios in which about 50 percent of the passengers on the largest aircraft operated at the airport would be treated for injuries. The advent of the A380 also may introduce a number of new fire and rescue safety issues for airports. For example: The A380 can hold almost 82,000 gallons of fuel, compared to about 57,300 gallons carried by the Boeing 747-400.
While an A380 or a 747-400 may not be fueled to maximum capacity, the proportionally larger fuel load that could be aboard the A380 compared with that of a 747-400 means that fire fighters will need additional water and extinguishing agent to contain and extinguish a fire. Although the A380 will have Glare material, designed to increase the amount of time it takes before a fire can enter the cabin, it will not be installed on the underside of the aircraft where a fire caused by leaking fuel is most likely to occur, according to an FAA official. Thus, ensuring that airports have sufficient extinguishing agent is important. Airports may not have the necessary equipment to access the upper deck of the A380 for fire fighting or evacuation purposes. Most fire and rescue officials at the airports we visited indicated that they do not have the equipment to access the upper deck of the A380 for fire fighting or evacuation purposes. Although the height to the upper deck door of the A380 is essentially the same as that of the 747, according to an FAA official, the need to invest in such equipment now becomes more critical for the A380 because more passengers are seated on the upper deck of the A380. The A380 was designed with 16 evacuation slides, and the longest slide, on the upper deck, will extend out about 50 feet from the aircraft. This increased number of slides could improve passenger evacuation, but according to some fire and rescue officials we interviewed, the number and position of the A380's slides could also impede the fire and rescue vehicles' access to the aircraft and make it more difficult to suppress the fire. Several airport fire and rescue officials with whom we spoke were confident they could respond to an A380 incident with their current resources. However, most stated that they were evaluating personnel, equipment, and training needs to ensure that the airport was adequately prepared for the A380. Fire and rescue officials from several airports stated that the introduction of A380-sized aircraft will only increase their need for additional personnel and equipment. For example, officials from some airports told us that they are planning to add a vehicle with a penetrating nozzle with a higher reach that can inject fire extinguishing agent into the upper deck of the A380. Figure 5 shows a fire fighting vehicle with a penetrating nozzle fully extended and elevated to its maximum height of 50 feet. To help address these safety concerns, FAA has begun evaluating the need to update its airport fire and rescue safety guidance for new large aircraft, such as the A380. Officials from FAA's Technical Center said that the guidance needs to be updated to reflect the A380's vertical height, high numbers of passengers, second passenger deck, and increased fuel loads. FAA is also researching the need to increase the amount of water and extinguishing agent needed to respond to an A380 incident. In addition, FAA is studying the quantity of fire-suppressing agents needed to combat fires on new large aircraft and double-deck aircraft—taking into account the vertical dimension of the A380. However, FAA officials noted that most of the airports expecting to receive A380 flights currently exceed the vehicle and extinguishing agent requirements applicable to the aircraft and therefore would likely already meet new standards.
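The fuel capacities cited above give a rough sense of the scale involved. As an illustrative calculation only, and not an analysis from this report,

\frac{82{,}000 \text{ gallons}}{57{,}300 \text{ gallons}} \approx 1.43

that is, a fully fueled A380 could carry roughly 43 percent more fuel than a fully fueled 747-400. Actual fuel loads vary with route and payload, so this figure is an upper bound on the difference rather than an estimate of typical operations, but it helps explain why FAA is reexamining water and extinguishing agent quantities for A380-sized aircraft.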
FAA researchers are also helping to develop a penetrating nozzle on a 65-foot boom that would provide greater extension and a higher reach to inject fire extinguishing agent into the upper deck of the A380. The impact of the A380 on the capacity of U.S. airports is uncertain and would depend on multiple factors. Airport capacity is generally measured by the maximum number of takeoffs and landings that can occur within a given period of time. The A380 could increase passenger capacity at airports because it can carry more passengers than current aircraft, and fewer flights could be used to accommodate air traffic growth. However, potential operating restrictions and the increased flight separation requirements could adversely impact capacity by limiting the number of flights that airports can handle. Further, the effects of gate restrictions, such as the number of gates available for A380 use and restricted use of gates adjacent to the A380, and terminal congestion from the increased number of passengers will need to be evaluated and could cause delays to the A380 and other aircraft. The extent of disruptions and delays caused by possible operating restrictions, increased separation requirements, and gate restrictions would depend on the time of day, the number of A380 operations, and the volume of overall traffic. Many airport officials stated that as long as the number of A380 operations per day remains low, the impact of the A380 on airport capacity—even with operating restrictions, increased separation requirements, and gate restrictions—should not be significant; however, as the number of A380 operations increases, the potential for an adverse impact also grows. The A380 was created, in part, to help alleviate airport capacity constraints caused by the continued growth in passenger and cargo air traffic. Air traffic in the U.S. increased by 35 percent from 1991 to 2001. Despite the decline in passenger travel following the events of September 11, 2001, FAA forecasts this growth to continue—estimating that air traffic will triple over the next 20 years. The current and projected growth in air traffic will also include new classes of aircraft, such as the A380. This greater diversity of aircraft—in terms of size, speed, and operating requirements—will add to the demands placed on the national airspace system and airports. Historically, airlines have addressed increased passenger demand by simply adding more flights, and airports have done so by expanding infrastructure. However, these are not viable options when airport runway infrastructure cannot be expanded and the volume of landings and departures at an airport exceeds the limits for efficient operation. For example, in August 2006, FAA proposed a rule to limit the number of flights at New York's LaGuardia Airport to reduce the level of congestion and delays. To offset the limit on flights, the rule encourages the use of larger aircraft at the airport to accommodate increased passenger demand. By using larger aircraft, the airport could accommodate more passengers with fewer or with the existing number of daily flights. Similarly, London's Heathrow airport plans to increase its passenger capacity without increasing the number of daily flights; it expects as many as one of every 10 flights to be an A380 by 2020. According to Airbus, the A380 will help alleviate capacity constraints by accommodating more passengers and freight on each flight than any other aircraft in use today.
Airbus officials estimate that the A380 can carry at least 35 percent more passengers, and the A380 freighter will carry 50 percent more cargo volume per flight, than other aircraft currently in use. In addition, the A380 can fly up to 8,000 nautical miles non-stop, enabling airlines to carry more passengers for greater distances than the current largest aircraft. Thus, the A380 could transport more people or freight greater distances with the same number—and possibly fewer—aircraft than are used currently. At congested airports, when A380 aircraft are used, airlines could meet anticipated growth in air travel without having to schedule additional flights. In addition to alleviating capacity constraints, Airbus and airport officials told us that the potentially greater number of passengers on each A380 compared to currently used aircraft could translate into economic benefits for the airports and local communities that would receive them. Specifically, airport expansion to accommodate anticipated growth in air travel, including the larger volume of passengers that the A380 could bring to an airport, could contribute to an area's economic growth. According to Airbus and some airport officials, if airports receive more passengers, they will benefit from greater parking revenues, passenger facility charges, retail and restaurant sales, and other services. In addition, if A380 service increases the number of passengers flowing in and out of the airport, that increase could translate into more job opportunities at the airport and in the community. Studies have indicated that airport activity and expansion projects can generate economic benefits for local economies, both directly and indirectly, in the form of additional jobs and increased salaries and wages. Therefore, the economic impact of A380 service on local communities near airports could be substantial, but it remains uncertain because the degree to which passenger volume would increase is not known. Furthermore, any economic benefits realized by airports and local communities as a result of airport improvements to enhance capacity, including accommodating A380 service, may represent transfers of economic activity from one airport or community to another. Airports' planned operating restrictions and separation requirements resulting from A380 ground and flight operations, as well as reduced gate utilization and flexibility, could offset some of the capacity gains anticipated as a result of the aircraft at U.S. airports. Potential operating restrictions and the increased separation requirements imposed to ensure the safety of the A380 and other aircraft at airports and during flight could result in a reduction in the number of flights that airports can accommodate. Furthermore, gate availability, restricted use of gates adjacent to A380 gates, and potential congestion issues could reduce gate utilization and flexibility at some airports—which could also lead to fewer flights at an airport. According to most of the airport officials and experts we interviewed, the extent to which operating restrictions, increased separation requirements, and gate utilization would impact capacity would depend on the volume of A380 traffic, the time of day, and the volume of overall air traffic. Most U.S. airports we visited that expect to receive the A380 are not designed for aircraft of this size and, therefore, may need to implement operating restrictions to safely accommodate the A380.
These restrictions can come in many forms—from restricting the A380 to certain runways and taxiways to stopping the movement of other aircraft when the A380 is in close proximity. In addition, some airports have designated specific routes for the A380 to use when landing and taxiing. These specific routes are needed because the wingspan of the A380 prevents the aircraft from passing various objects on the airfield, such as buildings, without violating the spacing requirements established by FAA. Therefore, airports expecting large aircraft service like the A380 will have to evaluate taxi routes to ensure required distances from other objects are maintained—which is a normal procedure for airports that receive larger aircraft. The effect of these operating restrictions has not been determined, but a potential impact is that airports may not be able to handle as many landings and departures in a given time period. For example, at one airport, airport officials said landings and departures could not be performed on one runway while an A380 is taxiing to or from the runway for about two miles on the adjacent taxiway. According to the air traffic controllers, this would preclude use of that runway for about three minutes. Even delays of a few minutes at an airport could increase the operating costs of air carriers. For example, officials from FAA's Technical Center estimated that one minute of delay would cost an air carrier at San Francisco airport about $57, or about $3,400 per hour. Similarly, the A380 may need to follow a designated route to and from the runway—and not necessarily the most efficient route—potentially delaying other aircraft that may need to wait for the A380 to complete its maneuvers. As a result, fewer aircraft could be able to access runways to land and depart in a given period. Most experts and air traffic controllers said the cumulative effect of these restrictions could reduce the number of flights at a busy airport because delays exacerbate airport congestion and make the job of managing air traffic more difficult. In the long term, airports could work with airlines to schedule A380 aircraft during off-peak times to lessen this effect. However, airlines may be reluctant to schedule these flights during off-peak hours because doing so might conflict with the time slots for the international flights on which A380s will likely be used. Regardless, even if schedules were adjusted to account for the operating restrictions, the additional time associated with the restrictions could result in the airport being unable to accommodate as many flights as it could without the A380 operating at the airport. According to many airport officials and aviation experts with whom we spoke, the extent of disruptions and delays caused by the operating restrictions would depend on the time of day, the number of A380 operations, and the volume of overall traffic. Many airport officials and experts we interviewed stated that as long as the number of A380 flights per day remains low, the impact of the operating restrictions should not be significant; however, as the number of A380 flights increases, the potential impact would also grow.
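The figures above can be combined into a simple, illustrative cost calculation. The sketch below is our illustration only; the per-minute cost and the approximate runway closure time come from the discussion above, while the number of affected operations per day is a hypothetical assumption rather than an FAA or airport estimate.

# Illustrative sketch: cost of taxi-related runway closures at one airport.
# The $57-per-minute figure and the roughly three-minute closure are cited in
# the discussion above; the count of affected operations is hypothetical.
DELAY_COST_PER_MINUTE = 57           # dollars per minute of delay (FAA Technical Center estimate)
RUNWAY_CLOSURE_MINUTES = 3           # approximate closure while an A380 taxis past
AFFECTED_OPERATIONS_PER_DAY = 10     # hypothetical value, for illustration only

cost_per_event = DELAY_COST_PER_MINUTE * RUNWAY_CLOSURE_MINUTES
daily_cost = cost_per_event * AFFECTED_OPERATIONS_PER_DAY
annual_cost = daily_cost * 365

print(f"Cost per affected operation: ${cost_per_event:,}")   # $171
print(f"Illustrative daily cost:     ${daily_cost:,}")       # $1,710
print(f"Illustrative annual cost:    ${annual_cost:,}")      # $624,150

Under these assumptions, even a modest number of affected operations per day would impose costs in the hundreds of thousands of dollars per year on air carriers, which is consistent with the concern that small delays accumulate at busy airports.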
The increased separation requirements for the A380 could adversely impact airspace and airport capacity. Under ICAO's current guidance, separation distances are based on the size of the aircraft following an A380, with lighter aircraft requiring a greater separation. To illustrate the increased separation requirements: on approach for landing, a heavy category aircraft, such as a 747-400, trailing an A380 must maintain a separation of 6 nautical miles. In comparison, a heavy aircraft trailing another heavy aircraft needs to be separated by 4 nautical miles. The cumulative effect of this extra separation could adversely impact airspace capacity by reducing the number of flights that could be accommodated in the airspace during a given time frame, according to most of the experts we interviewed. In addition, the additional separation between the A380 and other aircraft during takeoff and landing can reduce the number of arrivals and departures at an airport, which could also negatively impact airport capacity. Airbus officials, however, noted that such reductions in the number of arrivals and departures will be countered by the potential increase in the number of passengers per A380 flight—that is, the number of airplane operations may decrease, but the number of passengers arriving and departing from the airport may increase. Most of the experts we interviewed generally agreed that the increased flight separations required for the A380 could have a significant impact on airport capacity, but noted that the magnitude of the impact would depend on the timing of flights and the volume of A380 traffic. Most airport officials at the airports we visited indicated that they expected few A380 flights and, therefore, did not anticipate that the additional separation or ground traffic issues would have a significant impact. FAA's analysis of capacity at a few airports expecting to receive the A380 supports these views. For example, using ICAO's current separation standards—which increase separation by the size of the aircraft following an A380—FAA projected that A380 operations at the San Francisco airport in 2015 would add no increase in delays given the few A380s expected. However, given the larger number of expected A380s at New York's JFK airport, A380 operations would increase the total annual delay about 2 percent in 2015 over the expected total annual delay without A380 service. In addition, FAA projected that as the number of A380 flights increases by 2025, an increase of about 1 percent in the total annual delay can be expected at San Francisco airport and almost 2 percent at New York's JFK airport over the expected hours of total annual delay without A380 service. The projected cost to airlines in 2025 for A380-related delays would be $11.6 million at San Francisco airport and $59.2 million at JFK airport. According to Airbus officials, however, the analysis does not reflect potential cost savings to airlines due to the reduction in the number of arrivals and departures and, as previously noted, the potential increase in the number of passengers per A380 flight. Without an integrated analysis that includes passenger throughput, we are unable to determine the net effect.
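To give a rough sense of how the added separation translates into runway arrival rates, the sketch below converts the 6- and 4-nautical-mile separations described above into arrivals per hour. It is our illustration only: the 140-knot final approach speed is an assumed value, not a figure from this report, and the calculation assumes every arriving pair requires the stated spacing, which overstates the effect when A380 flights are infrequent.

# Illustrative sketch: single-runway arrival rate under different trailing separations.
# Separation distances are from the ICAO guidance described above; the approach
# speed is a hypothetical assumption used only for illustration.
APPROACH_SPEED_KNOTS = 140  # assumed final approach speed for the trailing aircraft

def arrivals_per_hour(separation_nm: float) -> float:
    """Maximum arrivals per hour if consecutive aircraft keep the given spacing."""
    hours_between_arrivals = separation_nm / APPROACH_SPEED_KNOTS
    return 1 / hours_between_arrivals

heavy_behind_heavy = arrivals_per_hour(4.0)  # about 35 arrivals per hour
heavy_behind_a380 = arrivals_per_hour(6.0)   # about 23 arrivals per hour

print(f"Heavy aircraft behind heavy aircraft: {heavy_behind_heavy:.0f} arrivals per hour")
print(f"Heavy aircraft behind an A380:        {heavy_behind_a380:.0f} arrivals per hour")

Under these assumptions, a runway stream made up entirely of heavy aircraft following A380s could accept roughly one-third fewer arrivals per hour than one made up of heavy aircraft following other heavy aircraft; in practice the effect is diluted in proportion to how few aircraft pairs actually involve an A380.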
The size of the A380 may also impact gate utilization in several ways. First, the A380 will need to use gates with at least two passenger loading bridges. The A380—similar to the 747-400—will be limited to using specific gates because not all gates have two bridges. Similarly, many terminal areas at U.S. airports where traffic bottlenecks and congestion are common will not have the necessary clearances for an A380 to operate on taxilanes between or beside other aircraft (see fig. 6). Thus, the A380 will be limited to certain gates. Second, the size of the A380 could restrict the size of the aircraft at the adjacent gate, or close the gate entirely. Third, loading and unloading passengers and baggage on an A380 could take longer because of the increased number of passengers on the aircraft. As a result, the A380 could tie up a gate longer than other aircraft, reducing the number of aircraft that could be served by the gate in a given period. Most of the experts with whom we spoke said these gate issues can reduce flexibility in airport operations and lead to delays. However, Airbus officials noted that the interior cabin design of the A380 and the use of two bridges should allow turnaround times of about 90 minutes—which is similar to the turnaround time of the 747-400. The increased passenger load carried by an A380 could strain current airport terminal facilities and operations, such as check-in, baggage claim, and customs and immigration services. For example, most experts we interviewed said that a surge in passengers created by an A380 going through airport check-in procedures could not only delay the A380 passengers but also passengers of other flights. In addition, the amount of baggage from an A380 flight to load or unload could lead to delays for passengers and other aircraft waiting at the gate. One expert noted that the delays caused by the new security procedures introduced in the summer of 2006—which resulted in an increase in checked baggage for a period of time—illustrate how surges in the amount of baggage loaded and unloaded can lead to delays and congestion. However, airport officials generally had no concerns with the A380's impact on airport terminal facilities and operations. Additionally, a few experts told us that the A380's incremental increase in passengers and baggage over that of a 747-400 would have little impact on terminal operations, especially at airports that will only receive a few A380 flights per day. As mentioned earlier, the next generation air transportation system is being designed to accommodate as much as three times the current air traffic, including the introduction of new large aircraft such as the A380. The planning underway involves so-called "curb-to-curb" initiatives that are designed, in part, to address the potential capacity and gate disruption issues discussed above. Since the planning and implementation phases of the next generation system remain in the early stages, however, the extent to which the initiatives will effectively mitigate those potential issues is currently unclear. Selected foreign airports we visited have taken different approaches to prepare for the introduction of the A380. These differences reflect the age and the expected level of A380 traffic at the airports—and, in some cases, the anticipated economic benefits of the A380 flights. The different approaches include adopting alternative airport design standards to accommodate new large aircraft, making significant investments in existing infrastructure, and designing airports that allow for new large aircraft. Officials from the foreign airports we visited said that, by implementing these approaches, their airports will not have to impose operating restrictions on the A380 to the extent that U.S. airports will; they therefore do not anticipate that the introduction of the A380 will result in delays or disruptions at their airports, despite higher expected levels of A380 traffic than at most U.S. airports.
The A380 Airport Compatibility Group (AACG), which includes four European aviation authorities, agreed to adopt adaptations of the ICAO standards for A380 operations at existing airports that do not currently meet the requirements. For example, ICAO standards require runway width to be no less than 60 meters (about 200 feet) and taxiway width to be no less than 25 meters (about 82 feet), but the AACG decided widths of 45 meters (about 150 feet) for runways and 23 meters (about 75 feet) for taxiways would be adequate to safely operate the aircraft. Officials of European civil aviation authorities said the AACG decision was based on runway-to-taxiway centerline deviation studies that have found that large aircraft do not deviate significantly from the centerline. In addition, the AACG decision was influenced by the anticipation that the A380 would be certified by the European Aviation Safety Agency (EASA) to operate on 45-meter runways—which occurred in December 2006. In contrast, the FAA type certificate does not include approval to operate on 150-foot-wide runways, and evaluations of these operations have not been completed. According to FAA, the decision about runway width is an operational concern, rather than a certification issue. FAA is currently evaluating the use of narrower runways (less than 200 feet). FAA expects to complete its evaluations and issue a decision in summer 2007. Like most U.S. airports, the older foreign airports we visited were not designed to accommodate aircraft as large as the A380. However, unlike the U.S. airports, these foreign airports made significant investments in infrastructure changes and improvements in anticipation of future growth and the need to modernize, which included accommodating new large aircraft such as the A380. For example: Airport officials at London Heathrow airport indicated that about $885 million of their investments would be related to accommodating the A380. Heathrow's investments related to the A380 included widening and strengthening the shoulders of its two runways and upgrading runway lighting, demolition and redevelopment of a portion of an existing terminal to add four A380 gates and allow more space for the aircraft, and development of a new terminal to provide five A380 gates by 2008 and 14 by 2011. At the Paris Charles de Gaulle airport, about $132 million is being spent to prepare for the A380. The investment includes widening and strengthening two runways at the airport and building a new satellite terminal complex specifically to accommodate the A380. Initially, nine gates with upper deck access and two remote parking positions are available, but airport officials expect the number of A380 gates to increase to about 30 by 2018. At the Beijing Capital airport, A380-related improvements have been included in the $3 billion renovation projects—particularly to prepare for the 2008 Olympic Games—that include building a new terminal to handle the anticipated increase in future demand, a new 3,800-meter-long, 60-meter-wide runway to accommodate the A380, new facilities and cargo areas, and additional landing areas. At the Amsterdam Schiphol airport, a new 60-meter-wide, 3,800-meter-long runway and associated taxiways that meet international standards were built at a cost of over $440 million, and the terminal was expanded at a cost of over $213 million, to expand capacity and maintain the airport's competitive position as an international hub.
The new, longer runway and terminal expansion projects were initiated to enhance the overall capacity of the airport and to accommodate new large aircraft, such as the A380. The terminal will have four gates ready for the A380 in 2007. In contrast, the 18 U.S. airports expecting to receive the A380 plan to invest, in total, about $927 million on A380 infrastructure changes—which is only slightly more than the investments being made at Heathrow. The most a single U.S. airport is investing in infrastructure changes to accommodate the A380 is $151 million. The level of planned investments reflects the expected level of A380 traffic. Specifically, the foreign airports we visited are expecting more A380 traffic, in part, because they will serve as hub airports for international travel or serve as hubs for airlines that have purchased the A380. For example, JFK expects about 16 A380 arrivals and departures per day in 2015—possibly the most daily A380 flights at any U.S. airport. However, Heathrow airport officials expect that by 2020, one of every 10 aircraft arriving and departing will be an A380, or about 130 arrivals and departures per day. Similarly, officials at the Paris Charles de Gaulle airport estimate that at least 10 percent of all passengers arriving at the airport will be aboard an A380 by 2020. In addition to the level of investment, U.S. and foreign airports differ in the type of investments. Foreign airports, in particular European airports, are investing more in terminal and gate improvements to accommodate the A380 than U.S. airports. For example, London Heathrow, Paris Charles de Gaulle, and Amsterdam Schiphol airports have undertaken major terminal and gate improvement projects to accommodate the A380. In contrast, the majority of investments reported by U.S. airports (83 percent) were for runway and taxiway projects to accommodate the A380. This difference likely reflects the fact that the Asian airports we visited meet ICAO standards, including runway and taxiway width, for new large aircraft, such as the A380, and that the AACG determined that European airports could use narrower runway and taxiway widths for the A380, which negated the need to widen the runways or taxiways. Seven of the eight Asian and Canadian airports we visited were designed for future expansion or were built to allow new large aircraft, such as the A380. Five airports—Singapore Changi, Hong Kong, Tokyo Narita, Montréal Trudeau, and Toronto Pearson—were not designed specifically for the A380, but rather were built to accommodate the arrival of new large aircraft in the future and either complied with or needed only minimal modifications to comply with international standards applicable to new large aircraft. For example, at the Singapore Changi and Toronto Pearson airports, the runways were wide enough to accommodate the A380, but the shoulders needed to be modified to comply with ICAO requirements. Taken as a whole, these airports will have to impose operating restrictions on the A380 in only a few instances, and not to the extent that U.S. airports will. Two Asian airports, in Bangkok, Thailand, and Guangzhou, China, were built in compliance with the international standards for new large aircraft. According to airport officials, these two airports were built because of the economic activity they were expected to generate for their region and their countries.
Moreover, these officials stated that, to remain competitive, the airports had to be able to receive new large aircraft, in particular the A380, because it represents the next generation of aircraft. Because these two Asian airports in Bangkok and Guangzhou were built to comply with international standards for new large aircraft, they will not need to restrict A380 operations or the movement of other aircraft as A380s move around the airfields to and from terminals. Figure 7 shows a picture of the Baiyun International Airport in Guangzhou, China. In comparison, most of the 18 U.S. airports expecting to receive the A380 and the three European airports we visited were not built to comply with international standards for new large aircraft, such as the A380. As a result, officials from the U.S. airports told us that they anticipated imposing operating restrictions on the A380 or aircraft operating in proximity to the A380 to ensure safety. As discussed previously, European airports have adopted alternative standards, and only one of the European airports we visited plans to impose some operating restrictions. Many large airports in the U.S. and around the world are facing capacity constraints as passenger and cargo traffic continues to grow. The A380 was designed, in part, to help alleviate these capacity constraints. However, the impact of its arrival on airport capacity in the United States is uncertain. The exact impact will likely vary by geographic region of the U.S. and will depend on a range of factors, including the volume of A380 traffic, the timing of A380 operations, and the operating restrictions imposed on the aircraft and those aircraft operating around it. Although many U.S. airports are facing capacity constraints, the decisions by airport officials to make the necessary infrastructure changes to accommodate the aircraft were not solely driven by potential capacity gains. Rather, officials at some airports told us that they want to receive the A380 to help their airport's competitive position. They are expecting that the economic benefits from having A380 service at their airport will outweigh the costs associated with the infrastructure changes needed to accommodate the aircraft. While the impact of operating restrictions on airport capacity is not clear, FAA and industry experts generally agreed that the A380 will add another element of complexity to airport operations and airspace management. This could limit A380 operations to designated gates, taxiways, or runways at many airports and will reduce air traffic controllers' flexibility in making routing decisions for the A380 and other aircraft. Further exacerbating this situation is the current and projected growth in air traffic as well as the rollout of new classes of aircraft that could have their own operating and infrastructure requirements. Optimizing the use of airspace and airport facilities to accommodate the growth in air traffic and new classes of aircraft, including the A380, will be challenging. To address some of these challenges, airports expecting to receive the A380 are making infrastructure changes to accommodate it that involve retrofitting or expanding existing infrastructure, such as runways and taxiways. As we have previously reported, the airports estimated that these changes will be costly and were driven by the introduction of the A380, but they will also benefit current aircraft and other new large aircraft that may be introduced in the future.
If recent history is a guide, the evolution of aircraft will not stop with the A380 as evident with Boeing’s decision to go forward with its own new large aircraft, the 747-8. Thus, to help mitigate future difficulties, federal policymakers, airport officials, and other stakeholders are considering the introduction of the A380 and other new classes of aircraft as they move forward with airport development throughout the nation as well as the development of the next generation air transportation system. We provided a draft of this report to the Department of Transportation for review and comment. FAA officials generally agreed with the report’s findings. FAA officials also provided technical clarifications via e-mail, which were incorporated as appropriate. In addition, we provided a draft of this report to Airbus North America Holdings, Inc. (Airbus) for review and comment. Airbus provided written comments, which are reprinted in appendix III. In its letter, Airbus states that we correctly identified potential safety and capacity issues associated with the introduction of the A380. However, regarding our discussion on capacity issues, Airbus expresses concern that we overemphasized the operational constraints imposed on or by the A380. We interviewed a range of aviation experts and examined a variety of studies and analyses to understand any potential impact, both positive and negative, the A380 could have on capacity. Although the report does describe the potential operational constraints associated with the introduction of the A380, we believe the report provides a balanced discussion regarding the potential benefits that new large aircraft, such as the A380, could provide to help alleviate capacity constrained U.S. airports as well as the potential capacity reduction due to operating restrictions, increased separation, and gate utilization issues associated with A380 operations. Airbus also suggests that our capacity discussion should include information on passenger throughput, noting that we use one definition of capacity—that is, the maximum number of aircraft takeoffs and landings (aircraft movements) that can occur during a given period. We acknowledge that we defined capacity by aircraft movements and agree that passenger throughput is another measure of capacity. We chose to use aircraft movements as the definition of capacity for this report because FAA uses the maximum number of aircraft movements to express airport capacity. The report includes information on the potential impact of the A380 on passenger throughput—specifically, that the A380 could accommodate more passengers and freight on each flight than any other aircraft in use today. However, we added additional information on the A380’s potential impact on passenger throughput on the basis of Airbus’ comments. Airbus also provided technical comments, which were incorporated, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the report date. At that time, we will send copies to appropriate congressional committees, the Secretary of Transportation, and representatives of Airbus. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512- 2834 or by e-mail at dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Individuals making key contributions to this report were Nikki Clowers, Assistant Director; Vashun Cole; and Frank Taliaferro. We were asked to review and identify the impact of the Airbus A380 on U.S. airports. In May 2006, we issued a report that estimated the costs of infrastructure changes that U.S. airports plan to make to accommodate the A380. This report discusses (1) the safety issues associated with the introduction of the A380, and how U.S. airports are addressing them, (2) the potential impact of A380 operations on the capacity of U.S. airports, and (3) how selected foreign airports are addressing these safety and capacity issues. To address these issues, we reviewed published studies on operational issues related to the A380 and on aircraft fire and rescue equipment and tactics, A380 emergency evacuations, pavement strength issues for the A380's weight, and other safety-related issues. We also reviewed FAA's design standards and attended FAA briefings on its type validation and type certification processes. For our May 2006 report, we analyzed the A380-related requests for Modifications to Standards made by the U.S. airports we visited and summarized FAA decisions regarding the infrastructure and operational impacts to the airports. We also discussed—with FAA and airport officials—the effect that Modifications to Standards would have on airports' infrastructure. For this report, we discussed with FAA officials the safety considerations of Modifications to Standards, but did not analyze the extent to which Modifications to Standards are used at all U.S. airports. We also examined the FAA William J. Hughes Technical Center's (Technical Center) analysis of the impact of new large aircraft operations at Memphis International, New York John F. Kennedy International, and San Francisco International Airports. We analyzed the Technical Center's methodology in preparing these analyses and the results of these analyses and met with FAA officials to discuss the analyses. We determined that the Technical Center's analyses were sufficiently reliable for our purposes. We also examined the International Civil Aviation Organization's (ICAO) guidance and standards for airport design and aircraft separation. We interviewed officials from FAA and representatives from ICAO, Airbus, and aviation trade associations to discuss safety and capacity issues associated with the arrival of the A380. In addition, we conducted semi-structured interviews with 17 aviation experts to obtain their views on the impact of the A380 on airport operations and capacity and potential safety issues. We contracted with the National Academy of Sciences (NAS) to identify individuals who are experts in the fields of safety, capacity, infrastructure, and certification. We developed an interview guide that asked for the experts' views on a series of questions on safety and capacity issues related to the introduction of the A380 and pretested this guide with two experts to ensure that the questions sufficiently addressed the issues and were not biased, misleading, or confusing. We incorporated feedback from our pretests into the interview guide and then used the guide for our interviews. After conducting the interviews, we analyzed the experts' responses to our questions to identify major themes. The aviation experts we interviewed were not selected randomly, and their views and opinions cannot be generalized to the larger population of experts and aviation officials. See table 2 for the aviation experts we interviewed.
We conducted site visits to the 18 U.S. airports that are making infrastructure improvements to accommodate the A380. (Table 3 shows the U.S. airports that we visited.) We conducted these site visits from September 2005 to February 2006. During these site visits, we interviewed airport officials, including airport management, air traffic controllers, and fire and rescue personnel, and toured the airport facilities to identify safety and capacity challenges associated with the arrival of the A380 at their airport and efforts they were undertaking to mitigate these challenges. To ensure the accuracy of information summarized in the report, we verified the information we collected with officials from the 18 airports in the fall of 2006. We also conducted site visits to 11 Asian, Canadian, and European airports that will be receiving the A380. (Table 4 shows the foreign airports we visited.) We conducted these site visits from February 2006 to November 2006. We selected these high-capacity airports based on the expected level of A380 operations or the presence of airlines that have ordered the A380 aircraft and intend to use these airports as hubs for their operations. During these site visits, we interviewed airport officials, including airport management, air traffic controllers, and fire and rescue personnel, and toured the airport facilities to identify the safety and capacity challenges associated with the arrival of the A380 and the efforts being undertaken to mitigate these challenges. We summarized the information obtained for this report and sought verification from the 11 airports in the winter of 2006. We performed our work from May 2005 to March 2007 in accordance with generally accepted government auditing standards. To determine how foreign airports were addressing the potential safety and capacity issues associated with the introduction of the A380, we visited 11 foreign airports. The following are summaries of the information the airports provided on their operations and their A380 plans. Bangkok Suvarnabhumi International Airport, currently the operating hub for Thai Airways, opened in 2006 and was built as an ICAO Code F airport that could handle 45 million passengers and three million tons of cargo per year at a cost of about $3.9 billion. The airport is one of the largest in Asia, with a terminal slightly larger than that of Hong Kong airport. The final phase of construction, expected to begin in about 2015, will add a fourth runway and another terminal to increase the capacity to 100 million passengers per year. A maintenance facility has also been built at the airport that can house up to three A380s in one hangar at the same time. Officials of the Thai Department of Civil Aviation do not expect that the A380 will cause delays at their airport. A380 flight operations will begin with Qantas and United Arab Emirates airlines service in early 2008. Thai Airways ordered six A380 aircraft and will begin service in 2009 or 2010 after it takes its first delivery from Airbus. Table 5 provides A380-related issues at Suvarnabhumi airport. [Table 5 summary: the first A380 service is expected in early 2008, with about 12 A380 flights per day anticipated initially and in the fifth year (possibly more); expected A380 operators include Singapore Airlines, Air France, and Qantas (2008), Emirates and Lufthansa (2008 or 2009), and Thai Airways (2009 or 2010); the airfield is ICAO Code F compliant; five A380 gates are planned, each with one upper deck and two lower deck boarding bridges; passenger waiting rooms could become crowded, although baggage facilities in the new airport were built to receive new large aircraft such as the A380; and the airport meets ICAO ARFF requirements for A380-sized aircraft.] Beijing Capital International Airport has been upgraded with several renovations since it opened in 1958, and in 2005 it handled about 41 million passengers and about 782,000 tons of cargo. Airport officials said that, in anticipation of increasing aviation demand due to the economic development of the Beijing area as well as the 2008 Beijing Olympic Games, they have begun a $3 billion airport expansion plan to double the existing capacity. When completed, the airport will be able to handle 60 million passengers, 1.8 million tons of cargo, and about 500,000 flights per year. A380-related improvements have been incorporated in the renovation projects, which include building a new terminal to handle the anticipated increase in future demand, a new 3,800-meter-long, 60-meter-wide runway to accommodate the A380, new facilities and cargo areas, and additional landing areas. In addition, major terminal and gate improvement projects have been undertaken to accommodate the A380. China Southern Airlines is the only Chinese A380 customer. However, in addition to China Southern Airlines, Air France and Lufthansa Airlines have expressed their intent to operate the A380 at the Beijing airport. Table 6 provides A380-related issues at Beijing airport. Guangzhou Baiyun International Airport, currently the operating hub for China Southern Airlines, opened in 2004. It cost roughly $2.39 billion, is one of the three large hub airports on the Chinese mainland, and is the busiest airport in south China. In 2005, the airport handled 23.5 million passengers and 750,000 tons of cargo. The airport was the first in China designed and built with the hub concept and a capacity to accommodate projected annual growth to 27 million passengers and 1.4 million tons of cargo through 2010. China Southern Airlines is the only Chinese A380 customer and has already considered using an A380 on its existing nonstop route from Guangzhou to Los Angeles. The airport has one runway and will have one gate ready for the A380 in 2008 and plans to add additional A380 gates as needed in future planned concourses. Airport officials said A380-related improvements are included in a $2.22 billion expansion plan that includes the construction of an additional runway, terminal, and cargo facilities. As the expansion plans are completed, the facilities will be enlarged to a capacity of 80 million passengers and 2.5 million tons of cargo annually. Table 7 provides A380-related issues at Baiyun airport. Hong Kong International Airport is the busiest airport for freight (by weight) in the world, handling about 3.6 million tons of freight in 2006. The airport also handled about 44.5 million passengers in 2006. The airport was built on a landfill in the Hong Kong bay and began operations in 1998. The airport has additional expansion plans to increase passenger capacity to 80 million per year by 2025. However, in order to achieve that capacity, the airport authority is planning to conduct engineering and environmental feasibility studies on the construction of a third runway for the airport. The airport authority has spent approximately $15 million on airport enhancement works for the operation of A380 passenger flights, and the airport was certified as an ICAO Code F airport in July 2006.
The airport is an operating hub for DHL freight, and FedEx and UPS also operate at the airport. No airline based in Hong Kong has purchased the A380, but airport officials expect to accommodate foreign carriers’ A380 flights. The airport serves about 80 foreign airlines and about 70 percent of the flights to Hong Kong are wide-body jets. Singapore Airlines will likely be the first to bring an A380 into Hong Kong. Table 8 provides A380-related issues at Hong Kong airport. Singapore Changi International Airport has undergone several expansions since the airport opened in 1981. In 2006, the airport handled over 35 million passengers and almost two million tons of cargo. Changi airport is the operating hub for Singapore Airlines, which is the launch customer for the Airbus A380. Singapore Airlines will begin receiving its A380 deliveries in the fall of 2007 and plans to begin flight operations in January 2008 with flights to London Heathrow and San Francisco airports. Lufthansa, Qantas, Korean Air, and Virgin Atlantic airlines could begin flights to Singapore by 2010. The airport authority has spent about $43 million on improvements such as widening runway shoulders, and runway-taxiway and taxiway- taxiway intersections, installing upper deck loading bridges, and expanding the seating areas to handle A380 passenger loads. The airport has two parallel runways and will have 11 gates ready for the A380 in 2007—a total of 19 gates will be available in 2008. Changi airport will also have a maintenance facility with hangars that can fully enclose two A380 aircraft and a third A380 compatible hangar under construction. In 2008, a new terminal (Terminal 3) will open for operations and will enable the airport to accommodate 64 million passengers per year and add 8 more gates for the A380. Table 9 provides A380-related issues at Changi airport. Tokyo Narita International Airport, which opened in 1978, handles the majority of international passenger traffic in Japan and in 2005 handled over 31 million passengers and more than 2.3 million tons of cargo. In terms of the number of international passengers, it is ranked eighth in the world and second highest in the world in terms of the volume of international cargo. To date, six airlines—Lufthansa, Air France, Qantas, Virgin Atlantic, Singapore Airlines, and Korean Airlines—have announced plans to operate A380s at the airport. No Japanese air carrier has any immediate plans to purchase the A380. The airport has one runway and will have ten gates ready for the A380. Airport officials said existing facilities are used to accommodating very large passenger loads arriving at the same time on a daily basis. In fact, large aircraft, such as the 747-200, 747-400, and 777-200, currently make up about 75 percent of the traffic at Narita airport. The officials said the nominal increase in passenger loads on A380 flights will not have a significant impact on the efficiency of the airport’s internal operations. Table 10 provides A380-related issues at Narita airport. Montréal Trudeau International Airport, first opened in 1941, is the third busiest airport in Canada in terms of passenger traffic (after Toronto Pearson and Vancouver airports) and served about 11 million passengers in 2005. The airport is undergoing a major $716 million expansion and modernization plan designed to double terminal capacity to handle 25 million passengers per year and enhance the level of passenger service. 
The first A380 arrival is expected during the summer of 2009 with an Air France flight on its daily Paris to Montréal route. Montréal Trudeau, which serves as the main operating hub for Air France, is expected to be the only airport in Canada with a daily A380 flight. Airport officials said that no major investments were needed because runway width and clearances between runways and taxiways comply with ICAO Code F requirements. The airport has three runways and one gate that will be available to accommodate the A380 in 2007. The runways are 62 meters wide, but vary in length and have non-paved, grass shoulders. Airport officials stated that two of the runways do not meet the necessary length requirement for A380 departures, but could be occasionally used for landings. Table 11 provides A380-related issues at Trudeau airport. Toronto Pearson International Airport, first opened in 1939, is Canada's busiest airport and handled almost 30 million passengers, 410,000 tons of cargo, and about 410,000 flights in 2005. Four carriers that have purchased the A380 operate at Pearson, but none has indicated an intent to operate its A380s at the airport. The airport is nearing completion of a $3.7 billion Airport Development Program to address improvements in groundside, terminal, and airside infrastructure. Airport officials said the investments in airport infrastructure were meant to replace and expand capacity to receive more passengers and freight and were not directed exclusively to accommodating the A380 because they did not expect many A380s at the airport. However, about $37.3 million of the improvement costs for airfield and terminal modifications can be attributed directly to accommodating the A380 and future new large aircraft. The airport currently has two runways and will have four gates ready for the A380 in 2007. The runways are 60 meters wide, but have non-paved, grass shoulders that may have to be paved to protect against jet blast. Airport officials stated they took A380 needs into account when designing the new Terminal 1, which opened in April 2004. Table 12 provides A380-related issues at Pearson airport. Amsterdam Schiphol Airport is one of four major European hubs for passenger and freight air traffic. It is the third busiest European airport for cargo traffic, with over 1.4 million tons transported, and fourth in passenger traffic, with over 44 million passengers in 2005—much of which is due to the trans-shipment of cargo and connecting passenger traffic. The airport will not be a hub for A380 traffic but will accommodate significant A380 passenger transfers to other planes bound for other destinations. A380 flight operations could begin in February 2008 with flights from Malaysian Airlines. Schiphol began planning for airport improvements related to new large aircraft in 1996. The new Code F runway and associated taxiways cost over $440 million, and the expansion of the terminal cost over $213 million. The airport has one runway that is compliant with ICAO Code F but will also use the other four 45-meter runways and associated 23-meter taxiways in accordance with a European agreement that Code E infrastructure could be used for the A380. Airport officials said A380s will be operated on the runways and taxiways not designed to Code F standards under waivers approved by the Netherlands Civil Aviation Authority. The airport will also have two gates ready for the A380 in 2007 and another two after 2008.
Schiphol officials indicated that they would not need many additional A380 gates in the future when A380 flights increase because large aircraft gate occupancy and turnaround time present no issues. Table 13 provides A380-related issues at Schiphol airport. London Heathrow International Airport is the world's busiest airport in terms of international flights. The airport is an important hub with the largest number of passengers of any European airport in 2005—almost 68 million—and handled about 1.4 million tons of cargo. The airport has reached its capacity for flights but would like to increase passenger capacity to 90 million by 2020 and 95 million by 2030. The first A380 flight will likely be a Singapore Airlines flight in early 2008. Airport officials said they made significant investments of about $885 million in airport improvements to expand their capacity to receive more passengers. Most of the spending was used to build new terminals and gates to accommodate the A380, but it also included widening and strengthening the shoulders of the airport's two runways, upgrading runway lighting, and improving existing terminals to provide A380 gates. The airport will use two 50-meter-wide parallel runways that do not meet the Code F width standard and will operate under a waiver approved by the United Kingdom Civil Aviation Authority. The airport will have 12 gates ready for the A380 by 2008, but Heathrow officials anticipate that they will need about 35 A380 gates by 2015. In addition, they eventually expect that one of every ten aircraft arriving and departing (130 arrivals and departures) will be an A380 by 2020. Table 14 provides A380-related issues at Heathrow airport. Paris Charles de Gaulle International Airport handled about 53.7 million passengers and over two million tons of cargo in 2005. The initial A380 flights from France to North America will be to the New York JFK and Montréal Trudeau airports beginning in 2009. United Arab Emirates, Singapore, and China Southern airlines could begin flights to Paris in 2008 and 2009, and the airport will be an A380 operating hub for KLM-Air France. Over $132 million has been invested in infrastructure upgrades to accommodate the A380, such as widening taxiway bridges to allow A380 access to all terminals. The investment also included widening and strengthening two runways at the airport and building a new satellite terminal complex specifically for A380s. The airport has four runways that will be used for A380 operations. Two of the runways are 60 meters wide and comply with the ICAO Code F width standard, but their 2,700-meter lengths will likely be too short for departures. The two 4,200-meter-long, 45-meter-wide runways can be used for departures and landings but will have to be operated under waivers approved by the French Civil Aviation Authority. Nine gates will be ready for the A380 in 2008, and the number will be increased to about 30 by 2018. Airport officials estimated that at least 10 percent of all passengers arriving at the airport will be aboard an A380 by 2020. Table 15 provides A380-related issues at Charles de Gaulle airport.
Airbus S.A.S. (Airbus), a European aircraft manufacturer, is introducing a new aircraft designated as the A380, which is expected to enter service in late 2007. The A380 will be the largest passenger aircraft in the world, with a wingspan of about 262 feet, a tail fin reaching 80 feet high, and a maximum takeoff weight of 1.2 million pounds. The A380 has a double deck and could seat up to 853 passengers. GAO was asked to examine the impact of the A380 on U.S. airports. In May 2006, GAO issued a report that estimated the costs of infrastructure changes at U.S. airports to accommodate the A380. This report discusses (1) the safety issues associated with introducing the A380 at U.S. airports, (2) the potential impact of A380 operations on the capacity of U.S. airports, and (3) how selected foreign airports are preparing to accommodate the A380. To address these issues, GAO reviewed studies on operational and safety issues related to the A380 and conducted site visits to the 18 U.S. airports and 11 Asian, Canadian, and European airports that are preparing to receive the A380. GAO provided the Federal Aviation Administration (FAA) and Airbus a copy of the draft report for review. Both generally agreed with the report's findings. FAA and Airbus also provided technical clarifications, which were incorporated as appropriate. The A380 will be the first of a new category of large passenger aircraft introduced into the national airspace system in the coming years. The size of the A380 poses some potential safety challenges for U.S. airports. As a result, airports expecting A380 service may need to modify their infrastructure or impose operating restrictions, such as restrictions on runway use, on the A380 and other aircraft to ensure an acceptable level of safety. In addition, increased separation between the A380 and other aircraft during landing and departure is required because research data indicate that the air turbulence created by the A380's wake is stronger than that of the largest aircraft in use today. The A380 also poses challenges for fire and rescue officials due to its larger size, upper deck, fuel capacity, and the number of passengers it carries. FAA, Airbus, airports, and other organizations have taken several steps to mitigate these safety challenges. For example, the A380 is equipped with some safety enhancements, such as materials designed to reduce flammability and an external camera system to enhance pilot visibility while taxiing on the ground. The impact of A380 operations on capacity is uncertain. The A380 was designed, in part, to help alleviate capacity constraints faced by many large airports in the United States and around the world by accommodating more passengers and freight on each flight than any aircraft currently in use. However, potential operating restrictions and the increased separation requirements imposed to ensure the safety of the A380 and other aircraft at airports and during flight could reduce the number of flights that airports can accommodate. The extent to which possible operating restrictions, increased separation, and gate utilization affect capacity would depend on the time of day, the number of A380 operations, and the volume of overall airport traffic. Selected foreign airports that GAO visited have taken different approaches than U.S. airports in preparing for the introduction of the A380. These differences reflect the expected level of A380 traffic at the airports and, in some cases, the anticipated economic benefits of the A380 flights.
These approaches include adopting alternative airport design standards, making significant investments in existing infrastructure, and designing airports to accommodate new large aircraft. Because these approaches have been implemented, officials at the foreign airports that GAO visited do not anticipate that the introduction of the A380 will result in delays or disruptions at their airports, despite higher expected levels of A380 traffic than at most U.S. airports.
GPRAMA is a significant enhancement of GPRA, which was the centerpiece of a statutory framework that Congress put in place during the 1990s to help resolve long-standing management problems in the federal government and provide greater accountability for results. GPRA sought to focus federal agencies on performance by requiring agencies to develop long-term and annual goals—contained in strategic and annual performance plans—and measure and report on progress towards those goals on an annual basis. In our past reviews of its implementation, we found that GPRA provided a solid foundation to achieve greater results in the federal government, but several key governance challenges remained—particularly related to addressing crosscutting issues; ensuring performance information was useful to and used by agency leadership, managers, and Congress; strengthening the alignment between individual performance and agency results as well as holding individuals and organizations responsible for achieving those results; measuring performance for certain types of programs; and providing timely, useful information about the results achieved by agencies. To help address these and other challenges, GPRAMA revises existing provisions and adds new requirements, including the following: Cross-agency priority (CAP) goals: OMB is required to coordinate with agencies to establish federal government priority goals—otherwise referred to as CAP goals—that include outcome-oriented goals covering a limited number of policy areas as well as goals for management improvements needed across the government. The act also requires that OMB—with agencies—develop annual federal government performance plans to, among other things, define the level of performance to be achieved toward the CAP goals. Agency priority goals (APGs): Certain agencies are required to develop a limited number of APGs every 2 years. Both the agencies required to develop these goals and the number of goals to be developed are determined by OMB. These goals are to reflect the highest priorities of each selected agency, as identified by the head of the agency, and be informed by the CAP goals as well as input from relevant congressional committees. Leadership positions: Although most of these positions previously existed in government, they had been created by executive orders, presidential memoranda, or OMB guidance. GPRAMA established these roles in law, provided responsibilities for various aspects of performance improvement, and elevated some of them. Chief operating officer (COO): The deputy agency head, or equivalent, is designated COO, with overall responsibility for improving agency management and performance. Performance improvement officer (PIO): Agencies are required to designate a senior executive within the agency as PIO, who reports directly to the COO and has responsibilities to assist the agency head and COO with performance management activities. Goal leader: For each CAP goal, OMB must identify a lead government official—referred to by OMB as a goal leader—responsible for coordinating efforts to achieve the goal. For agency performance goals, including APGs, agencies must also designate a goal leader, who is responsible for achieving the goal. Performance Improvement Council (PIC): The PIC was originally created by a 2007 executive order; GPRAMA establishes it in law and assigns it additional responsibilities. The PIC is charged with assisting OMB to improve the performance of the federal government and achieve the CAP goals.
Among its other responsibilities, the PIC is to facilitate the exchange among agencies of useful performance improvement practices and work to resolve government-wide or crosscutting performance issues. The PIC is chaired by the Deputy Director for Management at OMB and includes agency PIOs from each of the 24 CFO Act agencies as well as other PIOs and individuals designated by the chair. Quarterly performance reviews (QPR): For each APG, agencies are required to conduct QPRs to review progress towards the goals and develop strategies to improve performance, as needed. These reviews are to be led by the agency head and COO and include the PIO, relevant goal leaders, and other relevant parties both within and outside the agency. Performance.gov: OMB is required to develop a single, government-wide performance website to communicate government-wide and agency performance information. The website—implemented by OMB as Performance.gov—is required to make available information on APGs and CAP goals, updated on a quarterly basis; agency strategic plans, annual performance plans, and annual performance reports; and an inventory of all federal programs. Performance management capacity: The Office of Personnel Management (OPM) is charged with three responsibilities under the act. OPM is to (1) in consultation with the PIC, identify key skills and competencies needed by federal employees to carry out a variety of performance management activities; (2) incorporate these skills and competencies into relevant position classifications; and (3) work with agencies to incorporate these key skills into agency training. Since GPRAMA's enactment in January 2011, OMB and agencies have taken a number of important steps to implement key provisions related to the act's planning and reporting requirements. In February 2012, OMB identified 14 interim CAP goals concurrent with the submission of the President's Budget. Nine of the goals related to crosscutting policy areas and five covered management improvements. At the same time, 24 agencies selected by OMB developed 103 APGs for 2012 and 2013, and OMB published information about these goals as well as the CAP goals on Performance.gov, which OMB considers to comprise the federal government performance plan. In December 2012, OMB expanded the information available on the site by providing an update on fiscal year 2012 performance for both sets of goals, and in March 2013, quarterly updates of the site began. All 24 CFO Act agencies are conducting QPRs, according to our survey of PIOs at these agencies. Our 2013 survey indicates that approximately one-third (33 percent) of federal managers across the government are at least somewhat familiar with the QPRs. These and related efforts were based on OMB guidance on implementing the act issued in 2011 and 2012. As another positive development, OMB and agencies have also put into place key aspects of the act's performance management leadership roles. We recently reported that, at the agency level, all 24 CFO Act agencies have assigned senior-level officials to the COO, PIO, and goal leader roles. Furthermore, OMB guidance directed agencies with PIOs who are political appointees or other officials with limited-term appointments to appoint a career senior executive to serve as deputy PIO. Nearly all (22) of the CFO Act agencies have assigned officials to the deputy PIO role, according to our PIO survey.
PIOs we surveyed reported that most performance management officials (COOs, PIOs, deputy PIOs, and goal leaders) were heavily involved in four primary tasks that summarize the performance management responsibilities required by GPRAMA: (1) strategic and performance planning and goal setting, (2) performance measurement and analysis, (3) communicating agency progress toward goals, and (4) agency quarterly performance reviews. At the government-wide level, the PIC has taken steps to meet its requirement to facilitate the exchange of useful practices, tips, and tools to strengthen agency performance management. For example, it established the Goal Setting Working Group to help agencies set their 2012 to 2013 APGs; the Internal Agency Reviews Working Group to share best practices for QPRs; and the Business Intelligence Working Group to share tools for data analytics. PIOs we surveyed reported that, in general, they found the PIC helpful and that there was strong agency participation in the PIC and its working groups. However, in April 2013 we reported that the PIC has not routinely assessed its performance, and we recommended that OMB work with the PIC to obtain formal feedback on the PIC's performance from member agencies on an ongoing basis and to update the PIC's strategic plan, review the PIC's goals, measures, and strategies for achieving performance, and revise them if appropriate. OMB staff agreed with these recommendations. In addition, OPM has completed its work identifying key skills and competencies needed by performance management staff and incorporating those skills and competencies into relevant position classifications. OPM identified 15 competencies for performance management staff and published them in a January 2012 memorandum from the OPM Director. It also identified relevant position classifications that are related to the competencies for performance management staff and worked with a PIC working group to develop related guidance and tools for agencies. Furthermore, OPM has taken steps to work with agencies to incorporate the key competencies into agency training. However, we reported in April 2013 that these efforts have been broad-based and not informed by specific assessments of agency training needs. We recommended that, in coordination with the PIC and the Chief Learning Officers Council, OPM (1) identify competency areas needing improvement within agencies, (2) identify agency training that focuses on needed performance management competencies, and (3) share information about available agency training on competency areas needing improvement. OPM agreed with these recommendations and reported that it would take actions to implement them. Many of the meaningful results that the federal government seeks to achieve, such as those related to protecting food and agriculture and providing homeland security, require the coordinated efforts of more than one federal agency, level of government, or sector. However, agencies face a range of challenges and barriers when they attempt to work collaboratively. The need for improved collaboration has been highlighted throughout our work over many years, in particular in two bodies of work. First, our reports over the past 3 years identified more than 80 areas where opportunities exist for executive branch agencies or Congress to reduce fragmentation, overlap, and duplication. Figure 1 defines and illustrates these terms. We found that resolving many of these issues requires better collaboration among agencies.
Second, collaboration and improved working relationships across agencies are fundamental to many of the issues that we have designated as high risk due to their vulnerabilities to fraud, waste, abuse, and mismanagement, or most in need of transformation. For almost 2 decades we have reported on agencies’ missed opportunities for improved collaboration through the effective implementation of GPRA. In our 1997 assessment of the status of the implementation of GPRA, we reported that agencies faced challenges addressing crosscutting issues, which led to fragmentation and overlap. Again, we reported in 2004—10 years after the enactment of GPRA—that there was still an inadequate focus on addressing issues that cut across federal agencies. On a government-wide level, we reported that OMB did not fully implement a government-wide performance plan, as was required by GPRA. Additionally, few agency strategic and performance plans addressed crosscutting efforts and coordination. At that time, almost half of federal managers in our 2003 survey reported that they coordinated program efforts to a great or very great extent with other internal or external organizations. Now, almost 20 years since GPRA’s passage, our work continues to demonstrate that the needed collaboration is not sufficiently widespread. Accordingly, in 2012 we developed a guide on key considerations for implementing collaborative mechanisms. The results of our 2013 survey of federal managers show that the percentage of managers reporting that they use information obtained from performance measurement when coordinating program efforts with other internal or external organizations to a great or very great extent has not increased since 1997. Based on this survey, an estimated 23 percent of the managers reported that they coordinated program efforts to a small extent or not at all. The following three examples, among many, highlight the need for improved collaboration to help address crosscutting issues: Food safety: One area that has been identified in both bodies of work is the fragmented nature of federal food safety oversight. The U.S. food safety system is characterized by inconsistent oversight, ineffective coordination, and inefficient use of resources; these characteristics have placed the system on our high-risk list since 2007 and in all three of our annual reports on fragmentation, overlap, and duplication. We have reported that the U.S. Department of Agriculture (USDA) and the Food and Drug Administration (FDA), the two primary agencies responsible for food safety, have taken some steps to increase collaboration. However, agencies have not developed a government-wide performance plan for food safety that includes results-oriented goals and performance measures, as we recommended when we put federal oversight of food safety on the high-risk list in January 2007. In the absence of this plan, we have reported cases of fragmentation, overlap, and duplication. The 2010 nationwide recall of more than 500 million eggs because of Salmonella contamination highlights a negative consequence of this fragmentation. Several agencies have different roles and responsibilities in the egg production system. Through the Food Safety Working Group, federal agencies have taken steps designed to increase collaboration in some areas that cross regulatory jurisdictions. 
For example, both USDA and FDA set goals to reduce illness from Salmonella within their own areas of egg safety jurisdiction by the end of 2011 and developed a memorandum of understanding on information sharing regarding egg safety. While such actions are encouraging, without a government-wide performance plan for food safety, fragmentation, overlap, and duplication are likely to continue. Climate change: Climate change is a complex, crosscutting issue that poses risks to many environmental and economic systems—including agriculture, infrastructure, ecosystems, and human health—and presents a significant financial risk to the federal government. Among other impacts, climate change could threaten coastal areas with rising sea levels, alter agricultural productivity, and increase the intensity and frequency of severe weather events such as floods, drought, and hurricanes. Weather-related events have cost the nation tens of billions of dollars in damages over the past decade. For example, in 2012, the administration requested $60.4 billion for Superstorm Sandy recovery efforts. However, the federal government is not well positioned to address the fiscal exposure presented by climate change, partly because of the complex, crosscutting nature of the issue. Given these challenges and the nation's precarious fiscal condition, we added "Limiting the Federal Government's Fiscal Exposure to Climate Change" to our high-risk list in 2013. In adding climate change to this list, we reported that the federal government would be better positioned to respond to the risks posed by climate change if federal efforts were more coordinated and directed toward common goals. In October 2009, we recommended that the appropriate entities within the Executive Office of the President, in consultation with relevant federal agencies, state and local governments, and key congressional committees of jurisdiction, develop a strategic plan to guide the nation's efforts to adapt to climate change, including the establishment of clear roles, responsibilities, and working relationships among federal, state, and local governments. In written comments, the Council on Environmental Quality generally agreed with the report's recommendations, noting that leadership and coordination are necessary within the federal government to ensure an effective and appropriate adaptation response and that such coordination would help to catalyze regional, state, and local activities. Some actions have subsequently been taken to improve the coordination of federal adaptation efforts, including the development of an interagency climate change adaptation task force. Federal disability programs: In June 2012, we identified 45 programs in nine agencies that helped people with disabilities obtain or retain employment, reflecting a fragmented system of services and supports. Many of these programs overlapped in whom they served and in the types of services they provided. Such fragmentation and overlap may frustrate and confuse program beneficiaries and limit the overall effectiveness of the federal effort. Extensive coordination and overarching goals can help address program fragmentation. Although we identified promising coordination efforts among some programs, most reported not coordinating with each other, and some officials told us they lacked the funding and staff time to pursue coordination.
Coordination efforts can be enhanced when programs work toward a common goal; however, the number and type of outcome measures used by the 45 programs varied greatly. To improve coordination, efficiency, and effectiveness, we suggested that OMB consider establishing government-wide goals for employment of people with disabilities. Consistent with this suggestion, OMB officials stated that the Domestic Policy Council began an internal review intended to improve the effectiveness of some disability programs through better coordination and alignment. However, as we noted in our 2013 high-risk update, OMB still needs to maintain and expand its role in improving coordination across programs—such as the 45 we identified—that support employment for those with disabilities, and ultimately work with all relevant agencies to develop measurable government-wide goals to spur further coordination and improved outcomes for those who are seeking to find and maintain employment. On the other hand, we have recently highlighted progress that the executive branch and Congress have made in addressing areas that we previously identified as being at risk of fragmentation, overlap, and duplication. For example, the nation’s surface transportation system is critical to the economy and affects the daily life of most Americans. However, in our 2011 annual report on fragmentation, overlap, and duplication, we reported that over the years federal surface transportation programs grew increasingly fragmented. At the core of this fragmentation was the fact that federal goals and roles for the programs were unclear or conflicted with other federal priorities, programs lacked links to the performance of the transportation system or of the grantees, and programs did not use the best tools to target investments in transportation to the areas of greatest benefit. Accordingly, since 2004, we have made several recommendations and matters for congressional consideration to address the need for a more goal-oriented approach to surface transportation, introduce greater performance and accountability for results, and break down modal stovepipes. As we reported in February 2013, there was progress in clarifying federal goals and roles and linking federal programs to performance when the Moving Ahead for Progress in the 21st Century Act was enacted in July 2012. The act addressed fragmentation by eliminating or consolidating programs, and made progress in clarifying federal goals and roles and linking federal programs to performance to better ensure accountability for results. The challenge of collaboration has also been highlighted in our reviews of related GPRAMA requirements, such as those for CAP goals, APGs, and QPRs. While agencies have implemented some of these provisions, these efforts have not included all of the relevant agency, program, and other contributors. When agencies do not include all relevant contributors, they may miss important opportunities to work with others who are instrumental to achieving intended outcomes. Including all contributors is also a requirement of GPRAMA. At the government-wide level, OMB is required to list all of the agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities that contribute to each CAP goal. With relevant stakeholders, OMB is required to review the progress of all contributors towards each goal on a quarterly basis. 
At the agency level, agencies are required to identify the various federal organizations, programs, and activities—both within and external to the agency—that contribute to each goal and, for APGs, to review progress on a quarterly basis with relevant stakeholders. However, as shown in table 1, we have found that agencies are not including all stakeholders as they implement GPRAMA. While we continue to see challenges to collaboration across federal agencies, as a positive development, our survey of federal managers shows that reported collaboration increases when individuals contribute to the CAP goals, APGs, or QPRs. Our 2013 survey data indicate that 58 percent of federal managers reported they were somewhat or very familiar with CAP goals. Among these individuals, federal managers who viewed their programs as contributing to CAP goals to a great or very great extent were more likely to report collaborating outside their program to a great or very great extent to help achieve CAP goals, as figure 2 shows. We saw a similar pattern in responses from managers who were familiar with the APGs and the extent to which their programs contributed to the APGs. Eighty-two percent of federal managers reported they were somewhat or very familiar with APGs. Among these individuals, those who viewed their programs as contributing to APGs to a great or very great extent were more likely to report collaborating outside their program to a great or very great extent to help achieve APGs, as figure 3 shows. While the questions on our survey were designed to examine collaboration outside individual programs, they were not designed to distinguish between collaboration within agency boundaries and collaboration outside them. As discussed in table 1, we found that collaboration was more common within agencies than between agencies. This may be appropriate in some cases; however, in other cases this might point to a need for broader inclusion of external stakeholders. We found that more managers reported collaborating with officials external to their agency to a great or very great extent when they also reported that their programs were involved in QPRs to a similar extent. Tax expenditures represent a significant federal investment. If the Department of the Treasury (Treasury) estimates are summed, an estimated $1 trillion in revenue was forgone from the 169 tax expenditures reported for fiscal year 2012, nearly the same as discretionary spending that year. For some mission areas, forgone revenue from tax expenditures can be of the same magnitude as, or larger than, related federal spending. For example, in fiscal year 2010, tax expenditures represented about 78 percent ($132 billion) of federal support for housing. Since 1994, we have recommended greater scrutiny of tax expenditures, as periodic reviews could help determine how well specific tax expenditures work to achieve their goals and how their benefits and costs compare to those of spending programs with similar goals. In November 2012, we issued a guide that identifies criteria for assessing tax expenditures and provides questions for the Congress to ask about a tax expenditure's effectiveness. However, OMB has not developed a framework for reviewing tax expenditure performance, as we recommended in June 1994 and again in September 2005. Because OMB has not yet established such a framework, little is known about how tax expenditures contribute to broad federal outcomes and how they are related to spending programs seeking the same or a similar outcome.
OMB guidance has shown some progress in addressing how agencies should incorporate tax expenditures in strategic plans and annual performance plans and reports, as we first recommended in September 2005. GPRAMA specifically requires OMB to identify tax expenditures among the various federal activities that contribute to each CAP goal, when applicable. Although the act does not explicitly require agencies to identify tax expenditures among the various federal programs and activities that contribute to their performance goals, OMB’s guidance directs agencies to do so for their APGs, which are a small subset of their performance goals. However, our review of the APGs developed for 2012 to 2013 found that only one agency, for one of its APGs, identified two relevant tax expenditures. We recently reported that OMB was missing an opportunity to more broadly identify how tax expenditures contribute to each agency’s overall performance. Even among the CAP goals, OMB and agencies are missing opportunities to identify tax expenditures as contributors. In the original information on Performance.gov in February 2012, OMB included tax expenditures as potential contributors for 5 of the 14 CAP goals (veteran career readiness, entrepreneurship and small businesses, energy efficiency, job training, and improper payments). In the December 2012 and March 2013 updates to Performance.gov, only two goals (veteran career readiness and improper payments) discussed two tax expenditures, which represent $2.7 billion or 0.3 percent of the $1 trillion sum across the tax expenditures listed by Treasury. Tax expenditures were no longer mentioned as contributing to the entrepreneurship and small businesses, energy efficiency, and job training CAP goals. For example, under the energy efficiency CAP goal, OMB originally listed both spending programs and tax expenditures that contribute to the goal. However, in the December 2012 update to Performance.gov, OMB had deleted all of the tax expenditures even though many of these tax expenditures remained unchanged. In one case, OMB deleted the credit for energy efficiency improvements to existing homes (estimated at $780 million for fiscal year 2012), but highlighted the Department of Energy’s (DOE) weatherization assistance spending program (estimated at $68 million in obligations for fiscal year 2012), even though both fund residential energy efficiency. Overall, we identified eight tax expenditures, totaling $2.4 billion in forgone revenue, which share the purpose of achieving energy efficiency, but are no longer identified as potential contributors. When asked about these changes, OMB staff shared that for the entrepreneurship and small business CAP goal the goal leaders narrowed the focus of the goal, which resulted in an updated list of contributing programs and activities that no longer included tax expenditures. For the energy efficiency and job training CAP goals, OMB staff told us that the exclusion of tax expenditures from the December 2012 and March 2013 updates was an oversight. OMB staff told us they planned to add the appropriate tax expenditures as contributors to those goals in the next quarterly update to Performance.gov, which occurred in June 2013. However, none were added to the job training CAP goal update, and as of June 19, 2013, the energy efficiency CAP goal had not yet been updated. 
However, these examples raise concerns as to whether OMB previously ensured all relevant tax expenditures were identified as contributors to the 14 CAP goals when they were published in February 2012, especially since only 5 CAP goals listed tax expenditures as contributors at that time. We have previously reported that, as with spending programs, tax expenditures represent a substantial federal commitment to a wide range of mission areas. Given the lack of scrutiny tax expenditures receive compared to spending programs—especially absent a comprehensive framework for reviewing them—it is possible that additional tax expenditures should have been identified and included as contributors to one or more of the other 9 CAP goals. Moreover, for the 2 CAP goals where tax expenditures were listed as contributors and mistakenly removed, it is unclear if OMB and the goal leaders assessed the contributions of those tax expenditures toward the CAP goal efforts, since they were not listed in the December 2012 and March 2013 updates. Without information about which tax expenditures support these goals and measures of their performance, Congress and other decision makers will not have the needed information to assess overall federal contributions towards desired results, and the costs and relative effectiveness associated with those contributions. We have previously reported that data-driven decision making leads to better results. Moreover, we have reported that if agencies do not use performance measures and performance information to track progress toward goals, they may be at risk of failing to achieve their goals. The textbox illustrates this problem in the high risk area of the Department of Defense’s (DOD) approach to business transformation. DOD Is Not Regularly Reviewing Performance Information to Assess Progress towards Goals in Transforming Its Business Operations In 2005, we identified DOD’s approach to business transformation as high-risk because DOD had not established clear and specific management responsibility, accountability and control over its business transformation and it lacked a plan with specific goals, measures, and mechanisms to monitor progress. We subsequently reported that DOD made improvements to strengthen its management approach, but we also identified additional steps that are needed. For example, DOD has broadly outlined a performance management approach, and established governance structures, such as the Defense Business Council, to help monitor progress in its business transformation efforts. However, we found the Council had not regularly reviewed performance data and when reviews did occur, it did not have sufficient information to assess progress. To enhance DOD’s ability to set strategic direction for its business transformation efforts, better assess overall progress toward business transformation goals, and take any necessary corrective actions, we recommended in February 2013 that DOD take a number of steps to improve its approach to performance management. DOD agreed with this recommendation and said it would continue to improve and institutionalize the Council’s operations. In the first 4 months of 2013 alone, we issued numerous testimonies and reports that illustrate how performance management weaknesses can hinder agencies’ abilities to achieve critical results. 
This work also illustrates that the scope of these problems is widespread, affecting agencies such as DOD, Treasury, the Departments of Transportation (DOT), Homeland Security (DHS), Health and Human Services, Housing and Urban Development (HUD), and State. The impact of these weaknesses is far-reaching as well: These agencies are responsible for performing functions that affect every aspect of Americans' lives, from education, health care, and housing to national security and illicit drug use, as described in the textbox. Office of National Drug Control Policy Has Established a Performance Monitoring System to Address Illicit Drug Use, but Has Not Yet Reported on Results The public health, social, and economic consequences of illicit drug use, coupled with the nation's constrained fiscal environment, highlight the need for federal programs to use resources efficiently and effectively to address this problem. However, we reported in March 2013 that the Office of National Drug Control Policy and federal agencies have not made progress toward achieving most of the goals in the 2010 National Drug Control Strategy, although they reported being on track to implement most Strategy action items in support of these goals. In April 2012, the Office established the Performance Reporting System, a monitoring mechanism intended to provide specific, routine information on progress toward Strategy goals and help identify factors contributing to performance gaps and options for improvement. We reported that this could help increase accountability for improving results and identify ways to bridge the gap that existed between the lack of progress toward the Strategy's goals and the strong progress made on implementing the Strategy's actions. While this was promising, the Office does not plan to report on results until later in 2013, and until then, operational information is not available to evaluate its effectiveness. GAO, Office of National Drug Control Policy: Office Could Better Identify Opportunities to Increase Program Coordination, GAO-13-333 (Washington, D.C.: Mar. 26, 2013). Our prior work has shown that performance information can be used across a range of management functions to improve results, from setting program priorities and allocating resources to taking corrective action to solve program problems. Since our 2007 survey, there has been statistically significant improvement on two survey items related to the use of performance information. More managers reported in 2013—after GPRAMA's enactment and initial implementation—that they used performance information to a great or very great extent in developing program strategy and refining program performance measures. However, the 2013 improvement on the refining program performance measures item followed an earlier decline and does not represent an improvement in comparison to our 1997 survey results. While there was also a statistically significant change between 1997 and 2013 in the percentage of managers who reported to a great or very great extent that they used performance information in adopting new program approaches or changing work processes, the initial decline on this item occurred between our 1997 and 2000 surveys, with no significant changes since then. Overall, our periodic surveys of federal managers since 1997 indicate that, with the few exceptions described above, the use of performance information has not changed significantly at the government-wide level, as shown in figure 4.
In addition, we introduced an item in the 2013 survey on streamlining programs, a performance management activity that can help address the overlap and duplication challenges and opportunities described earlier in this report. Less than half of federal managers (44 percent) reported to a great or very great extent that they used performance information for “streamlining programs to reduce duplicative activities.” Our prior work has identified practices that can promote the use of performance information for management decision making, such as leadership demonstrating commitment to using performance information, communicating performance information frequently and effectively, ensuring that performance information is useful, and building capacity to use performance information. Moreover, many of the requirements put in place by GPRAMA reinforce the importance of these practices. Our past government-wide surveys of federal managers indicated that these key practices were not always being employed across various agencies. Our 2013 survey suggests that effectively adopting these practices continues to be a substantial weakness across the government as described below. Demonstrating leadership commitment: Our prior work has shown that the demonstrated commitment of leadership and management to achieving results and using performance information can encourage the federal workforce to apply the principles of performance management. GPRAMA requires top leadership involvement in performance management, such as requiring agency leadership to routinely review performance information and progress toward APGs during the QPRs. However, results from our 2013 survey show almost no statistically significant changes in managers’ perceptions of their leaders’ and supervisors’ attention and commitment to the use of performance information since our last survey in 2007. The only statistically significant change from 2007 to 2013 was a decline in the percentage of managers that agreed to a great or very great extent that their agencies’ top leadership demonstrates a strong commitment to achieving results, from 67 percent to 60 percent. Moreover, less than two-thirds of managers agreed to a great or very great extent with other survey items related to leadership commitment and attention to performance information, as shown in figure 5. Communicating performance information: Our prior work showed that communicating performance information frequently and effectively throughout an agency can help managers to inform staff and other stakeholders of their commitment to achieve the agency’s goals and to keep these goals in mind as they pursue their day-to-day activities. Frequently reporting progress toward achieving performance targets also allows managers to review the information in time to make improvements. GPRAMA includes requirements for communicating performance information, such as sharing performance information at least quarterly and directing agencies to update performance indicators on their websites at least annually. However, there was no statistically significant change between 2007 and 2013 in the percentage of federal managers agreeing to a great or very great extent that agency managers at their level effectively communicate performance information on a routine basis (41 percent in 2013 and 43 percent in 2007). Our analysis suggests that easy access to performance information is related to the effective communication of performance information. 
Of the 49 percent of federal managers who agreed to a great or very great extent that performance information is easily accessible to managers at their level, 62 percent also agreed that agency managers at their level effectively communicate performance information on a routine basis to a great or very great extent. Conversely, of the 19 percent that agreed to only a small or no extent that performance information is easily accessible to managers at their level, only 9 percent also agreed that agency managers at their level effectively communicate performance information on a routine basis to a great or very great extent. Ensuring performance information is useful: As we previously reported, to facilitate the use of performance information, agencies should ensure that information meets various users’ needs for completeness, accuracy, consistency, timeliness, validity, and ease of use. GPRAMA introduced several requirements that could help to address these various dimensions of usefulness. For example, agencies must disclose more information about the accuracy and validity of their performance data and actions to address limitations to the data. Without useful performance information, it is difficult to monitor agencies’ progress toward critical goals, such as improving veterans’ access to health care provided by the Department of Veterans Affairs (VA), as illustrated in the textbox. Performance Information on Veterans’ Wait Times for Medical Appointments Was Unreliable The Veterans Health Administration (VHA), within the VA, provided nearly 80 million outpatient medical appointments to veterans in fiscal year 2011. Although access to timely medical appointments is important to ensuring veterans obtain needed care, long wait times and inadequate scheduling processes have been persistent problems. VHA is implementing a number of initiatives to improve veterans’ access to medical appointments such as use of technology to interact with patients and provide care. However, we testified in March 2013 that certain aspects of VHA’s policies and policy implementation contributed to unreliable performance information on veterans’ wait times. VA concurred with our recommendations and identified actions planned or under way to address them. GAO, VA Health Care: Appointment Scheduling Oversight and Wait Time Measures Need Improvement, GAO-13-372T (Washington, D.C.: Mar. 14, 2013). Responses to four survey items on hindrances related to the usefulness of performance information indicate some limited improvement. There was a statistically significant improvement between the 2007 and 2013 surveys on two of these four items (shown as declines because they concern hindrances), but no significant change otherwise, as illustrated in figure 6. 
In addition, related survey items introduced after 1997 showed no significant change between 2007 and 2013, with about 40 percent of managers agreeing to a great or very great extent that “agency managers at my level take steps to ensure that performance information is useful and appropriate” and 36 percent agreeing to the same extent that “I have sufficient information on the validity of the performance data I use to make decisions.” Despite these limited improvements, the overall picture from the 2013 results—with about one-fifth to nearly one-third of managers reporting hindrances, as indicated in figure 6, and less than half agreeing with most of the positive statements about the format, timeliness, and accessibility of their performance information in figure 7—remains a major concern. Building capacity to use performance information: We have previously reported that building the capacity to use performance information is critical to using performance information in a meaningful fashion, and that inadequate staff expertise, among other factors, can hinder agencies from using performance information. GPRAMA lays out specific requirements for OPM to identify skills and competencies for performance management functions, among other actions, which reinforce the importance of staff capacity to use performance information. Managers’ survey responses and our recent work indicate areas of weakness in agencies’ analysis and evaluation tools and staff’s skills and competencies, both of which are critical components of performance management capacity. About a third (36 percent) of managers reported in 2013 that they agreed to a great or very great extent that their agencies have sufficient analytical tools for managers at their levels to collect, analyze, and use performance information. Furthermore, less than a third of managers reported that their agencies were investing resources to improve the use and quality of performance information. Thirty percent of managers reported that they agree to a great or very great extent that the programs they are involved with have sufficient staff with the knowledge and skills needed to analyze performance information. Additionally, our recent work found gaps in performance management competencies among agency staff. Although PIOs we surveyed at 24 agencies in 2012 for our April 2013 report on performance management leadership roles reported that their staff generally possessed core competencies identified by OPM for performance management staff, certain competencies—performance measurement, information management, organization performance analysis, and planning and evaluating—were present to a lesser extent. Training is one way agencies can address a lack of staff capacity to use performance information, as illustrated in the sidebar. Between 1997 and 2013, there was a statistically significant increase in the percentage of managers reporting that their agencies made training available in the past 3 years on most of the performance management tasks we asked about. However, between 2007 and 2013, there was either no significant change or a decline in the percentage of managers responding positively to the same items, as shown in figure 8. Our prior work has indicated that effective data-driven reviews can serve as a leadership strategy, requiring leadership and other responsible parties to come together to review performance information and progress toward results and identify important opportunities to drive performance improvements. 
According to our 2012 survey of PIOs at 24 agencies, the majority (21 of 24) reported that actionable opportunities for performance improvement are identified through the reviews at least half the time. In addition, most officials we interviewed at DOE, Treasury, and the Small Business Administration (SBA) attributed improvements in performance and decision making to their QPRs. The textbox presents one such improvement described by officials at Treasury. Treasury Credits QPRs with Decision to Stop Minting $1 Coins for Circulation and Saving U.S. Government Millions Treasury's Deputy Secretary said that it was a performance review session with the U.S. Mint that first led him to question the direction they had been taking with the $1 coin. Performance data he reviewed for the meeting indicated that the Mint was producing 400 million new $1 coins annually, while the Federal Reserve already had 1.4 billion existing ones in storage. Digging deeper, he learned that the Federal Reserve had previously estimated that there were enough $1 coins to meet demand for more than a decade. This estimate was based on the assumption that demand would remain at 2012 levels. While our case studies and survey of PIOs indicated the benefits of QPRs, our 2013 government-wide federal managers' survey indicated that the majority of federal managers are not familiar with the QPRs at their agencies, although a greater percentage of Senior Executive Service (SES) managers reported that they were familiar with the QPRs, as shown in figure 9. Our analysis suggests that, while familiarity with QPRs may be somewhat limited government-wide, it is positively related to managers' perceptions of their leadership's demonstrated commitment to using performance information. Of the 12 percent of all federal managers who reported they were very familiar with QPRs, 76 percent agreed that their top leadership demonstrates a strong commitment to using performance information to guide decision making to a great or very great extent. In contrast, of the 66 percent who reported they were not familiar with QPRs, 36 percent agreed to a great or very great extent with the same statement. Similarly, our analysis suggests that being the subject of a QPR is positively related to the extent to which managers view the QPRs as being used to accomplish certain purposes to a great or very great extent. For example, federal managers who reported that their programs have been the subject of a QPR to a great or very great extent were more likely to report that their agencies use QPRs to identify problems or opportunities than those who reported that their programs have been the subject of a QPR to a moderate or small or no extent. Figure 10 shows this trend, along with a similar one for federal managers' ratings of agency leadership's use of QPRs to help achieve performance goals. Our analysis also suggests that being the subject of a QPR may be positively related to managers' perceptions of their agencies' employment of key practices that we have previously reported can promote successful data-driven performance reviews. For example, federal managers who reported that their programs have been the subject of a QPR to a great or very great extent were more likely to report that the reviews included key practices, such as leadership actively participating in reviews, than those who reported that their programs have been the subject of QPRs to a moderate or small or no extent. This trend and similar ones for other key practices are shown in figure 11.
Federal managers' responses to items about other key practices—holding QPRs on a regular, routine basis and having a process for following up on QPRs—were similarly related to the extent to which managers' programs were the subject of a QPR. It is important for individuals to see a connection between their daily operations and results to help them understand how individual performance can contribute to organizational success. While our past work has shown that agencies have encountered challenges linking individual performance with broader organizational results, progress has been made over the last decade in establishing this linkage and holding individuals accountable for organizational results through performance management systems. For example, while agencies have been required since 2000 to hold senior executives accountable for their individual and organizational performance by linking performance expectations with GPRA-required goals, OPM and OMB have continued to reinforce the importance of this alignment through improvements in SES performance management. Most recently, in January 2012, OPM and OMB released a government-wide performance appraisal system for senior executives that provides agencies with a standard framework for managing the performance of their executives. While the new system strives to provide greater clarity and equity in the development of performance standards and their link to compensation, among other things, the Directors of OPM and OMB stated that it is also intended to provide agencies with the necessary flexibility and capability to customize the system in order to meet their needs. As part of this framework, agencies are to identify expectations for senior executives that focus on measurable outcomes from the strategic plan or other measurable outputs and outcomes clearly aligned to organizational goals and objectives. In addition, the Goals-Engagement-Accountability-Results (GEAR) model, established in 2011, focuses on aligning employee performance with organizational performance, creating a culture of engagement, and implementing accountability at all levels, among other things. The GEAR model outlines a series of recommended actions for agencies to adopt in order to help improve employee and organizational performance. We reported in September 2012 that DOE's GEAR implementation plan includes aligning employee performance management with organizational performance management and developing training to support these goals, which, along with initiating knowledge-sharing activities, will promote improvement of DOE's organizational performance, according to DOE officials. We have ongoing work looking at GEAR implementation in the five pilot agencies and plan to issue the results of our work later in 2013. To further institutionalize individual accountability for achieving results, GPRAMA established in law several mechanisms that help individuals and agencies see this connection and hold them accountable for their contributions to agency and government-wide goals. As we recently reported, agency leaders should hold goal leaders and other responsible managers accountable for knowing the progress being made in achieving goals and, if progress is insufficient, understanding why and having a plan for improvement, including improvements in the quality of the data to help ensure they are sufficient for decision making.
For example, PIOs are responsible for, among other things, assisting the agency head and COO in developing and using performance measures specifically for assessing individual performance in the agency. QPRs offer an opportunity for organizational performance to be assessed and for responsible officials to be held accountable for addressing problems and identifying strategies for improvement. As agencies implement the accountability provisions of GPRAMA, they will need to ensure managers have decision-making authority commensurate with the responsibility to identify and address performance problems as they arise. Since our 1997 government-wide survey of federal managers, SES managers have reported improvements in accountability for agency goals and results and in the decision-making authority to help achieve agency goals. However, since our initial survey in 1997, there has been a gap between SES managers' perceptions of their accountability for program performance and their perceptions of their decision-making authority. In 2013, 80 percent of SES managers reported that they are held accountable for the results of the programs for which they are responsible to a great or very great extent, while 61 percent reported that they have the decision-making authority they need to help the agency achieve its strategic goals, a 19 percentage point difference. See figure 12. Using performance information in employee performance management helps individuals track their performance and progress toward achieving organizational goals and can help emphasize the importance of individual contributions to organizational success. However, the percentage of federal managers reporting use of performance information in employee performance management to a great or very great extent has stagnated, with no statistically significant change in reported use from 1997 to 2013. See figure 13. A fundamental element in an organization's efforts to manage for results is its ability to set meaningful goals for performance and to measure progress toward those goals. In our 1996 Executive Guide, we underscored the importance of taking a balanced approach to setting goals and measuring performance. If a balance across an organization's various priorities does not exist, the measures in place can overemphasize some goals and create skewed incentives. This need for agencies to have a balanced set of performance measures was reinforced in GPRAMA, which calls for agencies to develop a variety of measures, such as output, outcome, customer service, and efficiency measures, across program areas. As we have previously reported, based on our government-wide surveys of federal managers, federal managers reported a statistically significant increase in the presence of different types of performance measures for their programs to a great or very great extent following initial implementation of GPRA. Despite this early progress in establishing a variety of performance measures, since our 2003 survey of federal managers there generally has been no statistically significant increase in the reported presence of these measures to a great or very great extent. More recently, as illustrated in figure 14, the only statistically significant increase between 2007 and 2013 is in the percentage of managers reporting the presence of quality measures.
Over the years, and through our more recent work, we have further found that there has been uneven development of outcome-oriented performance measures across federal programs, even though agencies have been responsible for measuring program outcomes, among other things, since the passage of GPRA in 1993. As demonstrated in the textbox, outcome-oriented performance measures help agencies determine whether a program is achieving its intended purpose. Additionally, these performance measures are essential for assessing the results of the vast number of federal efforts that span multiple agencies and organizations.

GAO Has Reported on Agency Difficulties in Developing and Using Outcome Measures
In May 2006, we recommended that USDA and DHS adopt meaningful performance measures for assessing the effectiveness of the Agriculture Quarantine Inspection (AQI) program at intercepting foreign pests and disease on agricultural materials entering the country by all pathways and posing a risk to U.S. agriculture.
We reported in March 2013 that the Federal Emergency Management Agency had not yet established clear, objective, and quantifiable capability requirements and performance measures to identify capability gaps in a national preparedness assessment, as recommended in our March 2011 report.
We reported in April 2013 that the Federal Communications Commission, DHS, DOD, and the Department of Commerce had taken a variety of actions to support the security of the nation’s communications networks, including ones related to developing cyber policy and standards, securing Internet infrastructure, sharing information, supporting national security and emergency preparedness, and promoting sector protection efforts.
GAO, Homeland Security: Management and Coordination Problems Increase the Vulnerability of U.S. Agriculture to Foreign Pests and Disease, GAO-06-644 (Washington, D.C.: May 19, 2006). GAO, Homeland Security: Agriculture Inspection Program Has Made Some Improvements, but Management Challenges Persist, GAO-12-885 (Washington, D.C.: Sept. 27, 2012). GAO-11-318SP. GAO, Communications Networks: Outcome-Based Measures Would Assist DHS in Assessing Effectiveness of Cybersecurity Efforts, GAO-13-275 (Washington, D.C.: Apr. 3, 2013).

Our work over the last 20 years has identified difficulties agencies face in measuring performance across various program types, such as regulations and grants. Some commonly reported difficulties that cut across the various program types include:
• accounting for factors that are outside of an agency’s control but affect the results of a program;
• developing appropriate performance measures, especially for programs without a clearly defined purpose or that require a long time period to achieve intended results; and
• obtaining complete, timely, and accurate performance information about the program.
Illustrative examples from our recent work that show how agencies have experienced difficulties in measuring program performance are provided in table 2. In our 2013 annual report on fragmentation, overlap, and duplication, we identified the need for improving the measurement of performance and results—including program evaluation—as a theme that cuts across our suggested actions to address fragmentation, overlap, and duplication in federal agencies. While some agencies have faced difficulties in measuring program performance, some progress has been made in developing performance measures and using the resulting performance information in the applicable program area.
For example:
• HUD has made progress in measuring grant program performance. As we reported in November 2011, HUD measured progress toward some green building goals by collecting energy consumption data for participating properties receiving grants or loans under its Green Retrofit Program for Multifamily Housing before and after the properties were retrofitted, and it planned to use these data to calculate savings and evaluate effectiveness.
• In January 2011, we reported that the Federal Railroad Administration (FRA) had created a set of performance goals and measures that address important dimensions of program performance related to its regulatory safety activities. In its proposed fiscal year 2011 budget, FRA included specific safety goals to reduce the rate of train accidents caused by various factors, including human errors and track defects. These goals were quantitative, with a targeted accident rate per million train miles. Collecting such accident data equips FRA with a clear way to measure whether those safety goals are met. FRA’s budget request also linked FRA’s performance goals and measures with DOT’s strategic goals.
Moving forward, we will continue to examine the availability and use of performance measures across a variety of program types and update our work in this area. Given that we have found that agencies across the federal government have experienced similar difficulties in measuring the performance of different program types and have not made consistent progress in addressing them, a comprehensive examination of these difficulties is needed. The PIC could help facilitate this examination. As discussed earlier, GPRAMA requires the PIC, in part, to resolve crosscutting performance issues and facilitate the exchange of practices that have led to performance improvements within specific programs or agencies or across agencies. Although measuring the performance of different program types is a significant and long-standing challenge, the PIC has not yet addressed this issue in a systematic way, such as through a working group to identify common difficulties in developing and using performance measures to assess program performance and to share best practices from instances in which agencies have overcome these difficulties. Without a comprehensive examination, it will be difficult for the PIC and agencies to fully understand these measurement issues and develop a crosscutting approach to help address them, which will likely result in agencies experiencing difficulties in measuring program performance in the future. According to our 2013 survey of federal managers, 34 percent reported that performance information is easily accessible to agency employees to a great or very great extent, while 17 percent reported that their agency’s performance information is easily accessible to the public to a great or very great extent. Survey data also indicate that agencies are not communicating to their employees about contributions to CAP goals or their progress toward achieving APGs. In fact, of the 58 percent of federal managers who indicated they were familiar with CAP goals, 22 percent reported that their agency has communicated with its employees about those goals to a great or very great extent. Of the 82 percent of federal managers who indicated familiarity with APGs, 40 percent reported that their agency has communicated on progress toward achieving them to a great or very great extent.
We recently reported that Performance.gov, as the central repository for federal government performance information, can assist in oversight and lead to a greater focus within government on the activities and efforts necessary to improve performance. OMB’s stated goals for Performance.gov include, among others, providing both a public view into government performance to support transparency and executive branch management capabilities to enhance senior leadership decision making. According to OMB staff, OMB will maintain responsibility for the website, but going forward, the plans are that the effort will be driven more by the General Services Administration (GSA) and the PIC, with GSA continuing to provide technological support. For future development of Performance.gov, OMB, the PIC, and GSA are working with federal agencies to develop the Performance Management Line of Business that, according to OMB staff, will standardize the collection and reporting of performance information by agencies. Performance.gov has the potential to increase the accessibility of performance information for users both inside and outside the federal government. An analysis of statements from OMB and GSA staff, agency officials, and feedback we obtained from potential users, however, indicates that there are varying expectations regarding the primary uses of Performance.gov. For example, OMB and GSA staff emphasized that they have viewed Performance.gov as a tool for agencies to support cross-agency coordination and efforts to achieve agency goals. Consistent with this, OMB staff said that Performance.gov has been used to facilitate conversations between OMB examiners and agency managers about progress on APGs. While most officials we interviewed said that OMB had collected feedback from the agencies in the development of Performance.gov, officials from most of these agencies also said that Performance.gov is not being used as a resource by agency leadership or other staff, because they have information sources tailored to meet their needs and because Performance.gov does not contain critical indicators or the ability to display some of the visualizations used for internal agency performance reviews. In addition, a performance management practitioner and other potential users of the website noted that the detailed, technical nature of Performance.gov seemed primarily oriented toward a government rather than a public audience. According to OMB staff, the specific legal requirements of GPRAMA have been the primary framework used to guide efforts to develop Performance.gov thus far. They noted that they have been focused on working to comply with these requirements by providing information on CAP goals and APGs, and by establishing a phased development plan for the integration of additional information from agency strategic plans, performance plans, and performance reports. OMB and GSA staff members have said, however, that the leading practices for developing federal websites will be helpful in guiding the future development of Performance.gov. OMB and GSA staff have also noted that as the phased development of Performance.gov unfolds, they expect to use broader outreach to, and usability testing with, a wider audience, including members of the public, to make Performance.gov more "public-facing" and "citizen-centric." In light of this transition, we recommended in June 2013 that OMB work with GSA and the PIC to clarify the specific ways that intended audiences could use the information on Performance.gov.
HowTo.gov, a leading source of best practices and guidance on the development of federal government websites, recommends identifying the purposes of a website and the ways in which specific audiences could use it to accomplish various tasks, and then structuring information and providing tools to help visitors quickly complete those tasks. With greater clarity about the intended uses of Performance.gov, OMB and GSA should have sufficient direction to design Performance.gov to make it a relevant and accessible source of information for a variety of potential users, including those specified under GPRAMA—members and committees of Congress and the public. In the same report, we also recommended that OMB work with GSA and the PIC to systematically collect information on the needs of intended audiences and to track recommended performance metrics that help identify improvements to the website. For example, HowTo.gov practices recommend that a website use consistent navigation. Although users we interviewed had mixed opinions on the organization and navigation of Performance.gov, simplifying the website’s navigation, adding an effective internal search engine, and providing an appropriate level of detail and information for intended audiences could increase the overall usability of Performance.gov. Outreach and testing on the ease of navigation and searching would help OMB systematically collect information on the needs of various audiences and how these needs could be addressed through Performance.gov. With performance goals and measures for the website, it would also be possible for the developers of Performance.gov to identify the gap between current capabilities and what is needed to fulfill stated goals and to identify and set priorities for improvements. OMB staff agreed with these recommendations. Congressional support has played a critical role in sustaining interest in management improvement initiatives over time. As we have previously reported, Congress has served as an institutional champion for many government-wide management reform initiatives over the years, such as the CFO Act and GPRA in the 1990s and, more recently, GPRAMA. Further, Congress has often played an important role in performance improvement and management reforms at individual agencies. Congress has also provided a consistent focus on oversight and has reinforced important policies. As we have previously reported, having pertinent and reliable performance information available is necessary for Congress to adequately assess agencies’ progress in making performance and management improvements and to ensure accountability for results. However, our work has found that the performance information that agencies provided to Congress was not always useful for congressional decision making because the information was not clear, directly relevant, or sufficiently detailed. As stated earlier, in order for performance information to be useful, it should meet the needs of different users—including Congress—in terms of completeness, accuracy, consistency, timeliness, validity, and ease of use. GPRA required agencies to consult with Congress and obtain the views of interested stakeholders as a part of developing their strategic plans.
However, according to the Senate committee report that accompanied the bill that ultimately became GPRAMA, agencies did not adequately consider the input of Congress in developing strategic plans, often because the agencies waited until strategic plans were substantially drafted and reviewed within the executive branch before consulting with Congress. In doing so, agencies limited the opportunities for Congress to provide input on their strategic plans and related goals, as well as on the performance information that would be most useful for congressional oversight. To help ensure agency performance information is useful for congressional decision making, GPRAMA strengthens the consultation requirement. The act requires agencies to consult at least once every two years with relevant appropriations, authorization, and oversight committees, obtaining majority and minority views, when developing or updating strategic plans—which include APGs. Subsequently, agencies are to describe how congressional input was incorporated into those plans and goals. Similarly, OMB is required to consult with relevant committees with broad jurisdiction at least once every two years when developing or updating CAP goals, and to describe how that input was incorporated into those goals. At the request of Congress, in June 2012, we developed a guide to assist Members of Congress and their staffs in ensuring the consultations required under GPRAMA are useful to the Congress. The guide outlines general approaches for successful consultations, including creating shared expectations and engaging the right people in the process at the right time. The guide also provides key questions that Members and congressional staff can ask as part of the consultation process to ensure that agency performance information reflects congressional priorities. However, it is unclear whether agencies incorporated congressional input on their updated strategic plans and APGs published in 2012, and therefore whether this information will be useful for congressional decision making. In our recent review of APGs, we found that agencies reported engaging Congress during the development of their strategic plans and goals to varying degrees, and only 1 of the 24 agencies we reviewed explained how congressional input was incorporated into its APGs, as required by GPRAMA. We recommended in April 2013 that OMB ensure that agencies adhere to OMB’s guidance for website updates by providing a description of how input from congressional consultations was incorporated into each goal. OMB staff concurred with our recommendation. In addition, our recent work indicated that the performance information provided on Performance.gov also may not be meeting congressional needs. We found that outreach from OMB to congressional staff was limited, as were opportunities for staff to provide input on the development of Performance.gov. According to OMB staff, they met several times with staff from the Senate Homeland Security and Governmental Affairs Committee, the House Oversight and Government Reform Committee, and the Senate Budget Committee to discuss the development of Performance.gov, and they used this outreach to identify several specific website modifications. However, of the three congressional staff we spoke with who said they had received briefings on the development of Performance.gov, only one told us she had been asked for input on the website.
In addition, since 2010, OMB staff have not held meetings on the development of Performance.gov with staff from other committees in the House or Senate that might use the website to inform their oversight of federal agencies. As previously mentioned, we also found that OMB has not articulated how various intended audiences, including Congress, can use the site to accomplish specific tasks, such as supporting coordination and decision making to advance shared goals. At the request of the Congress, in December 2011 and June 2012, we highlighted several instances in which Congress has used agency performance information in various oversight and legislative activities, including (1) identifying issues that the federal government should address; (2) measuring the federal government’s progress toward addressing those issues; and (3) identifying better strategies to address the issues when necessary. For example, to help promote the use of e-filing of tax returns with the IRS, Congress used performance information to set clear expectations for agency performance, support oversight activities, and inform the development of additional legislation to help IRS achieve its goals. For further information, see the textbox.

Congressional Use of Performance Information to Promote E-filing of Tax Returns
Congress sought to promote the use of e-filing, which allows taxpayers to receive refunds faster, is less prone to errors, and provides IRS significant cost savings. Congress took the following actions to increase the use of e-filing:
• Setting Expectations: As part of the Internal Revenue Service Restructuring and Reform Act of 1998, Congress established a performance goal of having 80 percent of individual tax returns e-filed by 2007.
• Oversight: Congress monitored IRS’s progress in meeting the established goal for e-filing; held 22 hearings related to IRS filing seasons and e-filing; and requested annual GAO reports to Congress on filing season performance, including e-filing.
• Additional Legislation: Congress saw the need for further actions to help IRS achieve the goal, and subsequently passed legislation to require tax return preparers who file more than 10 returns per year to do so electronically.
Although IRS did not meet the 80 percent e-filing target by 2007 (58 percent were e-filed that year), increased use of e-filing has substantially reduced IRS’s cost to process returns. IRS subsequently met this goal for individual tax returns as of the 2012 tax filing season, with 82 percent of individual returns e-filed. IRS has yet to reach the 80 percent e-file goal for some types of returns other than individual income tax returns. See GAO, 2012 Tax Filing: IRS Faces Challenges Providing Service to Taxpayers and Could Collect Balances Due More Effectively, GAO-13-156 (Washington, D.C.: Dec. 18, 2012).

Moving forward, the federal government will need to make tough choices in setting priorities as well as in reforming programs and management practices to address the pressing and complex economic, social, security, sustainability, and other issues the nation confronts. GPRAMA provides a number of tools that could help address these challenges. Since enactment in 2011, the executive branch has taken a number of important steps to implement key provisions of the act, by developing interim CAP goals and APGs, conducting quarterly reviews, assigning key performance management roles and responsibilities, and communicating results more frequently and transparently through Performance.gov.
However, the executive branch needs to do more to fully implement and leverage the act’s provisions to address these challenges. Our recent work reviewing federal performance issues and implementation of the act has pointed to several areas where improvements are needed and, accordingly, we recommended a number of actions. In addition, examples from our past work along with the most recent results from our survey of federal managers show that the executive branch has made little progress addressing long-standing governance challenges related to improving coordination and collaboration to address crosscutting issues, using performance information to drive decision making, measuring the performance of certain types of federal programs, and engaging Congress in a meaningful way in agency performance management efforts to ensure the resulting information is useful for congressional decision making. Of particular concern, OMB has yet to develop a framework for reviewing the performance of tax expenditures, which represented approximately $1 trillion in forgone revenue in fiscal year 2012. In some areas, forgone revenue due to tax expenditures is nearly equal to or greater than spending for federal outlay programs. Since 1994, we have recommended that OMB take this action, and the act puts in place explicit requirements that OMB identify tax expenditures related to the CAP goals and measure their contributions to broader federal outcomes. While early implementation of CAP goals showed some promise, with tax expenditures being identified as contributing to 5 of the 14 goals, many of those tax expenditures were subsequently removed. For example, our work shows that eight tax expenditures, representing about $2.4 billion in forgone revenue, should be listed as contributing to the energy efficiency CAP goal. The few tax expenditures that continue to be listed as contributors to a CAP goal represent only about $2.7 billion in forgone revenue—approximately 0.3 percent of the total estimate of forgone revenue from tax expenditures. While OMB staff told us the removal of these tax expenditures was an oversight and that they will be added as contributors in the near future, this raises concerns about whether OMB ensured that all relevant tax expenditures were identified as contributors to the 14 CAP goals when they were published in February 2012. Tax expenditures represent a substantial federal commitment to a wide range of mission areas, but they do not receive the same scrutiny as spending programs. Therefore, it is possible that additional tax expenditures should have been identified and included as contributors to one or more of the other 9 CAP goals. Moreover, for the 2 CAP goals from which tax expenditures were mistakenly removed, it is unclear whether OMB and the goal leaders assessed the contributions of those tax expenditures toward the CAP goal efforts, since they were not listed in the December 2012 and March 2013 updates. Without information about which tax expenditures support these goals and measures of their performance, Congress and other decision makers will not have the information needed to assess overall federal contributions toward desired results and the costs and relative effectiveness associated with those contributions. OMB took another promising action in 2012 by directing agencies to identify tax expenditures among the various federal programs and activities that contribute to their APGs—above and beyond what the act requires for all performance goals, which include APGs.
However, the 103 APGs developed for 2012 to 2013 at 24 agencies represent only a small subset of all performance goals across the government. In addition, our review of the APGs for 2012 to 2013 found that only one agency, for one of its APGs, identified two relevant tax expenditures. OMB and agencies are missing important opportunities to more broadly identify how tax expenditures contribute to each agency’s overall performance. In addition to measuring the contributions of tax expenditures to their goals, our work has found that agencies have experienced common issues in measuring the performance of various other types of programs and have not made consistent progress in addressing them in the last 20 years. As such, a comprehensive and concerted effort is needed to address these long-standing difficulties. With responsibilities to resolve crosscutting performance issues and facilitate the exchange of proven practices, the PIC should lead such an assessment. The PIC has not yet addressed this issue in a systematic way, and without a comprehensive examination, it will be difficult for the PIC and agencies to fully understand these measurement issues and develop a crosscutting strategy to address them. That would likely result in agencies continuing to experience difficulties in measuring program performance in the future. The PIC’s upcoming strategic planning effort provides a venue for developing an approach to tackling this issue by putting in place the necessary plans and accountability. The PIC’s strategy should detail specific actors and actions to be taken within set time frames to ensure that these persistent measurement challenges are adequately addressed. To improve implementation of GPRAMA and help address pressing governance issues, we make the following four recommendations. To help ensure that the contributions made by tax expenditures toward the achievement of agency goals and broader federal outcomes are properly recognized, we recommend that the Director of OMB take the following three actions:
• Revise relevant OMB guidance to direct agencies to identify relevant tax expenditures among the list of federal contributors for each appropriate agency goal.
• Review whether all relevant tax expenditures that contribute to a CAP goal have been identified and, as necessary, include any additional tax expenditures in the list of federal contributors for each goal.
• Assess the contributions relevant tax expenditures are making toward the achievement of each CAP goal.
Given the common, long-standing difficulties agencies continue to face in measuring the performance of various types of federal programs and activities—contracts, direct services, grants, regulations, research and development, and tax expenditures—we also recommend that the Director of OMB work with the PIC to develop a detailed approach to examine these difficulties across agencies, including identifying and sharing any promising practices from agencies that have overcome difficulties in measuring the performance of these program types. This approach should include goals, planned actions, and deliverables, along with specific time frames for their completion, as well as the identification of the parties responsible for each action and deliverable. We provided a draft of this report for review and comment to the Director of OMB. Via e-mail, staff from OMB’s Office of Performance and Personnel Management agreed with the recommendations in this report.
The staff also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Director of OMB as well as interested congressional committees and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806, or mihmj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix III. The GPRA Modernization Act of 2010 (GPRAMA) lays out a schedule for gradual implementation of its provisions during a period of interim implementation—from its enactment in January 2011 to February 2014 when a new planning and reporting cycle begins. GPRAMA also includes provisions requiring us to review implementation of the act at several critical junctures and provide recommendations for improvements to its implementation. This report is the final in a series responding to the mandate to assess initial implementation of the act by June 2013, and pulls together findings from our recent work related to the act, the results of our periodic survey of federal managers, and our related recent work on federal performance and coordination issues. Our specific objectives for this report were to assess the executive branch’s (1) progress in implementing the act and (2) effectiveness in using tools provided by the act to address challenges the federal government faces. To address both objectives, we reviewed GPRAMA, related congressional documents and Office of Management and Budget (OMB) guidance, and our past and recent work related to managing for results and the act. We also interviewed OMB staff. In addition, to further address the second objective, we administered a web-based questionnaire on organizational performance and management issues to a stratified random sample of 4,391 persons from a population of approximately 148,300 mid-level and upper-level civilian managers and supervisors working in the 24 executive branch agencies covered by the Chief Financial Officers (CFO) Act of 1990, as amended. The survey results provided information about the extent to which key performance management practices are in place to help address challenges. The sample was drawn from the Office of Personnel Management’s (OPM) Central Personnel Data File (CPDF) as of March 2012, using file designators indicating performance of managerial and supervisory functions. In reporting the questionnaire data, when we use the term “government-wide” and the phrases “across the government” or “overall” we are referring to these 24 CFO Act executive branch agencies, and when we use the terms “federal managers” and “managers” we are referring to both managers and supervisors. The questionnaire was designed to obtain the observations and perceptions of respondents on various aspects of results-oriented management topics such as the presence and use of performance measures, hindrances to measuring performance and using performance information, agency climate, and program evaluation use. In addition, to address implementation of GPRAMA, the questionnaire included a section requesting respondents’ views on various provisions of GPRAMA, such as cross-agency priority goals, agency priority goals, and quarterly performance reviews. 
For the agency priority goal questions, we directed the federal managers from the Nuclear Regulatory Commission not to answer these questions, since OMB did not require the agency to develop agency priority goals for 2012 to 2013. This survey is comparable to surveys we conducted four times previously at the 24 CFO Act agencies—in 1997, 2000, 2003, and 2007. The 1997 survey was conducted as part of the work we did in response to a GPRA requirement that we report on implementation of the act. The 2000, 2003, and 2007 surveys were designed to update the results from each of the previous surveys. The 2007 survey also included a section requesting the respondent’s view on OMB’s Program Assessment Rating Tool and the priority that should be placed on various potential improvements to it. The 2000 and 2007 surveys, unlike the other two surveys, were designed to support analysis of the data at the department and agency level as well as government-wide. For this report, we focus on comparing the 2013 survey results with those from the 1997 baseline survey and with the results of the 2007 survey, which is the most recent survey conducted before GPRAMA was enacted in 2011. We noted the results from the other two surveys—2000 and 2003—when statistically significant differences from the 2013 results occurred. Similar to the four previous surveys, the sample was stratified by agency and by whether the manager or supervisor was a member of the Senior Executive Service (SES) or non-SES. The management levels covered general schedule (GS) or equivalent schedules at levels comparable to GS-13 through GS-15 and career SES or equivalent. Similar to our 2000, 2003, and 2007 surveys, we also incorporated managers or supervisors in other pay plans at levels generally equivalent to the GS-13 through career SES levels into the population and the selected sample to ensure at least 90 percent coverage of all mid- to upper-level managers and supervisors at the departments and agencies we surveyed. Most of the items on the questionnaire were closed-ended, meaning that, depending on the particular item, respondents could choose one or more response categories or rate the strength of their perception on a 5-point extent scale ranging from "to no extent" at the low end of the scale to "to a very great extent" at the high end. On most items, respondents also had the option of choosing the response category "no basis to judge/not applicable." A few items had yes, no, or do not know options for respondents. Many of the items on the questionnaire were asked in our earlier surveys; the sections of the questionnaire asking about GPRAMA, program evaluations, and availability of performance information are new. For these new questions, we conducted pretests with federal managers in several of the 24 CFO Act agencies. For the 2013 survey, based on feedback we obtained from our pretests with managers, we moved the placement of question 8 in the survey to accommodate the insertion of a new question. In previous surveys, only those respondents who answered yes to question 5—that they had performance measures available for their programs—were asked to answer question 8—a series of items about the extent to which they used information obtained from performance measurement when participating in certain activities. Respondents answering "no" or "do not know" to question 5 could skip past the question 8 items. For the 2013 survey, all respondents were asked to answer question 8 because of the new question that was added.
To maintain consistency and comparability with how we have previously analyzed and reported question 8 results, we applied the skip pattern used in prior surveys to question 8 by removing those individuals who did not answer yes to question 5 (and who in the past would have been directed to skip out of answering the question). However, in the e-supplement we report the results as the federal managers answered the questionnaire, regardless of how they had answered question 5. To administer the survey, an e-mail was sent to managers in the sample notifying them of the survey’s availability on the GAO website and including instructions on how to access and complete the survey. With the exception of the managers at the Department of Justice (DOJ), which is discussed below, managers in the sample who did not respond to the initial notice were sent up to four subsequent e-mail reminders and follow-up phone calls asking them to participate in the survey. In our prior surveys, we worked with OPM to obtain the names of the managers and supervisors in our sample as selected through the CPDF. However, since our last survey in 2007, some agencies had asked OPM to withhold from the CPDF the names of individuals within selected subcomponents. We worked with officials at these agencies to attempt to gain access to these individuals to maintain continuity with the population of managers surveyed in previous years. Because of DOJ’s national security concerns about providing identifying information (e.g., names, e-mail addresses, phone numbers) of federal agents to us, we administered the current survey to all DOJ managers in our sample through a DOJ official. To identify the sample of managers whose names were withheld from the CPDF, we provided DOJ with the last four digits of Social Security numbers, the subcomponent, duty location, and pay grade information. To ensure, to the extent possible, that DOJ managers received the same survey administration process as the rest of the managers in our sample, we provided DOJ with copies of the notification, activation (including the web link to our survey), and follow-up e-mails that managers at other agencies received from us. DOJ administered the survey to its managers and conducted follow-up with the nonrespondents. We administered the survey to all 24 agencies from November 2012 through February 2013. To help determine the reliability and accuracy of the CPDF data elements used to draw our sample of federal managers, we checked the data for reasonableness and for the presence of any obvious or potential errors in accuracy and completeness. For example, we identified cases where the managers’ names were withheld and contacted OPM to determine the reason for and extent of this issue. We also checked the names of the managers in our selected sample provided by OPM with the applicable agency contacts to verify that these managers were still employed with the agency in the same role. We noted discrepancies when they occurred and excluded them from our population of interest, as applicable. We also reviewed our past analyses of the reliability of the CPDF data. On the basis of these procedures, we believe the data we used from the CPDF are sufficiently reliable for the purpose of this report. Of the 4,391 managers selected for this survey, we found that 266 of the sampled managers had retired, separated, died, or otherwise left the agency, or had some other reason that excluded them from the population of interest.
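The kinds of reasonableness and eligibility checks described above can be illustrated with a brief sketch. This is a simplified, hypothetical example rather than our actual procedures; the file name, column names, and specific rules are assumptions made for illustration and do not reflect the actual CPDF fields.

```python
import pandas as pd

# Hypothetical extract of the selected sample; the file name and the columns
# ('name', 'grade', 'status') are illustrative, not the actual CPDF fields.
sample = pd.read_csv("selected_sample.csv")

# Flag records with withheld or missing names for follow-up with OPM or the agency.
withheld_names = sample[sample["name"].fillna("").str.strip() == ""]

# Quick reasonableness check: flag grades outside the population of interest.
valid_grades = {"GS-13", "GS-14", "GS-15", "SES"}
questionable_grades = sample[~sample["grade"].isin(valid_grades)]

# Exclude members reported by agency contacts as retired, separated, or otherwise
# no longer in the population of interest.
ineligible = sample[sample["status"].isin(["retired", "separated", "left agency"])]
eligible = sample.drop(ineligible.index)

print(len(sample), "selected;", len(withheld_names), "names withheld;",
      len(ineligible), "ineligible;", len(eligible), "eligible")
```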
We received usable questionnaires from 2,762 sample respondents, or about 69 percent of the remaining eligible sample. In addition, there were 29 persons that we were unable to locate and therefore unable to request that they participate in the survey. The response rate across the 24 agencies ranged from 57 percent to 88 percent. The overall survey results are generalizable to the population of managers as described above at each of the 24 agencies and government-wide. The responses of each eligible sample member who provided a usable questionnaire were weighted in the analyses to account statistically for all members of the population. All results are subject to some uncertainty or sampling error as well as nonsampling error. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. The percentage estimates presented in this report based on our sample for the 2013 survey have 95 percent confidence intervals within plus or minus 5 percentage points of the estimate itself, unless otherwise noted. An online e-supplement shows the questions asked on the survey along with the percentage estimates and associated 95 percent confidence intervals for each item for each agency and government-wide. Because a complex survey design was used in the current survey as well as the four previous surveys, and different types of statistical analyses are being done, the magnitude of sampling error will vary across the particular surveys, groups, or items being compared due to differences in the underlying sample sizes, usable sample respondents, and associated variances of estimates. For example, the 2000 and 2007 surveys were designed to produce agency-level estimates and had effective sample sizes of 2,510 and 2,943, respectively. However, the 1997 and 2003 surveys were designed to obtain government-wide estimates only, and their sample sizes were 905 and 503, respectively. Consequently, in some instances, a difference of a certain magnitude may be statistically significant. In other instances, depending on the nature of the comparison being made, a difference of equal or even greater magnitude may not achieve statistical significance. We note in this report when we are 95 percent confident that the difference is statistically significant. Also, as part of any interpretation of observed shifts in individual agency responses between the 2013 and the 2000 surveys, it should be kept in mind that components of some agencies and all of the Federal Emergency Management Agency became part of the Department of Homeland Security. In addition to sampling errors, the practical difficulties of conducting any survey may also introduce other types of errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to respondents, or in how the data were entered into a database or were analyzed can introduce unwanted variability into the survey results. With this survey, we took a number of steps to minimize these nonsampling errors. 
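Before turning to those steps, the weighting and sampling-error calculations described above can be made concrete with a short sketch. It is a simplified illustration rather than our actual estimation code; the column names, the stratified estimator, and the normal-approximation variance with a finite population correction are assumptions chosen for clarity.

```python
import math
import pandas as pd

# Hypothetical respondent file: one row per usable questionnaire. The columns
# ('agency', 'is_ses', 'N_h', 'great_extent') are illustrative; 'N_h' is the
# population count of managers in the respondent's stratum, and 'great_extent'
# is 1 if the respondent chose "great" or "very great" extent on an item.
resp = pd.read_csv("survey_responses.csv")

strata = resp.groupby(["agency", "is_ses"])
N = strata["N_h"].first().sum()          # total population of managers

p_hat, var = 0.0, 0.0
for _, g in strata:
    n_h = len(g)                          # usable respondents in this stratum
    N_h = g["N_h"].iloc[0]                # managers in this stratum's population
    W_h = N_h / N                         # stratum share of the population
    p_h = g["great_extent"].mean()        # stratum sample proportion
    p_hat += W_h * p_h                    # weighted (stratified) point estimate
    # Normal-approximation variance with a finite population correction.
    fpc = (N_h - n_h) / N_h
    var += (W_h ** 2) * fpc * p_h * (1 - p_h) / max(n_h - 1, 1)

margin = 1.96 * math.sqrt(var)            # half-width of the 95 percent confidence interval
print(f"Estimate: {p_hat:.1%}  95% CI: +/- {margin:.1%}")
```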
For example, our staff with subject matter expertise designed the questionnaire in collaboration with our survey specialists. As noted earlier, the new questions added to the survey were pretested to ensure they were relevant and clearly stated. When the data were analyzed, a second GAO analyst independently verified the analysis programs to ensure the accuracy of the code and the appropriateness of the methods used for the computer-generated analysis. Since this was a web-based survey, respondents entered their answers directly into the electronic questionnaire, eliminating the need to have the data keyed into a database and thus avoiding a source of data entry error. We conducted this performance audit from August 2012 to June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Summary of related recommendations:
• clarify the ways that intended audiences could use the information on the Performance.gov website to accomplish specific tasks and specify the design changes that would be required to facilitate that use;
• seek to more systematically collect information on the needs of a broader audience, including through the use of customer satisfaction surveys and other approaches recommended by HowTo.gov; and
• seek to ensure that all performance, search, and customer satisfaction metrics, consistent with leading practices outlined in HowTo.gov, are tracked for the website, and, where appropriate, create goals for those metrics to help identify and prioritize potential improvements to Performance.gov.
Status: OMB staff agreed with our recommendations.

Summary of related recommendations:
• provide a definition of what constitutes "data of significant value";
• direct agencies to develop and publish on Performance.gov interim quarterly performance targets for their agency priority goal performance measures when the above definition applies;
• direct agencies to provide and publish on Performance.gov completion dates, both in the near term and longer term, for their milestones; and
• direct agencies to describe in their performance plans how the agency’s performance goals—including priority goals—contribute to any of the cross-agency priority goals.
When such revisions are made, the Director of OMB should work with the PIC to test and implement these provisions.
Status: OMB staff agreed with our recommendations.

Summary of related recommendations:
• complete information about the organizations, program activities, regulations, policies, tax expenditures, and other activities—both within and external to the agency—that contribute to each goal; and
• a description of how input from congressional consultations was incorporated into each goal.
Status: OMB staff agreed with our recommendations.

Summary of related recommendations: To improve performance management staff capacity to support performance management in federal agencies, the Director of OPM should, in coordination with the PIC and the Chief Learning Officer Council, work with agencies to:
• identify competency areas needing improvement within agencies;
• identify agency training that focuses on needed performance management competencies; and
• share information about available agency training on competency areas needing improvement.
Status: OPM agreed with our recommendations, and explained that it will work with agencies, and in particular with PIOs, to assess the competencies of the performance management workforce. OPM also stated that it will support the use of the PIC’s performance learning website to facilitate the identification and sharing of training related to competencies in need of improvement.

Summary of related recommendations:
• conduct formal feedback on the performance of the PIC from member agencies, on an ongoing basis; and
• update its strategic plan and review the PIC’s goals, measures, and strategies for achieving performance, and revise them if appropriate.
Status: OMB staff agreed with our recommendations.

Summary of related recommendations: To better leverage agency quarterly performance reviews as a mechanism to manage performance toward agency priority and other agency-level performance goals, the Director of OMB should—working with the PIC and other relevant groups—identify and share promising practices to help agencies extend their quarterly performance reviews to include, as relevant, representatives from outside organizations that contribute to achieving their agency performance goals.
Status: OMB staff agreed with our recommendation.

Summary of related recommendations: The Director of OMB, in considering additional programs with the potential to contribute to the crosscutting goals, should review the additional departments, agencies, and programs that we have identified, and consider including them in the federal government performance plan, as appropriate.
Status: OMB staff agreed with our recommendation. In December 2012 and March 2013, OMB updated information on Performance.gov on the CAP goals. OMB included some of the agencies and programs we identified for select goals, but in other instances eliminated key contributors that were previously listed.

In addition to the above contact, Elizabeth Curda (Assistant Director) and Benjamin T. Licht supervised this review and the development of the resulting report. Tom Beall, Peter Beck, Mallory Barg Bulman, Virginia Chanley, Laura Miller Craig, Sara Daleski, Karin Fangman, Stuart Kaufman, Don Kiggins, Judith Kordahl, Jill Lacey, Janice Latimer, Adam Miles, Kathleen Padulchick, Mark Ramage, Daniel Ramsey, Marylynn Sergent, Megan Taylor, Sarah Veale, Kate Hudson Walker, and Dan Webb made significant contributions to this report. Pawnee Davis, Shannon Finnegan, Quindi Franco, Ellen Grady, Robert Gebhart, Tom James, Donna Miller, Michael O’Neill, Robert Robinson, and Stephanie Shipman also made key contributions.
The federal government faces significant and long-standing fiscal, management, and performance challenges. The act’s implementation offers opportunities for Congress and the executive branch to help address these challenges. This report is the latest in a series in which GAO, as required by the act, reviewed the act’s initial implementation. GAO assessed the executive branch’s (1) progress in implementing the act and (2) effectiveness in using tools provided by the act to address key governance challenges. To address these objectives, GAO reviewed the act, related OMB guidance, and past and recent GAO work related to federal performance management and the act; and interviewed OMB staff. In addition, to determine the extent to which agencies are using performance information and several of the act’s requirements to improve agency results, GAO surveyed a stratified random sample of 4,391 federal managers from 24 agencies, with a 69 percent response rate which allows GAO to generalize these results. The executive branch has taken a number of steps to implement key provisions of the GPRA Modernization Act (the act). The Office of Management and Budget (OMB) developed interim cross-agency priority (CAP) goals, and agencies developed agency priority goals (APG). Agency officials reported that their agencies have assigned performance management leadership roles and responsibilities to officials who generally participate in performance management activities, including quarterly performance reviews (QPR) for APGs. Further, OMB developed Performance.gov, a government-wide website, which provides quarterly updates on the CAP goals and APGs. However, the executive branch needs to do more to fully implement and leverage the act’s provisions to address governance challenges. OMB and agencies have identified many programs and activities that contribute to goals, as required, but are missing additional opportunities to address crosscutting issues. For example, few have identified tax expenditures, which represented about $1 trillion in forgone revenue in fiscal year 2012, due to a lack of OMB guidance and oversight. Therefore, the contributions made by tax expenditures towards broader federal outcomes are unknown. Ensuring performance information is useful and used by federal managers to improve results remains a weakness. GAO found little improvement in managers’ reported use of performance information or practices that could help promote this use. There was a decline in the percentage of managers that agreed that their agencies’ top leadership demonstrates a strong commitment to achieving results. However, agencies’ QPRs show promise as a leadership strategy for improving the use of performance information in agencies. Agencies have taken steps to align daily operations with agency results, but continue to face difficulties measuring performance. Agencies have established performance management systems to align individual performance with agency results. However, agencies continue to face long-standing issues with measuring performance across various programs and activities. The Performance Improvement Council (PIC) could do more to examine and address these issues, given its responsibilities for addressing crosscutting performance issues and sharing best practices. Without a comprehensive examination of these issues and an approach to address them, agencies will likely continue to experience difficulties in measuring program performance. 
Communication of performance information could better meet users’ needs. Federal managers and potential users of Performance.gov reported concerns about the accessibility, availability, understandability, and relevance of performance information to the public. Further outreach to key stakeholders could help improve how this information is communicated. Agency performance information is not always useful for congressional decision making. Consultations with Congress are intended, in part, to ensure this information is useful for congressional decision making. However, GAO found little evidence that meaningful consultations occurred on agency strategic plans and APGs. GAO also found that the performance information provided on Performance.gov may not be fully meeting congressional needs. GAO recommends that OMB improve implementation of the act and help address challenges by ensuring that the contributions of tax expenditures to crosscutting and agency goals are identified and assessed and by developing a detailed approach for addressing long-standing performance measurement issues. OMB staff agreed with these recommendations. GAO also reports on the status of existing recommendations related to CAP goals, APGs, QPRs, the PIC, agency performance management training, and Performance.gov.
In 1985, the Congress required the Department of Defense to carry out the destruction of the U.S. stockpile of chemical agents and munitions and established an organization within the Army to manage the disposal program. The Congress directed the program to provide maximum protection to the environment, the general public, and the personnel involved in disposing of the chemical weapons at the eight storage sites. Further, the Congress authorized the Secretary of Defense to make grants to state and local governments, either directly or through the Federal Emergency Management Agency (FEMA), to assist them in carrying out functions related to emergency preparedness. In 1988, the Army established the Chemical Stockpile Emergency Preparedness Program (CSEPP) to help communities near the stockpile storage sites establish a full level of emergency preparedness and response capabilities. CSEPP also helps to implement emergency preparedness at the Army installations storing the chemical stockpile. The Congress originally set 1994 as the date for the complete destruction of the stockpile. This date was later extended to 2007, after the Senate ratified the Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction, commonly known as the Chemical Weapons Convention, on April 24, 1997. Under the convention, April 29, 2007, is the deadline for the destruction of chemical weapons stockpiles. CSEPP is a partnership among the Army, as custodian of the chemical stockpile; FEMA, which has long-standing experience in preparing for and dealing with all types of emergencies; and state and local governments. In October 1997, the Army and FEMA signed a revised memorandum of understanding under which FEMA assumed responsibility for off-post (civilian community) program activities. The Army continued to manage "on-post" (installation) emergency preparedness and provide technical and financial support for both off-post and on-post activities. FEMA provides the civilian community with expertise, guidance, training, and other support. Specifically, FEMA’s CSEPP roles and responsibilities are to (1) administer the off-post funds; (2) support the states in developing response plans; (3) prepare, develop, deliver, and evaluate training; (4) provide technical assistance; and (5) develop programs for evaluating off-post readiness. Similarly, the states and communities also have responsibility for developing response plans and evaluating resource requirements. To improve overall management, the Army and FEMA use 12 "benchmarks," or performance measures, to execute the program and report on its status. These performance measures were revised in January 2000 and are now also used for budgeting, accountability, and assessing the status of states’ preparedness to respond to chemical emergencies. The Army’s Chemical Demilitarization Program (including CSEPP) has a 1999 total life-cycle (from start to finish) cost estimate of about $15 billion. The Army periodically updates the estimate. In 1985, the Army’s original cost estimate for the disposal project, the largest portion of the program, was $1.7 billion. This grew to nearly $10 billion in 1999. In 1988, the Army estimated that the cost of CSEPP would be $114 million. CSEPP has a 1999 life-cycle cost estimate of $1.2 billion. Sharing responsibility for the program, the Army provides the 10 states and the local communities near the storage sites with funding for the off-post program through FEMA.
As with other emergency preparedness programs, FEMA administers this program through its regional offices to the states. Under the current management arrangement, the Army, FEMA, and the states and counties share responsibility for preparing CSEPP annual budgets. The states and counties are responsible for identifying the requirements and developing annual requests for the critical items that they believe are needed to be fully prepared to respond to a chemical emergency. After each state prepares its initial budget proposal, it negotiates an acceptable level of funding for its proposed projects with the appropriate FEMA regional office. The approved budget proposal is then forwarded to FEMA’s headquarters for further review and approval. After the Army approves a total funding amount that it will transfer to FEMA for CSEPP’s off-post activities, FEMA’s headquarters prepares a Cooperative Agreement with specific activities, funding, and periods of performance for each state. On the basis of these Cooperative Agreements, FEMA issues funds received from the Army as needed throughout the fiscal year to match a state’s budgeted CSEPP spending. The states then apportion the funds among various state agencies and the local communities (counties and cities) surrounding the sites for their CSEPP operations. Although the Army’s Program Manager for Chemical Demilitarization is responsible for the stockpile’s safe destruction, the current arrangement between the Army and FEMA does not provide the Program Manager with direct responsibility for CSEPP. However, in the past, FEMA has received supplemental funding from the Program Manager to help meet CSEPP’s unexpected funding needs. But the Program Manager told us that the program no longer has any uncommitted funds on hand to support CSEPP’s activities. The greatest risk to the local community is from an event that would cause a chemical release while the chemical weapons are in storage. Low-probability occurrences, such as an airplane crash, earthquake, or serious accident in the storage area, could potentially cause a cloud or plume of toxic chemical agent to be released into the air, putting the surrounding community at risk of exposure. In the unlikely event of such an incident, the professional or volunteer emergency personnel in the community would be the first responders. The type of protective action response—evacuation or sheltering in place—would be determined for each of the numerous zones in the counties that surround each site on the basis of recommendations made by emergency personnel at the Army post. To effectively support the evacuation or shelter-in-place emergency response, local emergency management activities require that critical items, such as warning sirens, protective equipment, and response plans, be in place. The Army and FEMA also fund joint training exercises that bring together the personnel, equipment, and response plans to practice emergency response preparedness. To illustrate, figure 1 shows three scenes around a decontamination unit during training exercises at Anniston, Alabama, on March 2, 2001, and at Umatilla, Oregon, on May 8, 2001. The off-post emergency preparedness program is linked to the demilitarization program through its budget and in two other ways. First, the emergency program is designed to protect the public from a chemical emergency while the chemical weapons are in storage and during the demilitarization process.
The public faces the highest risk when the stockpile is in storage because that is when the greatest amount of agent is present. When the destruction of the stockpile munitions begins, the risk to the public begins to decrease as the stockpile diminishes. When the destruction of the chemical weapons at a site is complete, the risk is gone, and CSEPP funding for local preparedness ceases. Second, certain CSEPP and demilitarization program conditions must be met before states will agree that it is safe to begin the destruction operations. If state officials do not believe they have a satisfactory level of emergency preparedness, it will be difficult for the Program Manager for Chemical Demilitarization to begin destruction of the chemical weapons at a stockpile site. This linkage between the demilitarization and the emergency preparedness programs has, in effect, set the official date by which a state must be fully prepared for a chemical emergency as the date when the demilitarization process is scheduled to begin. If a state is not prepared and thus delays the start of demilitarization operations, it will cost the Army millions of additional dollars to pay contractors and support the facility. The Army, FEMA, and the states continue to use the projected start of demilitarization at each facility as the goal for having the needed critical items in place at the local communities near the stockpiles. Furthermore, this date also guides their program management and funding priorities. Likewise, this date is reflected in state laws or planning goals linking the start of demilitarization operations with CSEPP readiness. For example, Oregon requires the governor to officially sign a statement that emergency preparedness at Umatilla is adequate before operations there are authorized to begin. Officials in other states also told us that similar emergency preparedness initiatives need to be completed before demilitarization operations begin. Without state officials' agreement that their emergency preparedness is complete, the Army will not be able to begin demilitarization operations.

CSEPP's funding needs have continued to grow since 1997, after the Army said that the states would have all critical items in place by the end of 1998 and that, in particular, procurement funding requirements would diminish soon thereafter. Funding has generally been in line with the Army's estimates of total needs through fiscal year 2000, but the program has already spent nearly all the procurement funds that had been estimated as needed through fiscal year 2010. The Army and FEMA are recalculating cost estimates for fiscal years 2003-07, but according to information provided by FEMA and the states, even this revised estimate will not include money for all needed items.

According to the Army's and FEMA's financial documents, through the end of fiscal year 2000, the states received about half of the total CSEPP funding. But they have received different amounts because they each have different needs. FEMA officials told us that the Army has generally funded the CSEPP program in line with the Army's life-cycle cost estimate and program cost projections, but they added that these projected amounts are less than past and current individual state requirements. For example, the states have requested critical items that went unfunded because their costs exceeded the procurement funding that the Army had projected. To illustrate, through fiscal year 2000, the Army provided almost 88 percent of the total procurement expenses projected through fiscal year 2010.
(See table 1.) Our review shows that needed procurement funding will exceed the amount estimated for fiscal years 2001-10. In contrast, during fiscal years 1988-2000, the program spent just over 53 percent of the total projected operation and maintenance funds.

FEMA and the Army rely on the states and local communities to initiate funding requests. However, since the eventual funding decisions flow from the Army's budget process, the states and FEMA have found it difficult to fund any newly identified requirements or other valid program needs once the budget is set. Such added costs arise when the states request unanticipated critical items because of unforeseen rapid population growth around some chemical storage sites or when some critical items need unexpected repair or replacement. The Army's budget for CSEPP is part of the Department of Defense's overall planning, programming, budgeting, and execution process, which entails long lead times for planning budget requests. Budget requests must be projected 18 to 24 months in advance of a particular item or funding need. FEMA and the states and local communities have not always adequately planned for anticipated and replacement needs and have had many unanticipated needs arise within this budget window. For example, some new and unanticipated CSEPP requests were not included in the Army's and FEMA's budgets for fiscal years 2002 and 2003 because the Army's budget is already set and cannot be expanded. As a result, FEMA and Army officials told us that when budget cost estimates and funding are below the program's actual requirements, FEMA has had to delay or spread out funding for some critical items. When FEMA and Army officials must deny such funding requests, or so-called "unfunded requests," from the states because funds are not available, the states and local communities lose the opportunity to reach full preparedness because needed critical items are not provided in a timely manner. Correspondingly, if the Army and FEMA do not assist the states and local communities in accurately identifying requirements in a timely manner and determining the appropriate levels of funding, the states may not be fully prepared when chemical demilitarization is set to start. Any delay in achieving full preparedness could, in turn, delay the start of chemical demilitarization operations and would potentially cost the Army millions of dollars and jeopardize meeting the 2007 deadline. This situation may call for increased and more timely federal funding.

The Army, with the assistance of FEMA and the states, began updating the CSEPP life-cycle cost estimate in March 2000 and recalculating cost estimates for fiscal years 2003-07. Army and FEMA officials said that the estimate would increase by about $90 million. Though the revised cost estimate was not available to us at the time of our review, FEMA and state officials told us that it would not include all the critical items that states will require or the associated funding for all needed items. Our discussions with federal, state, and local CSEPP officials identified several items, costing at least $50 million, that were not included in the projected procurement funding requirements. State officials told us that because of population growth and unexpected equipment replacement needs, they were not able to anticipate these critical needs.
Such unfunded items include a communications system for the counties and the state of Oregon, the overpressurization of facilities in Alabama, and highway reader boards (signs) in Indiana. FEMA officials told us they would try to add additional funding needs to the revised cost estimate this summer. However, it is unlikely that these additions will include all of the items needed in the near future. These needed items have to be funded through new appropriations. Though FEMA and the Army have some discretion to reprogram or reallocate some funds for newly identified CSEPP needs, this discretion is limited, and there are few available funds to reprogram to meet unfunded requests. In many cases, personnel in the local communities do not have adequate experience and training to understand, identify, and prepare requests to meet federal and state budget and cost estimates. Thus, FEMA officials told us that state and local CSEPP officials have not always adequately identified the critical items they will need. As a result, the latest cost estimate is not sufficient to fund all critical items, and funding for the program will have to be increased in order to procure all needed items to achieve full preparedness.

Since CSEPP's inception in 1988, the Army has provided $761.8 million in funding. As figure 2 shows, the CSEPP off-post program has received the bulk of program funds since its inception and is growing. Most of the growth in program costs has been in FEMA's off-post program, while funding for the on-post program has stabilized at about $30 million annually since fiscal year 1993. Typical on-post funding requirements include alert and notification and communication equipment, as well as emergency operations personnel and training expenditures. Likewise, off-post funding requirements encompass similar expenditures plus public awareness activities and exercises. The Army's on-post activities received $270.2 million, or about one-third of the funding, and FEMA's off-post activities received $491.6 million, or about two-thirds. Of the total off-post amount, the states received about three-fourths, or $368.9 million, and FEMA used the rest to fund its activities and to purchase items for the states. (See app. II for further information on CSEPP funding amounts and procedures.) The states received varying amounts of funding ranging from a low of $6.2 million for Illinois to a high of $107.8 million for Alabama. (See fig. 3.) Because each state had different emergency response capabilities when the program began, FEMA uses the principle of "functional equivalence" to guide resource allocation. Under this principle, FEMA provides each state or local community with adequate assets to meet a level of response capability agreed to by FEMA, the Army, and the states. Thus, FEMA and the Army provide the states with levels of funding support according to their requirements and mutually agreed-upon needs. For example, each state should have emergency warning sirens; however, the number and location of these sirens would depend upon local conditions and requests.

The Army and FEMA have made significant progress in the last 4 years in enhancing the states' emergency preparedness. Three of the 10 states are fully prepared to respond to a chemical emergency, and four others are close to being fully prepared. (See app. III for more details on each state's status.) In 1997, none of the states had attained all of the items deemed necessary to respond to a chemical emergency.
Despite significant improvements in these states, more work is needed at the remaining three states, where issues about some critical items are still unresolved. One of the counties in Alabama, Calhoun, has no agreed-upon response plan and has not informed the public about the actions residents may be directed to take. This situation raises the question of whether the county will be able to adequately respond to a chemical emergency. Additionally, some state and local emergency management officials indicated that until critical items are in place, they will not support the Army's initiation of the destruction of chemical weapons at the stockpile site in their communities.

Officials at all the locations we visited indicated that their programs have improved since our June 1997 report. In 1997, none of the 10 states had attained all of the program's critical items considered necessary for emergency preparedness; now 3 of the 10 states have. (See fig. 4.) The three states (Maryland, Utah, and Washington) considered fully prepared to respond to a chemical emergency each cited several reasons for its program's success. For example, state and local CSEPP officials in Maryland and Washington indicated that their states had extensive disaster control programs in place prior to CSEPP because of their involvement in the Radiological Emergency Program. In addition, the Maryland state CSEPP director told us that an active cooperative community effort, such as participation in integrated process team meetings, helped CSEPP achieve its goals in Maryland. Utah's and Washington's CSEPP officials indicated that communications, cooperation, teamwork, and interpersonal relationships are the root of their success in implementing CSEPP. Additionally, Washington state's CSEPP officials cited the inclusion of state and local CSEPP officials in the budgeting process as a contributing factor to the program's success. These three states, like the others, have ongoing needs for equipment upgrades, equipment replacement, and/or expanded response capability. For example, additional equipment such as sirens may be required to accommodate population growth. (For further information about these additional needs in each state, see app. IV.)

Four states (Arkansas, Colorado, Illinois, and Oregon) still do not have all the items critical for responding to a chemical emergency. But these states have plans and actions in place to acquire the needed critical items by 2003. FEMA has either funded the items or has taken action to bring the states into compliance with CSEPP guidance. In some cases, the items are currently being distributed. Accordingly, we judged these states to be progressing toward performance goals and full preparedness.

Arkansas still has gaps in four of its critical items. For example, not all of the personal protective equipment has been distributed to the emergency responders. Additionally, two overpressurization projects will not be completed until August 2002. The current tone alert radios do not work as intended and need to be replaced, and not all medical response personnel have received the necessary CSEPP training. Colorado is in the process of distributing its tone alert radios. Once Colorado completes this distribution effort, it will be considered fully prepared. Illinois still has capability gaps in two of its critical items. Although FEMA approved funding for 40 tone alert radios in February 2001, they have not yet been delivered and distributed.
In addition, only one of the three hospitals participating in the program has a full supply of antidote. Oregon still has capability gaps in two of its critical items. The current communications system is cumbersome to use and does not meet CSEPP's standards. A recent proposal to overpressurize five facilities is under review. Although not an item included in the assessment of CSEPP's preparedness by the Army and FEMA, the state also wants monitoring equipment to analyze an area to determine if it is safe to enter after a chemical accident.

The remaining three states (Alabama, Indiana, and Kentucky) do not have several critical items in place. It will require a major effort by the Army, FEMA, and the states and their communities to have them in place in the near future because the states have many unresolved issues concerning these outstanding critical items. If these issues are not resolved shortly, the start of demilitarization operations may have to be delayed. Army efforts to destroy the stockpile within the Chemical Weapons Convention's mandated time frame may also be compromised. For example, plans are for the Anniston, Alabama, site to be operational by March/April 2002—some 9 to 10 months from now—requiring all critical items to be in place by this date. Among the unresolved issues facing the three states are controversies surrounding which facilities to overpressurize, the number of highway reader boards to order, the number of shelter-in-place kits to order, and the strategy for both evacuation and sheltering in place. Delays have been attributed to issues such as (1) complicated projects that were initially managed at the local level but were later assigned to a more experienced entity to manage and (2) the lack of timely federal response to requests.

Alabama has major unresolved issues with FEMA and the Army and is lacking five critical items (overpressurization, tone alert radios, coordinated plans, CSEPP staffing, and shelter-in-place kits). There are unresolved issues with two of these five items. Specifically, the Army's, FEMA's, and Alabama's CSEPP officials have not agreed on how best to address the state's overpressurization projects and its coordinated plans. State officials told us that Calhoun County and FEMA have not agreed on the number of facilities requiring overpressurization systems. FEMA is planning to overpressurize some portion of 28 different facilities but has funded only eight of these projects. FEMA advised us that it believes an additional request by Calhoun County is without merit and not supported by science. The issue of coordinated response plans centers on local preference for a strategy of evacuation. Despite attempts by the Army and FEMA to have the state and Calhoun County officials consider a strategy combining evacuation and sheltering in place, the overall protective action strategy for Alabama's immediate response zone counties covered evacuation only. In 1999, the Army funded a study that designed a strategy with both evacuation and sheltering in place. Talladega County, Alabama, uses the study's guidebook to determine its response strategy. However, Calhoun County's CSEPP leaders and FEMA still do not agree on how to incorporate and resource a strategy that includes shelter in place. As a result, Calhoun County has not participated in FEMA's outreach campaign. In addition to the five critical items, Alabama is also seeking additional sirens and is considering requesting additional personal protective suits and decontamination equipment.
FEMA is in the process of reviewing the request for the additional sirens.

Indiana is lacking four critical items (personal protective equipment, tone alert radios, mobile highway reader boards, and shelter-in-place kits). Three of these items have been received, but they are in storage and will not be distributed until later in the year. Indiana has an unresolved issue with its capability to use highway reader boards. According to state CSEPP officials, the state had proposed using the Indiana Department of Transportation's mobile reader boards during a chemical emergency. However, the transportation department decided that it could not share its reader boards with CSEPP. Now, Indiana's CSEPP managers say they need additional funding to purchase reader boards for CSEPP. According to FEMA officials, the agency has not received a request for highway reader boards. Indiana is also seeking additional sirens, and FEMA is in the process of reviewing this request.

Kentucky is lacking four critical items (overpressurization, tone alert radios, coordinated plans, and medical planning). Kentucky's CSEPP officials and FEMA have yet to resolve the issues involving overpressurization, coordinated plans, and medical planning. Although two schools and one hospital will be overpressurized, state officials have identified at least another 35 facilities that will require additional protection. FEMA and state and local CSEPP officials have not agreed on the number of facilities and type of protection they need. FEMA officials said the U.S. Army Corps of Engineers has studied the need for overpressurization and will recommend the number of facilities. Also, the state and counties are using draft plans that have not yet been approved by state CSEPP officials. Additionally, not all of the 13 hospitals that participate in the program have the needed chemical antidotes. FEMA has not decided whether it will provide funding to fully resource these hospitals. In addition to these four items, Kentucky is seeking additional personal protective equipment, decontamination equipment, and sirens. FEMA is in the process of reviewing the request for these additional items.

Army and state CSEPP officials were concerned that without an approved CSEPP response capability, states will delay the issuance of environmental permits needed before the destruction of chemical weapons can take place. In August 2000, the governor of Oregon appointed an executive review panel to evaluate whether an adequate emergency response program was in place and fully operational for any emergency arising from the storage or destruction of chemical weapons at the Umatilla Chemical Depot. The panel is expected to provide an interim recommendation in June 2001 and a final recommendation in October 2001 on whether the governor should certify CSEPP as fully effective and operational. State CSEPP officials were concerned that the lack of a CSEPP-approved tactical communications system and the state's need for equipment to monitor for chemical agent will delay the issuance of environmental permits in that state. FEMA officials, however, told us they had approved funding for equipment to monitor for chemical agent. Although Alabama does not have a CSEPP certification requirement, state and county CSEPP officials told us they will not support the Army's goal to begin the destruction phase of the chemical demilitarization program until critical CSEPP items are in place and fully operational. CSEPP officials in Indiana and Kentucky expressed similar sentiments.
The Program Manager for Chemical Demilitarization has gone on record as being committed to addressing local communities' concerns regarding CSEPP's readiness in order to avoid delays in the start of demilitarization operations.

The Army and FEMA have improved their joint management of CSEPP since our 1997 report, which found that no state was fully prepared and cited several major management weaknesses. Since then, the Army and FEMA have acted upon our recommendations. They have improved their working relations with each other and have more clearly defined their individual roles and responsibilities. They have not, however, been as successful in their working relations with states and local communities. FEMA, in particular, has not always taken a proactive approach to helping states and their local communities by providing technical support, applying best practices, and disseminating information. FEMA has not provided as much guidance as it could to help local communities fully understand all critical aspects of the program. Thus, the local communities have not been able to take advantage of all available resources, maximize coordination and efficiency, and assume their place as full partners in the program. Additionally, the national benchmarks and accompanying planning guidelines for interpreting and assessing the program's progress are unclear. As a result, communities interpret the benchmarks differently and apply different measures of capability. Moreover, the Army and FEMA have failed to provide enough guidance on an essential element of the program—reentry to areas potentially contaminated by chemical agents. This lack of program guidance has caused uncertainty and concern among state and local CSEPP officials.

Since we reported on a number of management problems with CSEPP in 1997, FEMA and the Army have made considerable progress in how they work together. Among the problems we reported on were that (1) management roles and responsibilities were fragmented between Army and FEMA offices and were not well defined, (2) planning guidance was imprecise, (3) the budget process lacked coordination and communications, and (4) financial data and internal controls were inadequate. Partially in response to our recommendations, in October 1997 they signed a new memorandum of understanding that clarified their roles and responsibilities in the program. This arrangement has greatly reduced conflict in their direction, guidance, and oversight of the program. They also revised benchmarks that are used to identify local communities' needs and progress. In addition, they use national planning guidance to shore up their efforts to enhance accountability and performance.

Since 1997, the Army and FEMA have both been placing greater emphasis on public awareness and readiness campaigns. For instance, FEMA has helped local communities establish procedures for the dissemination of accurate and coordinated information in case of an emergency, and it has established an "integrated process team" at each storage site to obtain community input into initiatives. Also, FEMA and the Army have established a site on the World Wide Web that provides a list of materials that an emergency manager or planner can consult for basic information about the program, including technical reports and publications. The Army and FEMA have not, however, been able to develop the effective working relations with all states and local communities that they developed with each other.
FEMA and the Army have not been proactive in providing some much-needed technical assistance, advice, and budget guidance. This void left some state and local CSEPP officials in seven states without assistance in areas where it was clearly needed. Three states and their communities are still experiencing trouble carrying out their roles and have unresolved issues. For example, many local CSEPP officials do not have the training or substantial expertise in chemical weapons, budgeting, or the acquisition of very specialized high-tech equipment needed for emergency response systems. Yet in spite of complaints by some local CSEPP officials that they need more and better technical and budgetary assistance, Army and FEMA officials have not always reached out to help communities learn what they need, how to get it, or, most importantly, whom they can turn to for assistance. Army and FEMA officials said that they have provided both general and specific information on many of these topics via training opportunities, publications, and copies of exercise reports. But because they view the program as primarily a state-managed endeavor, they also normally rely on the state and local community officials to ask for such assistance.

We found a number of cases where FEMA did not offer specific technical assistance when local CSEPP officials were having difficulties with complicated administrative processes or were unaware of available options to meet requirements. For example, several local community officials said they were unaware that various options for radio communication systems (tone alert radios) and alert and notification systems (sirens) are available or that different states had varying experiences with contractors. Similarly, various state officials said they needed additional technical risk assessment assistance from the Army and FEMA to evaluate the toxic properties of various stored chemicals and the potential adverse exposure effects they may have on humans. Furthermore, several local community officials said that unfamiliarity with federal contracting procedures and accounting practices has caused unnecessary program delays and confusion. Particularly in the case of CSEPP's budgeting matters, the lack of assistance and guidance has created delays in requesting needed items. Many local CSEPP officials told us they still do not understand how the Army's budget process works and how to plan ahead for future requirements and acquisitions. Without accurate and timely estimates, program officials have difficulty determining how much funding they will need and when they will need it.

We recognize the need for the Army and FEMA to give states and local communities both flexibility and sufficient independence in carrying out their programs. However, we believe that the Army and FEMA also have a responsibility to fully inform state and local CSEPP officials of the types of assistance the federal government is able and willing to provide. The FEMA officials we spoke with agreed that some local CSEPP officials may not know of the types of assistance available, but said they had, in most cases, responded to the local officials' needs. FEMA officials said that, starting in January 2001, they began to formally educate state and local officials on budgetary issues through a seminar. However, this single seminar did not reach all CSEPP staff in the states and local communities and will need to be repeated.
In commenting on a draft of this report, FEMA said it is providing other budgetary assistance and guidance in the form of additional instruction on topics such as federal grants and financial processes. Most of these new initiatives had not been fully implemented at the time we ended our review.

Although FEMA and the Army have both been placing greater emphasis on public awareness campaigns, they have not always carried out effective public information or awareness campaigns about CSEPP in local communities. As a result, communities in some states are openly hostile to, or suspicious of, the overall aims and goals of the program and do not see it as their own. Furthermore, FEMA has not taken the lessons learned from some of the more successful states and applied them elsewhere to avoid public relations problems or to increase overall understanding and acceptance of the program.

One prime example of such problems has been the controversy in Alabama over two different types of responses to a chemical emergency: "shelter in place," whereby people seek shelter in whatever building they are in and take specific protective actions, and evacuation, which involves leaving an area of risk until the hazard has passed and the area is safe for return. For years, Alabama's local CSEPP communities had planned only for evacuation. The Army funded the production of a guidebook published in 2000 that provides emergency personnel with step-by-step instructions to evacuate or shelter in place in the event of a chemical accident or incident at the Alabama storage site. County officials claim that the Army and FEMA have been trying to use the guidebook to persuade them to adopt shelter-in-place strategies without addressing several outstanding safety issues. The Army, which funded the guidebook through FEMA under an existing Army contract at the request of the state and counties, initially refused to endorse or assume any ownership of the study. However, the Army acknowledged that local communities' continued reservations about the idea of sheltering in place raised questions about the whole CSEPP concept of sheltering in place. It has now formally supported the guidebook, provided that its use does not hamper the Army's ability to meet mandated alert and notification times to the off-post community. The Army also announced that it would evaluate the assumptions and scope of the guidebook for correctness and applicability. Much of the controversy surrounding the study and its recommended response strategy of sheltering or evacuation was due to poor relations with the Calhoun County CSEPP officials. FEMA and Army officials did not have a "partnered" strategy with local community participants or a coordinated public information initiative on the study, thus causing a public relations problem that placed both agencies on the defensive and in a reactive, rather than proactive, mode.

FEMA has had other controversies that led to similar public relations problems, though not as severe, in Indiana, Kentucky, and Oregon. At various times, some local community leaders have been advocating a greater proactive role by the Army and FEMA in public relations and team-building initiatives for the program—not just for emergency planning, but also for the decision-making process that comes before the planning and that requires local CSEPP officials' involvement, support, and ownership. Strategies that include resources for proactive information campaigns can be very effective in building local CSEPP officials' ownership.
FEMA has rarely leveraged the lessons learned from some of the more successful state efforts and applied them elsewhere to increase effectiveness while avoiding public relations problems. An example of a successful approach that has not been used is FEMA's very positive experience in Oregon, where innovative management schemes and practices were implemented to improve coordination, services, and local community participation. We recommended such program coordination in our 1999 report identifying strategies and results-oriented organizational frameworks for enhancing the program's implementation in Oregon. There, FEMA and the state of Oregon placed both of their CSEPP representatives inside the local community (rather than at state or regional headquarters) to provide a concrete and daily presence that is both reassuring and more immediately effective. In addition, the state of Oregon has organized a governing board—composed of all key state and local CSEPP officials—to provide more direction, coordination, and oversight at the local level. All the Oregon CSEPP community participants we spoke with expressed great satisfaction with this arrangement and feelings of accomplishment, thanks to the new organizational structure. Although FEMA is not actively considering setting up or endorsing similar structures elsewhere, officials said they had explored such an arrangement in Alabama. FEMA also has no plans to disseminate best practices or lessons learned among the different states and communities. The Army and FEMA use the quarterly meetings of CSEPP's state directors and annual gatherings of all CSEPP stakeholders as an opportunity for participants to share information and experiences. Only recently, in November 2000, did FEMA create a public affairs team to recommend ways to ensure that the public is aware of protective action strategies. In addition, FEMA provides, on a Web site, an inventory of literature that may have implications for emergency preparedness. This is not enough. If FEMA had a more timely, proactive approach to sharing lessons learned with all 10 states and had taken the initiative to apply them where unresolved issues were slowing progress, the program would be farther along. A more proactive management approach to sharing and applying success stories, such as the special tone alert radios purchased by Arkansas, might have helped resolve issues in Indiana.

The benchmarks FEMA uses to measure performance are not defined consistently in the national planning guidance and in FEMA's policy papers. The information about the benchmarks in these documents cannot be fully reconciled and used for measuring compliance. Additionally, FEMA officials told us that the benchmarks were not evaluated with the same standards in all states. This makes it difficult to measure and compare performance or accountability and to identify requirements correctly to assist in budgetary determinations. The new and revised national benchmarks that FEMA issued in August 2000 identify both the items and processes necessary for full chemical emergency preparedness. Also in 2000, FEMA and the Army issued supplementary information (policy papers) to the national planning guidance for the development of local emergency response plans. However, the 1996 guidance does not consistently match the definition of terms in the revised benchmarks. Furthermore, the guidance for measuring compliance with the benchmarks (known as "community profile" guidance) is not always internally consistent.
For example, one benchmark says that communities must have a "functioning communications system" (so emergency personnel can talk to one another) and another mentions a "functioning alert and notification system" (to alert citizens of an emergency). But the community profile guidance does not specify what constitutes a functioning item, and the 1996 guidance cannot be traced to the definitions of terms in the revised benchmarks to make that determination.

The Army and FEMA believe that states are in the best position to determine their priorities and requirements. They cite "functional equivalency"—the concept that it is not necessary to provide every local community with identical assets and resources, as long as the community's basic emergency management capabilities meet CSEPP's guidance. Thus, CSEPP policy allows benchmarks to be modified from state to state as appropriate to address any unique community circumstances. In some cases, however, states do need clarification on the benchmarks and additional guidance in order to perform their responsibilities. For example, at least three states (Alabama, Indiana, and Kentucky) have had problems interpreting some of the benchmarks for 2000. In addition, because there is limited guidance on how to measure the local communities' compliance with the benchmarks, state and federal assessments are not standardized. Alabama, Oregon, and Utah, for example, use different grading systems to measure local community compliance. At the same time, FEMA's regional offices have, at times, used their own and different criteria for measuring compliance. Some state officials expressed concerns about the lack of standardization in benchmark measurement, citing, for example, the possible adverse effects that this unevenness may have on funding in states with more rigorous standards.

One of the areas where the Army and FEMA do not agree concerns planning guidance for what is known as "reentry." Reentry is the process of determining if and when it is safe to return to a contaminated area or leave shelters after a chemical accident. In 1996, we reported that the planning guidelines for reentry were missing and needed to be developed. Although the Army did develop draft guidance in 1997, five years after our report no site-specific guidelines for reentry have been distributed or used. Additionally, we found that no one at FEMA knew of generic (not site-specific) guidance issued by the Army in 1997. Neither the Army nor FEMA has endorsed or funded any technical or support studies to assist local communities in planning for reentry. Currently, a working group, composed primarily of state, local, and installation planners, is studying reentry and recovery. The Army believes it has provided an adequate comprehensive framework to communities for developing site-specific plans to address reentry in any given scenario. It said it has conducted classroom simulation exercises on reentry with some communities. However, we do not believe the guidance or exercises are sufficient. The guidance is not site-specific, and the exercises are tabletop—not on-the-ground exercises—and have been limited in number. State and local CSEPP officials do not agree that the Army has provided sufficient guidance for their planning purposes. The principal reason for inaction is a disagreement over whether reentry is in fact part of the initial response to a chemical stockpile emergency, and therefore part of CSEPP.
If it is not considered an element of CSEPP, then it is exclusively under the purview of the Army. While FEMA has been largely noncommittal on the issue, Army officials insist that reentry must be planned and implemented by the Army's Service Response Force, with assistance from state and local officials. Army officials also believe that because every emergency is different and unpredictable, there is no way to assess local preparedness for reentry or make specific reentry plans until an emergency actually happens. State and local CSEPP officials disagree with the Army and have been working together on an interim conceptual plan. A 1994 planning concept paper on recovery from a chemical weapons accident was prepared for the Army, but it contained only limited public awareness information, and no guidance based on it was distributed to the states and their communities. The only guidance prepared by the Army has not been distributed to the CSEPP community or to the FEMA officials we interviewed. Furthermore, the guidance does not address the local CSEPP officials' concerns. The Army and FEMA have thus left unanswered a number of questions on such issues as participants' roles and responsibilities, effective monitoring and verification schemes, and the appropriate types of protective clothing that would be required.

While the Army and FEMA have made considerable progress in assisting state and local communities to be fully prepared to respond to a chemical emergency, thousands of people who live near at least three of the eight chemical storage sites are still at a higher risk of exposure to a chemical accident than necessary. Since the Army and FEMA have not always actively assisted the states in determining their local communities' CSEPP needs, seven states have not been able to provide local emergency responders with all the necessary items. Of these seven, three are still seriously unprepared to respond to a chemical accident. The Army may not be able to begin destroying its chemical agents at two of these sites on schedule unless further improvements are made in the emergency preparedness of those communities. As a result, residents will face higher risks for a longer period; the Army may incur millions of dollars in additional costs to maintain the program beyond its projected completion date; and the Army may not meet the Chemical Weapons Convention destruction deadline.

To ensure that communities are safe and that demilitarization can begin on schedule, the Army and FEMA need to move in a timely manner to apply lessons learned and best practices to improve poor working relations with these states and their communities, especially with those where demilitarization of the stockpile is most threatened by delays. These lessons include better guidance to the state and local CSEPP officials in the three states with unresolved issues to determine needed critical items and additional technical assistance to acquire them. In addition, the Army and FEMA need to improve the accuracy of the life-cycle cost estimate for CSEPP so that estimated funding is sufficient to procure all needed items as quickly as possible. They also need to make the measurement of the program's benchmarks consistent in all states to better monitor accountability and identify requirements correctly, and they need to provide guidance and planning for reentry to all states and their communities.
We recommend that the Secretary of the Army and the Director of the Federal Emergency Management Agency adopt a more proactive approach to improve working relations with Chemical Stockpile Emergency Preparedness Program states and communities. Better relations would help assure the states and their communities that all the necessary actions will be taken to fully prepare them and keep them prepared to respond to a chemical accident. Specific actions should (1) provide technical assistance, guidance, and leadership to the three states with long-standing issues to resolve their concerns, especially Alabama and its issues with sheltering in place, evacuation, and the collective protection of facilities; (2) provide all states and their communities with training and assistance in preparing budget and life-cycle cost estimates and guidance and plans on reentry; and (3) establish specific measures of compliance with the benchmarks to more evenly assess performance and to correctly identify requirements.

In commenting on a draft of this report, FEMA and the Army generally concurred with our recommendations. In its comments, FEMA focused on the "need to capture and share lessons learned and best practices" with local communities and cited a series of very recent initiatives it has undertaken to do so. However, FEMA's characterization of this issue as one of our key concerns is incorrect. Capturing and sharing lessons learned and best practices is only one of several areas in which we believe FEMA and the Army need to become more proactive. These include providing technical assistance, planning guidance, and outreach. FEMA also disagreed with our finding that three states are not fully prepared to respond to a chemical emergency and claimed that the tables in appendixes III and IV show that all states are indeed fully prepared. FEMA claimed that "the language in the body of the report does not accurately reflect the GAO findings displayed in Appendix III and IV." We disagree. As our report and the tables in the appendixes clearly show, seven states do not have all the critical items they need to have in place and functioning in order to respond to a chemical emergency—as FEMA's own criteria (in CSEPP guidance and in FEMA's benchmarks) clearly state that they should. The three states in question, furthermore, are even farther behind in their preparedness than the other four. Moreover, in its comments, FEMA acknowledged that Calhoun County, Alabama, is "far from being fully prepared." The Army's comments are included in their entirety in appendix V. FEMA's comments are reproduced in appendix VI.

We are sending copies of this report to the appropriate congressional offices; the Secretary of Defense; the Secretary of the Army; the Assistant Secretary of the Army (Installations & Environment); the Under Secretary of Defense (Comptroller); the Director, Federal Emergency Management Agency; and the Director, Office of Management and Budget. Please contact me at (202) 512-6020 if you have any questions. Key contributors to this report were Donald Snyder, Joseph Faley, Bonita Oden, James Ohl, and Stefano Petrucci.

During our review, we interviewed officials and obtained data from the Department of Defense, including the Office of the Inspector General. Within the Department of the Army, we interviewed and obtained data from officials in the office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology.
In addition, we obtained data from representatives of the Program Manager for the Chemical Demilitarization Program and the U.S. Army Soldier and Biological Chemical Command. Since we recently examined the Army's on-post efforts, we focused this review on FEMA's off-post, or civilian community, activities. Accordingly, we met with officials of and obtained data from FEMA's headquarters and its regional offices concerned with CSEPP. Furthermore, we conducted site visits and interviewed program officials at the Anniston Army Depot, Alabama; Pine Bluff Arsenal, Arkansas; Pueblo Chemical Depot, Colorado; Newport Chemical Depot, Indiana; Blue Grass Chemical Activity, Kentucky; Edgewood Chemical Activity, Maryland; Umatilla Chemical Depot, Oregon; and Deseret Chemical Depot, Utah. We either visited or contacted state emergency management officials in the 10 states involved in CSEPP: Alabama, Arkansas, Colorado, Illinois, Indiana, Kentucky, Maryland, Oregon, Utah, and Washington.

The counties closest to the chemical stockpile storage sites, and therefore the off-post areas most at risk during a chemical accident, are known as the Immediate Response Zone counties. The adjacent counties, and the areas with a lesser risk, are known as the Protective Action Zone counties. Funding and time schedule constraints did not allow us to visit all of these counties. However, we did interview emergency management officials in all of the Immediate Response Zone counties. These counties are: Calhoun and Talladega counties, Alabama; Grant and Jefferson counties, Arkansas; Pueblo County, Colorado; Parke and Vermillion counties, Indiana; Madison County, Kentucky; Morrow and Umatilla counties, Oregon; Tooele County, Utah; and Benton County, Washington. The state of Maryland refers to the at-risk area as the Emergency Planning Zone; we visited and interviewed emergency management officials in Baltimore, Harford, and Kent counties. We also visited and interviewed emergency management officials in St. Clair County, Alabama, and Pulaski County, Arkansas, both of which are Protective Action Zone counties.

To assess FEMA's financial management controls over CSEPP, we traced the funding provided for this program from the Army through FEMA to the states and local communities. We interviewed officials, obtained data, and examined records to determine (1) the extent of CSEPP's off-post funding provided by the Army to FEMA for fiscal years 1989 through 2000, (2) FEMA's use of these funds, and (3) the funding FEMA provided for the 10 CSEPP states. For the fiscal years 1989 through 2000, we reconciled CSEPP's off-post funding that the Army stated it provided to FEMA with the funding that FEMA stated it received from the Army. We similarly reconciled the amount of funding FEMA stated that it provided to the states with the amount of funding that the states stated they received from FEMA. (A simplified illustration of this cross-check appears below.) We initially wanted to determine the amount of funding used by each of the 10 CSEPP states in terms of the CSEPP National Benchmarks. However, we found that consistent and reliable data were not available, especially for the earlier fiscal years, from either FEMA or the 10 CSEPP states. We also attempted to determine the further distribution of the funding provided to the states and to the local communities. However, not all states were able to easily provide this information for the earlier fiscal years, so we were unable to report these amounts.
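The reconciliation described above is conceptually simple: for each fiscal year, the amount one party reported sending is compared with the amount the other party reported receiving, and any difference beyond rounding is flagged for follow-up. The following minimal sketch, written in Python with entirely hypothetical dollar figures, is not part of our methodology documents; it is included only to illustrate the form such a cross-check can take under the assumption that both parties report amounts by fiscal year in millions of dollars.

    # Illustrative sketch of the funding reconciliation described above.
    # All dollar amounts are hypothetical; the actual review used the agencies'
    # accounting records and financial reports for fiscal years 1989-2000.

    TOLERANCE = 0.05  # allowable difference, in millions, to absorb rounding

    def reconcile(label, reported_by_sender, reported_by_receiver):
        """Compare two sets of reported amounts (in millions) by fiscal year."""
        discrepancies = {}
        for year in sorted(set(reported_by_sender) | set(reported_by_receiver)):
            sent = reported_by_sender.get(year, 0.0)
            received = reported_by_receiver.get(year, 0.0)
            if abs(sent - received) > TOLERANCE:
                discrepancies[year] = (sent, received)
        if discrepancies:
            print(f"{label}: discrepancies to follow up")
            for year, (sent, received) in discrepancies.items():
                print(f"  FY{year}: sender reported {sent:.1f}, receiver reported {received:.1f}")
        else:
            print(f"{label}: amounts agree within tolerance")
        return discrepancies

    # Hypothetical example data (millions of dollars)
    army_to_fema = {1999: 45.0, 2000: 52.3}
    fema_received_from_army = {1999: 45.0, 2000: 51.8}

    reconcile("Army-to-FEMA off-post funding (hypothetical)", army_to_fema, fema_received_from_army)

The same comparison can be repeated for the FEMA-to-state leg of the funding flow; any flagged year would then be resolved through oral statements or additional records, to the extent possible and practical.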
In performing this review, we used the same accounting records and financial reports that the Army, FEMA, and the 10 CSEPP states used to manage and monitor the Chemical Stockpile Emergency Preparedness Program. We did not independently determine the reliability of the reported financial information. In some cases, because of the age of the financial data collected, we had to rely upon oral statements and verified this information to the extent possible and practical.

To determine the status of achieving CSEPP preparedness in communities near the chemical weapons stockpiles and what remains to be done, we started with our 1997 CSEPP report results. Since our 1997 report, FEMA has established new CSEPP National Benchmarks used to identify the capabilities being funded and for the annual reporting to the Congress. In our 1997 report, we considered 8 critical items; in keeping with CSEPP's evolving measures, we considered 19 critical items during this assessment. In determining our performance measures, we in some cases identified sub-elements within a benchmark, and we included reentry. According to Army officials, reentry is not a CSEPP issue. Since the Army and FEMA have yet to resolve their positions on reentry, we did not consider it when determining whether a state is fully prepared. We did, however, solicit comments regarding reentry planning from CSEPP managers at the federal, state, and local levels. For our assessment, a state had to have all its required items (with the exception of reentry) in place and operational by February 2001 to be considered fully prepared. (See table 4 and table 5 in app. III for a status update.) We then obtained FEMA's latest categorization of the preparedness status of the 10 CSEPP states as it relates to these CSEPP National Benchmarks. We then visited each state except Illinois and discussed the preparedness status of its program with the appropriate state emergency management personnel. To the extent possible and practical, we also contacted FEMA personnel from the appropriate FEMA regional offices as well as county emergency management personnel. From this information, we determined the preparedness status of each state's program in terms of how many critical items were in place and determined changes since our 1997 report. We then sent a structured questionnaire to the emergency management personnel in the 10 states to confirm our analysis and obtain their comments.

To ascertain how CSEPP lessons learned are developed and shared among the Army, FEMA, and the local communities and how this process might be improved, we initially contacted the Army and FEMA. We discovered that there is no formal, established CSEPP lessons learned process. Accordingly, we asked Army, FEMA, state, and county officials for examples of the lessons learned that they had shared with each other. We also obtained their concerns and opinions about management issues confronting the program.

We performed our review from November 2000 through April 2001 in accordance with generally accepted government auditing standards, except for limitations regarding financial information.

Since the inception of the Chemical Stockpile Emergency Preparedness Program (CSEPP) in 1988, the Army has provided $761.8 million—$509 million in operation and maintenance funding and $252.7 million in procurement funding. The Army-managed on-post activities at the eight storage sites received $270.2 million (one-third) of the total.
The Federal Emergency Management Agency (FEMA)-managed off-post activities received $491.6 million (two-thirds) of the total. The off-post funds are to be used to help the communities surrounding the storage sites in 10 states enhance their emergency management and response capabilities in the unlikely event of a chemical stockpile accident. The Army funds and FEMA manages the procurement of the additional items needed to bring each community to a CSEPP standard of preparedness. The Army has made several life-cycle cost estimates for the program.

Of the $491.6 million provided for the off-post activities, FEMA used $122.6 million (one-fourth) through fiscal year 2000, including some funds used to support the efforts of the 10 states. This included $79.4 million used by FEMA: $29 million in operation and maintenance funding to support FEMA's headquarters and the six regional offices involved with CSEPP; $42.3 million in operation and maintenance funding to support planning, exercises, training, public affairs, and automation efforts being performed by the CSEPP states; and $8.1 million in procurement funding to support the CSEPP states' efforts. In addition, FEMA currently has $41 million in unissued funding—$1.9 million in operation and maintenance funding for fiscal year 2000 and $39.1 million in procurement funding for fiscal years 1998 through 2000. Most of these funds will be issued to the states for their program efforts, with smaller amounts retained for FEMA's headquarters and regional offices. The remaining $368.9 million, or 75 percent of the off-post total of $491.6 million, was distributed to the 10 states, as shown in table 2.

Annually, each state prepares a budget proposal and, in essence, negotiates a level of projects and funding with the appropriate FEMA regional office. Then, the approved budget proposal is forwarded to FEMA's headquarters for further review and approval. Once approved, FEMA's headquarters prepares cooperative agreements with specific activities, funding, and periods of performance for each state. On the basis of these cooperative agreements, FEMA issues funds in increments through the fiscal year to match the state's budget proposal and agreed-upon activities. The funding provided is within the Army's life-cycle cost estimate. In turn, the states disburse the funds received from FEMA to the various state offices and local communities.

Army funding provided through fiscal year 2000 included $509 million in operation and maintenance funding and $252.7 million in procurement funding, as shown in table 3 below. Of this amount, the Army-managed on-post activities at the eight Army storage sites received total funding of $270.2 million. The $761.8 million total funding from fiscal year 1988 through fiscal year 2000 is slightly below the Army's projected funding.

As part of an acquisition program, the Army prepares a life-cycle cost estimate for CSEPP. In 1997, the Army estimated the life-cycle cost of this program to be $1,273.6 million (in 1997 current-year dollars). Of this amount, $776.2 million ($536.4 million in operation and maintenance funding and $239.8 million in procurement funding) was incurred through fiscal year 2000, and the remaining funds are estimated costs through fiscal year 2010. In 1999, the Army prepared a working life-cycle cost estimate that reflected a slight decrease to $1,237.3 million (in 1999 current-year dollars).
This estimate included $781.7 million ($517.7 million in operation and maintenance funding and $264.1 million in procurement funding) incurred through fiscal year 2000, and the remaining funds are estimated costs through fiscal year 2010. The 1999 working estimate is $19.9 million above the $761.8 million in actual funding provided by the Army through fiscal year 2000. In addition, the Army has an ongoing Defense Acquisition Board Review whereby it and FEMA are undertaking a complete review of the CSEPP life-cycle cost estimate through fiscal year 2009 to more adequately address required resources based upon requirements established by the various on- and off-post entities.

This appendix reviews the development of the CSEPP benchmarks used by the Army and FEMA to measure the program's status and guide funding. We used subcategories of these benchmarks—specific critical items—to measure the program's status in 2001. Overall, half of the needed items are in place in all the states. In 1997, none of the critical items were in place in all the states.

As CSEPP has developed, its performance measures have expanded. In 1993 and 1996, the Army and FEMA issued CSEPP benchmarks and program guidance that identified off-post items critical to responding to a chemical stockpile emergency. Specifically, the National CSEPP Benchmark guidance issued in 1993 identified nine items needed for emergency preparedness: alert and notification system, emergency operations center, communications system, automated-data-processing system, training programs, exercise programs, community involvement (for public information and education), CSEPP personnel, and coordinated plans. The CSEPP National Planning Guidance, dated May 6, 1997, supplements this list by describing various aspects of each needed item so that it meets CSEPP's standards. For example, the 1993 benchmark lists the need for a functional communications system; the planning guidance further states that the system must be reliable with at least two independent methods of simultaneous communications to protect against equipment failure. In August 2000, FEMA and the Army issued CSEPP Policy Paper Number 18, which reaffirms the 1993 guidance and adds three additional benchmarks: administrative support, medical program, and protective action strategy. In addition, according to the FEMA CSEPP FY 2000 Annual Report to Congress (Dec. 15, 2000), personal protective equipment, decontamination equipment, and medical preparedness are needed for operations at the CSEPP sites. These items are now considered in the program's benchmarks.

We used the Army's and FEMA's guidance to measure whether the 18 critical items were in place, were being put in place, or were not agreed to by the states and local communities, the Army, and FEMA. In our 1997 assessment, we considered eight critical CSEPP items. Since that report, we have added 10 more items needed to meet CSEPP's guidance for full preparedness. Some of our critical items are subcategories of the CSEPP benchmarks. For example, in table 4, we divide the CSEPP benchmark alert and notification system into the following categories: sirens, tone alert radios, and highway reader boards. We also included reentry, for a total of 19 items considered. To judge preparedness, we looked at 18 critical items to determine if they were in place and operational (we excluded reentry in this analysis because it does not affect the ability to respond to an emergency).
If an item met the requirements that the states, communities, FEMA, and the Army had agreed to, we measured its status as “Yes.” If the states and communities were in the process of acquiring the item, we measured it as “Partial.” If the item was not in the process of being acquired and there was no agreement to obtain it, then it was measured as “No.” In cases where a state had a critical item in place but required additional equipment, such as sirens to place near newly constructed housing, we coded the status as “Yes*.” This means that the initial requirement had been met, but as the benchmark item was being completed, needs had changed and more of the item was requested. We found that 9 of the 18 CSEPP-funded items are in place and operational in all states where the item was part of the preparedness requirements. Table 5 compares the eight items we reported on in 1997 and in 2001 and shows only four of the eight items in place and operational in all states. Table 6 contains the additional 10 items we reviewed, plus reentry, and shows 5 of these 11 items in place and operational. Four of the eight CSEPP-funded items evaluated in our 1997 report are in place and operational in all 10 states. Since the time of our 1997 report, all 10 states have acquired CSEPP-approved automated data processing systems and emergency operations centers. In addition, the initial requirement for sirens and decontamination equipment has been funded, and the items are in place and operational. However, some states have identified a need to expand their capability in these two areas to accommodate changes in local demographics, such as population growth, and to replace outdated equipment. In some locations, the remaining four items—overpressurization projects, personal protective equipment, tactical communications systems, and tone alert radios—are in varying stages of readiness. Five of the 11 other CSEPP-funded items are in place and operational. All 10 states have CSEPP-approved community involvement, exercise, and training programs in place. They also have functional joint information centers and ongoing public awareness campaigns. The other six items (coordinated plans, CSEPP staffing, highway reader boards, medical planning/support, shelter-in-place kits, and reentry plans) are at varying stages of completeness. This appendix presents the results of our review of emergency preparedness in the 10 CSEPP states. For each state, we list the 19 critical items and provide our assessment of each. We include a summary of the condition of each item in each state, on the basis of our observations and interviews with state and local CSEPP officials in the state. The status of the critical items is discussed for each state in alphabetical order within the categories of fully prepared, progressing, and unresolved issues. Table 7 presents our summary of the comments of state and local CSEPP officials we talked to concerning the status of the 19 critical items in the states. Maryland’s CSEPP officials said that the state had an extensive disaster control program in place prior to CSEPP because of its involvement in the Radiological Emergency Program. Planning for a chemical event is also easier in Maryland because only one chemical agent (mustard) is stored in bulk there, and, according to the Army, mustard agent is the most stable and least toxic agent in the U.S. stockpile. The local CSEPP officials credited the mitigation activities undertaken by the Army with reducing the “at risk” population from 333,000 to 55,000.
In addition, the Maryland State CSEPP director told us that a cooperative community effort, such as participation in the integrated process team (a group of key CSEPP personnel who focus on a particular issue), helps CSEPP achieve its goals in Maryland. Utah’s CSEPP officials said that communications, cooperation, teamwork, and interpersonal relationships are the root of Utah’s success in implementing CSEPP. For example, Utah integrated all of the affected parties and entities into its CSEPP effort early in the program to facilitate effective communications and foster good working relationships among the CSEPP stakeholders. Washington state’s CSEPP officials said that, like Maryland, their state also had an extensive disaster control program in place prior to CSEPP because of its involvement in the Radiological Emergency Program. Like Utah’s officials, Washington’s CSEPP officials cite good coordination among all participating agencies and the inclusion of state and local CSEPP officials in the budgeting process as contributing factors to the program’s success. Arkansas still has gaps in five of its critical items. For example, not all of the personal protective equipment has been distributed to the first responders. According to state CSEPP officials, the overpressurization project at the local high school is underway and expected to be completed in August 2001. The elementary school overpressurization project, which FEMA approved for $2.25 million, is in the design phase, and its estimated completion date is August 2002. According to a state CSEPP official, 15 additional sirens are needed, and FEMA is reviewing this issue. The current tone alert radios do not work as intended, and Arkansas has $2.5 million to replace them. Medical training is ongoing, but thus far not all medical response personnel have received the necessary CSEPP training. Colorado is in the process of distributing its tone alert radios. Once Colorado completes this distribution effort, it will be considered fully prepared. Illinois still has capability gaps in three of its critical items. For example, a state CSEPP official indicated that the state needs additional replacement personal protective suits, and FEMA is reviewing this issue. Although FEMA approved funding for 40 tone alert radios in February 2001, they have not yet been delivered and distributed in Vermillion County. In addition, only one of three hospitals participating in the program has a full supply of antidote. Oregon still has capability gaps in five of its critical items. The current communications system, consisting of a high-banded very high frequency radio, is cumbersome to use and does not meet CSEPP’s standards. A 450-megahertz communications system project has been studied and approved. Its estimated cost is $7.2 million; FEMA is committed to funding the project, which is expected to be complete no later than August 2002. A proposal for five additional overpressurization projects is under review. The state and counties identified a need for additional personal protective suits, sirens, and CSEPP staff. FEMA will validate the need for more suits, and it has funded a sound propagation study to validate the need for the seven additional sirens requested. FEMA officials said they will consider the need for more staff. Oregon has also recently requested chemical-monitoring equipment to allow reentry after a chemical accident.
Alabama has at least two unresolved issues involving overpressurization projects and coordinated plans, resulting in gaps in its emergency response capability. State officials told us that Calhoun County, the Army, and FEMA have yet to agree on the number of facilities requiring overpressurization systems. Calhoun County requested that more than 130 facilities be over-pressurized. Excluding the emergency operations centers, currently there are no facilities in the immediate response zone that have been over-pressurized. According to FEMA officials, FEMA is planning to over-pressurize some portion of 28 different facilities but has funded only eight of these projects. Part of the delay in these projects was due to the limited procurement experience of the county. The projects were turned over to the U.S. Army Corps of Engineers to manage, and work has begun on five schools. Another unresolved issue in Alabama centers on its coordinated emergency response plans. Despite the Army’s attempt to have the state and Calhoun County consider a strategy that includes both evacuation and sheltering, the overall protective action strategy of Alabama’s immediate response zone counties remained evacuation only. As early as November 5, 1993, the Army informed the local emergency management directors of both of Alabama’s immediate response zone counties that an evacuation-only strategy may not be feasible. In 1999, the Army funded a study to produce a guidebook with step-by-step instructions to Alabama county emergency personnel on how best to respond to a chemical emergency. The study supported the Army’s position that a strategy of evacuation and shelter-in-place provided the safest response to a chemical incident. Talladega County, Alabama, uses the guidebook to determine its emergency response strategy. However, Calhoun County’s CSEPP leaders and FEMA still do not agree on how to incorporate and fund the guidebook strategy. FEMA is in the process of funding Alabama’s shelter-in-place kits, providing the resources to purchase additional sirens, hiring additional staff, and supporting a public awareness campaign. In Indiana, it is unclear whether FEMA will provide more funding for highway reader boards. According to state CSEPP officials, FEMA earlier approved funding for reader boards, but the state reprogrammed those funds in support of another CSEPP project, expecting to use the Indiana Department of Transportation’s reader boards during a chemical emergency. However, the transportation department decided that it did not have enough reader boards for CSEPP to use, and Indiana’s CSEPP managers now need more funding to purchase this capability. The state is also now considering purchasing shelter-in-place kits, but FEMA has not yet provided funding. FEMA is also funding personal protective equipment. Kentucky’s CSEPP officials and FEMA have yet to resolve issues involving enhanced sheltering projects, coordinated plans, and medical planning. Although 2 schools and 1 hospital will be over-pressurized, the state identified over 35 facilities that will require enhanced sheltering. FEMA and state and local CSEPP officials have not yet finalized the number of facilities. Also, school buses need to be stationed at two schools to evacuate students during an emergency. Additionally, the state and counties are using draft plans that have not yet been approved. A state CSEPP official we interviewed was unaware of a target date for final approval.
Additionally, of the 13 hospitals that participate in the program, only about half have the needed chemical antidote. Local CSEPP officials are concerned that FEMA has not acted in a timely fashion to fill this gap. FEMA has not decided if it will provide funding to fully outfit these hospitals.
Millions of people who live and work near eight Army storage facilities containing 30,000 tons of chemical agents are at risk of exposure from a chemical accident. In 1988, the Army established the Chemical Stockpile Emergency Preparedness Program (CSEPP) to assist 10 states with communities near these eight storage facilities. The Army and the Federal Emergency Management Agency (FEMA) share the federal government's responsibility for the program's funding and execution. Since its inception, the program has received more than $761 million in funding. One-third of this amount has been spent to procure critical items. Because each community has its own site-specific requirements, funding has varied greatly. For example, since the states first received program funding in 1989, Illinois received as little as $6 million, and Alabama received as much as $108 million. GAO found that many of the states have made considerable progress in preparing to respond to chemical emergencies. Three of the 10 states in the CSEPP are fully prepared to respond to an emergency, and four others are making progress and are close to being fully prepared. This is a considerable improvement since 1997, when no state was fully prepared. However, three states are still considerably behind in their efforts and will require additional technical assistance to become fully prepared to respond to a chemical accident.
Navy ships undergo a variety of tests, trials, and construction after delivery from the shipbuilder (when the Navy takes custody of the ship) and before the Navy provides the ship to the fleet—a time referred to as the post-delivery period. The Navy’s policy for ship delivery is outlined in OPNAVINST 4700.8K, which establishes major milestones, including the beginning (delivery) and end (OWLD) of the post-delivery period, the expected condition of ships and submarines at these milestones, procedures for executing the post-delivery period, and the responsibilities of various Navy organizations during the post-delivery period. Figure 1 provides a notional timeline of the delivery and post-delivery process for new construction ships, per the Navy’s ship delivery policy. Delivery (from shipbuilder): The Navy takes custody of a new construction ship from the shipbuilder at preliminary acceptance, which is also commonly known as delivery. Delivery occurs after the completion of acceptance trials, during which INSURV evaluates the ship and identifies deficiencies (we discuss INSURV’s role in more detail below). The Navy’s Supervisor of Shipbuilding, responsible for ship construction quality, signs a Material Inspection and Receiving Report (Form DD-250) at this time, which includes a list of outstanding construction deficiencies and incomplete work that the contractor is responsible for completing under the terms of the contract. Delivery is the beginning of the post-delivery period. Guaranty period: A specified period of time after delivery during which the shipbuilder retains responsibility for correcting construction defects that arise on the ship after the Navy accepts delivery. The specific terms of the guaranty period, including its duration and who pays to correct deficiencies, are established in the shipbuilding construction contract. Final contract trials: INSURV inspectors conduct a second round of sea trials to determine if there are any defects, failures, or deterioration other than that due to normal wear and tear. Typically, these trials are held prior to the post-shakedown availability. Post-shakedown availability (PSA): A period of work toward the end of the post-delivery period, during which the Navy’s Supervisor of Shipbuilding and other organizations, as appropriate, oversee the correction of deficiencies, installation of class-wide upgrades, and completion of incomplete construction work. The duration and scope vary from ship to ship depending on the ship’s material condition at delivery and whether significant alterations must be implemented during the post-delivery period. OWLD: The date when full financial responsibility for maintaining and operating a ship is transferred from the acquisition command to the operational fleet. In this report, we refer to OWLD as when the ship is provided to the fleet; this date generally concludes the post-delivery period. In addition to these milestones and events that occur on all new Navy ships, Department of Defense (DOD) acquisition policy also calls for events that usually occur during the post-delivery period on one ship per class, typically the first (or lead) ship: Initial operational capability (IOC): A key milestone in weapon system acquisitions that typically refers to the point in time when the warfighter (in the Navy’s case, the operational fleet) has the ability to employ and maintain a new system.
Operational Test and Evaluation: A period of testing to characterize the performance of a ship under realistic operational conditions during a discrete period of time. Testers may also use actual mission performance data and data from fleet exercises in making their assessments. In conducting operational testing, testers make a determination regarding the ship’s operational effectiveness and suitability: For operational effectiveness, testers determine whether or not a ship can perform its missions when operated by the ship’s crew. For operational suitability, testers determine whether or not the Navy can logistically support the ship in the field, with consideration given to interoperability, safety, and reliability, among other attributes. Interoperability measures the extent to which information systems and other equipment work with other Navy systems and with those of other U.S. government agencies, such as the Coast Guard. Reliability measures the probability that the system will perform without failure for a certain period of time and in certain conditions. The post-delivery period requires coordination among many of the Navy’s acquisition and fleet organizations. Figure 2 provides an overview of the organizations involved in the post-delivery period and how they fit together within the overall structure of the Navy. The Chief of Naval Operations (CNO) is the senior military officer of the Department of the Navy. Among other things, the CNO is responsible for determining when to accept delivery of ships from the shipbuilders. The Navy’s ship delivery policy, OPNAVINST 4700.8K, was written and is maintained by the Office of the CNO. Program Executive Offices (PEO) are responsible for all aspects of their assigned shipbuilding programs, including program initiation, ship design, construction, testing, delivery, fleet introduction, and maintenance activities. Responsibilities for managing the designing, building, and testing of new ships are assigned to a shipbuilding program office within the PEO. Program offices are responsible for implementing the Navy’s delivery and post-delivery process, as prescribed in the CNO’s ship delivery policy, OPNAVINST 4700.8K. Naval Sea Systems Command (NAVSEA) is responsible for engineering, building, buying, and maintaining ships, submarines, and combat systems to meet the fleet’s operational requirements. NAVSEA is organized by specialty, such as contracting, engineering, or quality assurance. INSURV inspects newly constructed and in-service Navy ships to assess and track the material condition of the Navy’s active fleet. For new construction ships, INSURV inspects prior to delivery (during acceptance trials) and again prior to the end of the guaranty period (during final contract trials). Commander, Operational Test and Evaluation Force conducts operational testing and serves as an independent evaluator of a ship’s capabilities and supportability. Its operational testing is overseen by DOD’s Director of Operational Test and Evaluation (DOT&E), who issues policy and procedures on operational testing, approves the adequacy of operational test plans, monitors and reviews all operational test and evaluation, and independently evaluates and reports test results. U.S. Fleet Forces Command and Pacific Fleet are the operational fleet forces of the Navy that assume full financial responsibility for operating and maintaining ships at the end of the post-delivery period.
Fleet officials include port engineers, who are responsible for ship maintenance; ship managers, who oversee all aspects of maintaining and operating the ship; and senior crew members, such as the Commanding Officer and Chief Engineer, who are responsible for operating the ships. During the post- delivery period, key organizations within the fleet are the Type Commands and the ships’ crews. The Type Commands provide support during the post-delivery process and manage ship maintenance after ships are provided to the fleet. The ship’s crew begins operating the ship shortly before delivery from the shipbuilder or earlier for vessels that are nuclear-powered. Quality deficiencies are identified throughout the shipbuilding construction process. Navy program managers told us that they assess a ship’s quality and completeness using three primary metrics: (1) trial deficiency correction, (2) certification completion, and (3) casualty report correction. Trial Deficiencies: During acceptance and final contract trials, INSURV documents deficiencies, which are categorized according to their severity, as explained in table 1. The correction of INSURV-identified deficiencies could be the responsibility of the government or the shipbuilder, depending on the nature of the deficiency. If an INSURV deficiency is not resolved before delivery, the Navy usually aims to correct it during the post-delivery period. Certifications: NAVSEA guidance states that the certification process is a critical tool in the effort to ensure ship systems fully meet design specifications and operational standards. There are many different types of ship certifications, from potable water to combat systems. Some certifications are common to all ships, while others apply to specific vessels; for instance, only ships with the ability to deploy aircraft or helicopters require aviation certifications, while submarines require certifications to demonstrate the ability to dive safely. An incomplete certification indicates that required tests are incomplete or that a key system does not meet a specification or standard. The ship’s crew cannot operate particular systems or complete certain missions until certifications are complete, though certifications may be partially completed. For example, a Navy ship may have an interim aviation certification, which can mean that the ship’s crew can only conduct daytime operations or can fly but not maintain certain aircraft. Casualty reports: At or around delivery, the fleet begins operating the ship and may document any mechanical issues the crew encounters in casualty reports. These reports represent significant deficiencies to the pieces of equipment that contribute to the ship’s ability to perform its missions. Casualty reports demonstrate a deficiency but generally do not identify a cause. Causes could be related to construction defects, operator errors, or equipment malfunction. Category 3 and 4 casualty reports indicate degradation to critical mission capability that needs immediate repair, while category 2 reports contain issues that are important to the fleet but do not affect the ship’s core missions. The Navy completes a range of work during the post-delivery period that varies from ship to ship, but generally falls into three categories: Incomplete work is all work that was planned to be completed during construction, but was not accomplished. There are two primary types of incomplete work: 1. 
Deferred work is construction required by the shipbuilding contract but not completed prior to delivery. The Navy may shift completion of this work to the post-delivery period so it can take custody of the ship. In some cases, deferred work remains on the shipbuilding contract; in other cases this work is de-scoped from the original shipbuilding contract to reduce cost and schedule before ship delivery—this work is then completed under a separate contract during the post-delivery period. 2. Contractor and government-responsible deficiencies that are identified during acceptance trials, but not corrected before delivery. These deficiencies can overlap with other incomplete work. Modernizations and upgrades include work to replace existing systems and equipment either because (1) parts or tools are no longer available to maintain the system—a condition known as obsolescence—or (2) the Navy wants to upgrade the system to improve capability. According to Navy officials, a modernization replaces, but does not increase, current capability, while upgrades replace existing systems with more capable alternatives. New work is new ship construction to implement a requirements change or add something to the ship. As many Navy organizations are involved in the post-delivery period, so are different appropriations accounts. Table 2 provides a list of appropriations accounts used during the post-delivery period. All six ships we reviewed that had completed the post-delivery period— LPD 25, LHA 6, DDG 112, LCS 3, LCS 4, and SSN 782—were provided to the fleet with varying degrees of incomplete work and quality problems. Although the Navy resolved the majority of construction deficiencies by the end of the post-delivery period, these ships were not fully complete or free from deficiencies when provided to the fleet. Fleet officials responsible for operating and maintaining these ships reported varying degrees of concern about the overall quality of these six ships, noting that two were ready for operations upon being provided to them but that there were particular quality concerns with the other four. We also reviewed two additional ships that had yet to finish the post-delivery period—CVN 78 and DDG 1000—which are lead ships of a new class of carriers and destroyers, respectively. These ships are also at risk of being delivered to the Navy and, eventually, provided to the fleet with incomplete work and quality problems. We assessed six selected ships that had been provided to the fleet against metrics that Navy program managers identified as indicators of completeness and quality for new ships at the end of the post-delivery period: numbers of (1) uncorrected deficiencies, (2) incomplete certifications, and (3) open casualty reports. These metrics indicated that DDG 112 was largely complete and had few outstanding quality issues when provided to the fleet. Similarly, fleet maintenance officials stated the fleet was generally satisfied with the ship’s condition. Despite some outstanding quality deficiencies, fleet maintenance officials were also satisfied with SSN 782 because the submarine was ready to deploy when it was provided to the fleet and its incomplete work did not hamper the submarine’s operations. In contrast, fleet officials expressed concerns about the quality of LPD 25, LHA 6, LCS 3, and LCS 4, which had significant deficiencies when provided to the fleet. 
Further, fleet engineers and other officials highlighted additional quality issues beyond the scope of these metrics that may have a long-term impact on the maintenance of the ships. Construction deficiencies: While the Navy corrected many construction-related deficiencies during the post-delivery period, all six selected ships still had unresolved construction deficiencies to varying degrees when they were provided to the fleet. INSURV identified these construction deficiencies during sea trials before delivery and categorized them by severity—with starred and Part 1 deficiencies being the most serious. Table 3 shows the quantity and severity of uncorrected INSURV- identified deficiencies at the time the ships were delivered to the Navy and at the end of the post-delivery period when the ships were provided to the fleet. As reflected in table 3, two ships were provided to the fleet at OWLD with starred deficiencies that had previously been waived by the CNO at delivery—LCS 4 and SSN 782. LCS 4 was provided with two open starred deficiencies. One of these concerned a radar system that did not work properly; this problem could have resulted in unintended countermeasure launches. This deficiency was not corrected until nearly 4 months after the ship was provided to the fleet. The other starred deficiency concerned a system planned to help LCS 4 identify friendly and enemy ships, aircraft, and other platforms. Though this system is used across the Navy, LCS 4 has a unique installation which requires additional testing to determine its capabilities and limitations. This deficiency remained unresolved nearly 1 year after the Navy accepted delivery of the ship. The second ship, SSN 782, was provided to the fleet with one open starred deficiency regarding a mast that is only used in certain operations; the CNO’s waiver allowed the fleet to install this mast rather than having the program office complete this task. Also as reflected in table 3, five of the six ships had Part I deficiencies when they were provided to the fleet. Examples of the Part 1 deficiencies that were not resolved when these ships were provided to the fleet included a deficiency with a system used for refueling at sea on LHA 6, incomplete testing on LCS 3’s unmanned aerial vehicle (used for surveillance and minehunting), and a discrepancy with the refrigerant leak monitors on LPD 25. DDG 112 was the only ship among the six that had no significant deficiencies when the ship was provided to the fleet. It had also corrected nearly all of its minor deficiencies. Certifications: All six of the ships we reviewed had incomplete shipboard system certifications when provided to the fleet. Table 4 provides a summary of incomplete certifications for the six ships we reviewed. Navy officials identified several reasons why ship certifications may occur during the post-delivery period—or even after a ship is provided to the fleet—including incomplete installation of critical equipment needed to conduct certifications or challenges in scheduling certification activities, among other things. Among the six selected ships we reviewed, a majority of the required shipboard system certifications were incomplete at delivery, and a large number of these were completed during the post- delivery period. However, in some cases, certifications were not completed before these ships were provided to the fleet, which could have restricted the conduct of certain mission-critical functions. 
In some cases in which the ship was provided to the fleet with incomplete certifications, the program office continued to oversee the completion of this work shortly after OWLD; in other cases, however, the fleet was responsible for the certifications. For instance, three ships—LCS 3, LCS 4, and LHA 6—were provided to the fleet without full aviation certifications, restricting these ships’ aviation operations until the certification requirements were met. In the case of LHA 6, the ship was not authorized to fully operate the Joint Strike Fighter when the ship was provided to the fleet, even though the Navy spent $60 million during the post-delivery period modifying it for Joint Strike Fighter operations. One of the items preventing a full aviation certification on LHA 6 was incomplete work on a lithium-ion battery shop, which charges and stores batteries used by the Joint Strike Fighter for a variety of purposes, including starting the aircraft’s integrated power system. According to a senior fleet official, work on the lithium-ion battery shop was not scheduled for completion until December 2016, 9 months after the ship was provided to the fleet. This work is now complete. Casualty reports: According to officials with two of the program offices, ships should not be provided to the fleet with open category 3 or 4 casualty reports, and some officials stated there should be very few in the less severe categories by the end of the post-delivery period. While the fleet submits casualty reports starting at delivery, the program office is responsible for correcting construction-related problems prior to providing a ship to the fleet. Fleet officials stated that casualty reports submitted within the first 3 months of fleet operations are generally indicative of the ship’s quality, since the crew will begin more fully operating the ship’s systems and equipment and submitting casualty reports when they identify problems. However, officials from several program offices disagreed with this assessment and stated that some deficiencies after the ship is provided to the fleet are due to operator error and are not related to construction quality. Table 5 summarizes the open casualty reports at the time these six selected ships were provided to the fleet and after their initial 3 months of operation. Two of the selected ships had open category 3 casualty reports when the program offices provided the ships to the fleet, and more than half of the ships had casualty reports within the first 3 months of fleet operations. For example, on LHA 6, the program office did not repair an electronic warfare system before the end of the post-delivery period, resulting in a casualty report when the ship was provided to the fleet. In addition, four ships had equipment that failed during the post-delivery period and failed again within 3 months—requiring the fleet to pay for at least a portion of the repair. Furthermore, DOT&E reports confirm that these same pieces of equipment were found to be unreliable during testing, except for the equipment on LHA 6 because this ship has yet to be tested. Examples of equipment that broke during the post-delivery period, after the ship was provided to the fleet, and had issues during testing include anchor system and air search radar (LCS 3); water jet, radar and propulsion systems (LCS 4); and steering system, including steering oil migration (LPD 25). 
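Taken together, the three metrics discussed above (uncorrected trial deficiencies, incomplete certifications, and open casualty reports) function as a simple completeness checklist. The following minimal sketch is purely illustrative; the record fields, field names, and threshold logic are our assumptions for exposition rather than any Navy system or data format. It shows how the end-of-post-delivery conditions that officials described, namely no open starred deficiencies, no category 3 or 4 casualty reports, and all certifications complete, could be expressed.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PostDeliveryStatus:
    """Hypothetical record of the three completeness and quality metrics at OWLD."""
    ship: str
    open_starred_deficiencies: int              # most severe INSURV trial deficiencies
    open_part1_deficiencies: int                # serious, but less severe, trial deficiencies
    incomplete_certifications: List[str]        # e.g., ["aviation", "navigation"]
    open_casualty_report_categories: List[int]  # category numbers: 2, 3, or 4

def meets_completion_criteria(status: PostDeliveryStatus) -> bool:
    """True only if no open starred deficiencies, no category 3 or 4 casualty
    reports, and no incomplete shipboard system certifications remain."""
    no_starred = status.open_starred_deficiencies == 0
    no_severe_casreps = all(category < 3 for category in status.open_casualty_report_categories)
    certifications_complete = not status.incomplete_certifications
    return no_starred and no_severe_casreps and certifications_complete

# Illustrative values only; these are not the actual counts reported for any ship.
example = PostDeliveryStatus(
    ship="Example ship",
    open_starred_deficiencies=1,
    open_part1_deficiencies=12,
    incomplete_certifications=["aviation"],
    open_casualty_report_categories=[2, 3],
)
print(meets_completion_criteria(example))  # False: a starred deficiency, a category 3 report, and an incomplete certification remain
```

In practice, as the examples above show, program offices weigh cost and risk rather than applying a single mechanical threshold, so a checklist like this is only a starting point for assessing a ship's condition.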
Fleet officials, including engineers, maintenance officials, managers, and crew, identified additional issues beyond the ship completeness and quality metrics discussed above that significantly degraded the quality of four of the six ships we reviewed. Fleet officials told us they were generally satisfied with DDG 112 and SSN 782, as these ships were largely complete and ready to deploy when provided to the fleet, did not require significant work, and could be maintained within the fleet’s budget and schedule. For example, while SSN 782 and DDG 112 were provided to the fleet with incomplete certifications, fleet officials reported that the program offices paid for the work to complete these certifications and there were no other major outstanding construction deficiencies that affected the ships’ ability to deploy. In contrast, we found that fleet engineers, operators, and other officials had some quality concerns about LPD 25 and another ship, and significant concerns about the quality of LCS 3 and LCS 4 after these ships were provided to the fleet. Table 6 provides examples of the quality issues identified by fleet officials on these ships. These additional fleet concerns about quality can stem from differences in how the fleet and the shipbuilding program offices assess the quality of new ships. The program offices generally define quality as the degree to which the ship is constructed according to its contract specifications—that is, the design of the ship. In contrast, according to fleet managers and maintenance officials, the fleet’s assessment of quality is based on a ship’s operational capability and maintenance considerations. For example, program officials stated that the contractor-furnished communications system on LCS 3 and LCS 4, discussed in table 6, meets quality expectations because it was installed in accordance with the contractor’s specifications. However, fleet officials have found this system to be of poor quality because it is unreliable and difficult to maintain. According to fleet officials, not addressing these types of quality issues by the end of the post-delivery period results in shifting costs to the fleet’s operations and maintenance funding and contributes to a maintenance backlog from the first day the fleet is responsible for the ship. Our recent work has found that maintenance shortfalls generally increase throughout the life of a ship, which increase costs and consume time that is needed for training and operations. DDG 1000 and CVN 78 are technologically complex, first-in-class ships for which the Navy is pursuing delivery and post-delivery plans that deviate significantly from the Navy’s process for constructing more typical surface ships. For these two programs, the Navy plans to rely on waivers or exceptions to its policy, allowing it to accept delivery of these ships from the shipbuilder in incomplete condition. This will, in turn, lead to the Navy conducting more work during the post-delivery period than the other ships we reviewed, including deferring a substantial amount of construction work to the post-delivery period to save money, reach delivery more quickly, or incorporate later versions of technology, among other reasons. For CVN 78, cost growth and delays led the Navy to accept delivery of the aircraft carrier with a substantial amount of incomplete work. In the case of DDG 1000, the Navy has planned a two-phase construction approach in which the hull, mechanical, and electrical systems were delivered first, prior to the combat systems. 
The Navy is now planning a delivery approach for CVN 79, the second ship in the Ford class, that is similar to the approach for DDG 1000. For CVN 78 and DDG 1000, the Navy plans to complete significantly more work and testing during the post-delivery period than for the other six ships we reviewed. As such, CVN 78 and DDG 1000 are at greater risk of being provided to the fleet at the end of their post-delivery periods with incomplete construction work and unknown quality. The Navy took delivery of CVN 78 with a significant amount of work scheduled for completion during the post-delivery period, including completing construction and executing a number of tests and trials. Some of this work, particularly several tests and trials, is not scheduled until after the ship is provided to the fleet (following OWLD). For example, at delivery, the ship had yet to complete its navigation certification and cybersecurity inspection; in addition, as planned, the carrier did not yet have all of the certifications necessary to conduct aviation operations, among other things. The magnitude of construction work that has been deferred to the post-delivery period has also contributed to the Navy’s decision to schedule combat and warfare systems certification after the ship is provided to the fleet. For this reason, CVN 78 will not be ready for deployment until fiscal year 2021 at the earliest, even though the Navy accepted delivery of the ship in May 2017 and plans to provide it to the fleet in fiscal year 2019, as shown in figure 3. The completion of the aircraft carrier’s outstanding tests and trials, deferred construction, and other work is planned to cost nearly $780 million and take more than 4 years. For example, the Navy plans to spend over $400 million to conduct several years of testing, including full ship shock trials, a total ship survivability trial, and operational testing, with associated maintenance to correct deficiencies from these tests and trials. As we have previously found, construction challenges and continuing work on maturing technologies—combined with a $12.9 billion construction cost cap, which the program office is actively managing to—have resulted in the Navy’s decision to accept delivery of CVN 78 with incomplete work. The timely and successful execution of tests and trials during the post-delivery period remains dependent on the maturity of key technologies, including the advanced arresting gear (used to stop aircraft on the flight deck), dual band radar (used to track aircraft, among other tasks), and advanced weapons elevators (used to move ordnance). For instance, program officials reported that only 2 of the 11 advanced weapons elevators would be installed prior to delivery; the installation, testing, and certification of the other 9 elevators have been deferred to the post-delivery period. Additionally, while installation of the advanced arresting gear and dual band radar is complete on CVN 78, the Navy plans to continue testing these systems during the post-delivery period to verify that they will perform as intended. It is likely that significant work will be required on all three of these systems during CVN 78’s post-delivery period, particularly because DOD’s Director of Operational Test and Evaluation found in June 2016 that each system continues to have poor or unknown reliability. According to DOT&E’s report, these reliability issues are the most significant risk facing the program.
Beyond the completion of these tests and trials, in November 2014, we found that CVN 78 will have significant incomplete construction work at delivery, which is being deferred to the post-delivery period. This deferred work included building 367 compartments that were de-scoped from the shipbuilding contract, installing 12 government furnished systems not completed during construction, installing 10 modernized systems, and completing at least 147 other work deferral requests. The CVN 78 program office estimates that this deferred work will cost at least $65 million. Table 7 provides examples of construction work on CVN 78 that has been deferred to the post-delivery period. Due to the magnitude of deferred work planned for the CVN 78 post- delivery period, PEO Aircraft Carriers has determined that a final contract trial, which typically occurs before the post-shakedown availability per the Navy’s ship delivery policy, would be of limited utility for CVN 78. Instead, the Navy’s senior aircraft carrier acquisition official has requested the CNO waive the requirement for a final contract trial and grant permission for the program to conduct a special trial after the post-shakedown availability, when the deferred work will be complete and the crew will have completed training on the aircraft carrier’s new systems. When requesting this permission, the official provided the CNO with advance notice that the program would require a waiver at delivery for the work that will be deferred to the post-delivery period. By design, the Navy planned to deliver DDG 1000 in two phases—the first phase included only the hull, mechanical, and electrical systems of the ship, followed by a second phase to activate the combat systems. In May 2016, the Navy accepted delivery of the hull, mechanical, and electrical portion of the ship and is now beginning post-delivery efforts, including combat systems activation and the installation of several shipboard systems, such as the navigation system, the close-in gun system, the communications system, and advanced flight deck lighting. Following combat systems delivery planned for fiscal year 2018, DDG 1000 will begin 2 years of tests and trials, during which time the ship will complete various certifications and an operational evaluation. As a result of delays during construction of the hull and the two-phased approach, 24 required shipboard system certifications were incomplete at delivery, including the certifications for aviation and navigation. For example, testing of the advanced stabilized glide slope indicator, which is a helicopter landing system that previously encountered challenges and delays on LCS 3 and 4, was deferred to the post-delivery period. Given the scope of deferred work and testing, DDG 1000 will not be provided to the fleet until fiscal year 2020 (potentially a delay of more than a year from the Navy’s estimates in 2016), making this the longest post-delivery period of the eight ships we reviewed. Figure 4 provides an overview of the post-delivery schedule for DDG 1000, with hull, mechanical, and electrical delivery occurring approximately 5 years before the ship will be deployment-ready. When the hull, mechanical, and electrical systems were delivered, DDG 1000 had 32 unresolved starred deficiencies that required CNO waivers and 291 uncorrected Part I deficiencies, out of an overall total of 3,457 trial deficiencies on these systems. 
For example, INSURV issued a starred card on DDG 1000’s navigation system, which the CNO had to waive before the Navy could accept delivery. At the time of the acceptance trial, the ship was equipped with a temporary navigation system; its planned navigation system will be installed during the post-delivery period. INSURV and the DDG 1000 program office plan to hold a second acceptance trial for the ship’s combat systems during the post-delivery period. During this second acceptance trial, the program plans to have INSURV re-inspect the hull, mechanical, and electrical deficiencies that have been corrected. Currently, the Navy’s program office is not planning to conduct a final contract trial because the two-phased delivery approach calls for post-delivery work well beyond that of the original shipbuilding contract. The Navy’s ship delivery policy emphasizes the importance of ensuring that defect-free and mission-capable ships are provided to the fleet. But the policy does not elaborate on which defects it is referring to or when they should be corrected. All Navy program offices we spoke with said that, in general, delivering a ship free from all government and contractor deficiencies is not realistic—for instance, some deficiencies would require a disproportionate amount of time or money to correct and do not merit the cost of delaying ship delivery. In addition, while the policy states that ships will be fully mission-capable, it does not define what levels and aspects of performance would meet that objective. Further, the policy identifies INSURV as the independent entity responsible for verifying the quality of Navy ships and making a recommendation for fleet introduction. However, we found that INSURV does not make a recommendation for fleet introduction because its inspections occur well before ships are provided to the fleet. As a result, INSURV does not assess the condition of the ships after the majority of post-delivery work is completed, and therefore cannot ensure that all defects have been corrected prior to ships being provided to the fleet at OWLD. The Navy’s ship delivery policy does not provide sufficient guidance or specificity on (1) what constitutes a defect-free ship, (2) what constitutes a mission-capable ship, and (3) the timing of when newly constructed ships are to be free from deficiencies and mission-capable. In the absence of clarity, we found that Navy program officials have different interpretations regarding how to meet the policy’s goals and by when, resulting in variations in quality among ships provided to the fleet—including deficient and incomplete ships. Although the Navy’s policy asserts a goal of providing defect-free ships to the fleet, it does not define what types of deficiencies must be corrected in order for a ship to be considered free of deficiencies. Specifically, the policy requires that Navy shipbuilding programs deliver to the Commander of U.S. Fleet Forces Command “complete ships, free from both contractor and government responsible deficiencies.” However, the policy does not explain what constitutes a defect-free ship with respect to providing ships to the fleet. A clear and comprehensive definition is important because it provides a framework for measuring performance. According to the Standards for Internal Control in the Federal Government, government agencies must create policies that are clear and measurable, and use performance measures to assess whether or not the designed policy objective is being achieved.
In the absence of a clear definition, ship program offices do not have a consistent view regarding what standards constitute a defect- free ship. We asked each of the seven program offices responsible for constructing the eight ships we reviewed to define what constitutes a complete and quality ship when provided to the fleet. Table 8 illustrates the varying responses we received. In addition, officials from every program office we spoke with stated that providing a ship free from all government and contractor deficiencies is simply not realistic. In particular, several of these officials stated that the Navy may decide to leave some deficiencies uncorrected if the repair would be cost-prohibitive or if the deficiency has minimal impact on the capability of the ship. The current ship delivery policy does not account for these situations. In practice, ship program offices balance risk and cost when choosing what deficiencies to correct during the post-delivery period. For example, low-cost items with a high impact on capability or quality will be fixed first, while high-cost items with low impact on quality will be prioritized much lower. Officials from the Office of the CNO (responsible for the ship delivery policy) reported a similar caveat to the stated goal of providing deficiency- free ships to the fleet. According to these officials, ships are considered to be free from deficiencies as long as all defects have been “adjudicated”— in other words, the deficiencies have been identified and there is a plan to fix them. However, the ship delivery policy does not include this caveat and provides no guidance for how to prioritize deficiencies. In the absence of clear and comprehensive guidance that realistically establishes what it means to provide a defect-free ship to the fleet, including the types of deficiencies that must be corrected, program offices and fleet representatives will continue to have a conflicting understanding of the policy’s goal of providing complete and quality ships to the fleet. While the Navy’s ship delivery policy states that ships should be mission- capable, the policy does not define what levels and aspects of performance would meet that objective. The policy states that ships should be “capable of supporting the Navy’s mission” and “fully mission capable, in the sense that all contractual responsibilities shall be resolved, prior to delivery, except for crew certification, outfitting, or special Navy range requirements which cannot be met until after delivery.” However, the policy does not define full mission capability in terms of the ship’s operational effectiveness and suitability in general— metrics typically associated with determining mission capability in DOD acquisition guidance. Operational suitability assesses the reliability, maintainability, and availability of a ship, which inform the Navy’s assessment of the probability that the ship will perform without failure for a certain period of time and in certain conditions. While the Navy conducts testing to determine the operational suitability of new ship classes, program offices do not factor these tests into their assessment of full mission capability and therefore do not consider the results of these tests prior to providing new ships to the fleet. For example, the Navy decided to provide LHA 6 to the fleet before it had completed these tests. In addition, CVN 78, also a lead ship, is planned to be provided to the fleet prior to undergoing an operational suitability assessment during testing. 
The policy does not address the role of operational suitability in a ship’s ability to be mission-capable or whether a ship should be provided to the fleet that has yet to be operationally tested. Furthermore, the ship delivery policy makes no distinction between early- in-class ships and later-in-class ships, which Navy program and fleet officials identify as a key predictor of completeness and quality, with earlier ships being more likely to experience problems. For example, three of the six ships we reviewed (LCS 3, LCS 4, and LHA 6), all earlier in class, were provided to the fleet either without being tested or after being found unsuitable for fleet operations due to unresolved concerns regarding the equipment reliability, maintainability, crew training, or other aspects crucial to successfully demonstrating adequate mission performance. Table 9 illustrates the status of operational suitability of the classes of ships at the time the six ships we reviewed were provided to the fleet. One reason later-in-class ships are generally better quality than earlier-in- class ships is that the Navy makes corrections based on tests and feedback from operational missions that may be factored into the design and construction of future ships. However, the policy does not articulate mission capability in terms of operational effectiveness and suitability metrics and does not make any distinctions for early or first-in-class ships. The Standards for Internal Control in the Federal Government emphasize the importance of clearly defined and specific objectives. Incorporating a mission capability definition that includes levels and aspects of ship performance into the Navy’s policy would provide program offices and fleet representatives more clarity about the expected level of capability of ships when they are provided to the fleet. In addition to a lack of definitional clarity, the Navy’s ship delivery policy does not specify when a ship should be defect-free and mission-capable. The Navy’s ship delivery policy and officials with the Office of the CNO, who are responsible for the policy, identify two different time frames regarding when ships should be complete, defect-free, and mission capable: 1. at delivery, when the Navy accepts custody of the ships, and 2. at OWLD, when the Navy provides the ship to the fleet. Consequently, we found confusion among policy makers and program offices as to when a defect-free and mission-capable ship is expected to be achieved. We have identified this issue in our previous work and made recommendations, which have not been addressed to date. Specifically, in November 2013, we found that CNO officials stated that the intention of the ship delivery policy was for ships to be defect-free and fully mission- capable when delivered from the shipbuilder; that is, at the beginning of the post-delivery period. At the same time, however, we also found that program officials believed a ship did not need to be free from deficiencies and fully mission-capable until it was provided to the fleet, that is, at the end of the post-delivery period. We recommended in our November 2013 report that the Navy clarify the policy with regard to the point at which deficiencies are to be fully corrected. DOD partially concurred with this recommendation and stated that the Navy’s goal is to reduce the number of deficiencies at delivery to zero “when practical,” although the ship delivery policy itself includes no such caveat. 
The Navy revised its ship delivery policy in October 2014 to clarify roles and responsibilities, among other things, but the timing of defect correction and mission capability was neither clarified nor addressed. Office of the CNO officials stated that they were not aware of our recommendation when revising the policy. Similarly, in speaking with a range of officials across the Navy for this review, we continued to find conflicting views on when ships are to be deficiency-free and mission-capable. CNO officials responsible for the policy told us that ships should be free of all deficiencies by the time they are provided to the fleet, meaning at the end of the post-delivery period at OWLD, which is a change from their previous interpretation of the ship delivery policy that they authored. However, the policy does not include this clarification on the timing. In contrast, INSURV officials told us they believe the policy states that the shipbuilder should deliver defect-free ships at the beginning of the post-delivery period, with a few exceptions for items that can only be accomplished during the post-delivery period. However, as noted above, the Navy often delivers ships with open starred deficiencies, INSURV’s most severe category of ship deficiency. For the eight ships we reviewed, INSURV identified a total of 117 starred cards before delivery during acceptance trials. Twelve starred cards were corrected prior to delivery, while the remaining 105 were waived by the CNO. Despite these deficiencies, INSURV recommended that the CNO accept the ships. In fact, INSURV officials stated that they have recommended against delivery only once in 18 years. Standards for Internal Control in the Federal Government require objectives to be clear and measurable. Without clarifying when ships should achieve a certain level of completeness and quality, the Navy does not have a clear standard or objective against which it can measure the condition of its ships and ensure quality. The Navy’s ship delivery policy identifies INSURV as the independent entity charged with verifying the quality of ships at delivery and recommending introduction to the fleet. But we found a disconnect between the Navy’s policy and INSURV’s practice. While INSURV makes a recommendation for ship delivery, officials stated that they do not make a recommendation for provision to the fleet because ship trials are not well-timed to independently verify the completeness and quality of ships at the point when they are provided to the fleet. As figure 5 illustrates, INSURV currently conducts acceptance trials and final contract trials prior to delivery and the post-shakedown availability, respectively, but does not conduct a trial between the post-shakedown availability and the end of the post-delivery period (at OWLD)—the point at which ships are provided to the fleet. Significant work is conducted during the post-shakedown availability. For the six ships we reviewed that have completed the post-delivery period, post-shakedown availability costs ranged from approximately $30 million to $83 million per ship and ranged in duration from 3 months to 16 months. According to INSURV officials, the post-shakedown availability used to be a minor availability but, increasingly, ships are undergoing higher-intensity and more complex activities during this period, including correcting starred INSURV deficiencies, finishing construction, installing new systems, and modernizing equipment.
For instance, of the four ships we reviewed that had starred deficiencies waived at delivery, all four had starred cards that remained open during INSURV's final contract trials because the program office planned to fix these deficiencies during the post-shakedown availability. The correction of these starred deficiencies was therefore not inspected by INSURV. According to INSURV officials, because a significant amount of work is conducted during the post-shakedown availability, the ship's condition at final contract trials is not indicative of the ship's condition when it is provided to the fleet following this availability. Therefore, INSURV cannot make a recommendation for fleet introduction based on the final contract trial—INSURV's last inspection before ships are provided to the fleet. As a result, the Navy is providing ships to the fleet with systems and equipment that were repaired or changed during the post-shakedown availability and have not been verified by INSURV, creating a greater potential for breakdowns or failures that would be the responsibility of the fleet to repair. For example, INSURV identified leaking couplings during LPD 25's acceptance trial in October 2013. The LPD 17 program repaired the couplings during the ship's post-shakedown availability in June 2015—after INSURV had conducted the final contract trial in November 2014. The ship was then provided to the fleet in July 2015. Shortly after the ship was provided to the fleet, according to fleet engineers and operators, the new couplings—designed to last the life of the ship—failed again, requiring the fleet to pay approximately $600,000 to replace them each time they failed, roughly every 3 months. The root cause remains under investigation, according to fleet engineers, although program officials stated that the leaks were due to a manufacturing defect that has now been corrected. In another example, INSURV identified several issues with LCS 3's anchor that precluded the crew from retrieving it. The LCS program office repaired the anchor during the post-shakedown availability, following final contract trials. Following fleet introduction, the anchor failed again, and the fleet was required to fix it. Under current practices, INSURV also does not have an opportunity to inspect ship changes that are implemented during the post-shakedown availability. LHA 6, for example, was modified so that it can operate with the Joint Strike Fighter—these changes totaled approximately $60 million—but INSURV did not inspect the changes. Resolving complications from this work, such as issues with the lithium-ion batteries we noted above, will be the fleet's responsibility. Lastly, several programs install new equipment during the post-shakedown availability, such as aviation and information technology systems. In the absence of an INSURV fleet introduction recommendation, the Navy's current practice does not align with its ship delivery policy, and uninspected equipment is provided to the fleet. There are some rare cases in which INSURV and the program office have agreed to inspect specific issues after the post-shakedown availability and before the ship is provided to the fleet. For instance, INSURV conducted a limited post-repair trial on LCS 3 that looked at a few specific issues, such as the anchor, and it plans to conduct a special trial on CVN 78 following the aircraft carrier's post-shakedown availability. Navy program office and INSURV officials cited two factors that influence the timing of final contract trials.
First, the final contract trial occurs just prior to the end of the guaranty period, which enables INSURV to identify deficiencies the contractor may be responsible for correcting prior to the expiration of the guaranty period. Second, INSURV and program officials stated that final contract trials inform the program office’s prioritization of deficiency correction during the post-shakedown availability. Program officials stated that this ensures that construction funding (SCN) is obligated for the highest-priority post-delivery work before OWLD—the final point at which the Navy can obligate shipbuilding and conversion funds before providing the ship to the fleet. While the timing of final contract trials facilitates the prioritization and funding of post-delivery work, it is not optimally aligned to verify that the work completed during post-shakedown availability meets quality standards before a ship is provided to the fleet. INSURV officials stated that there could be benefits to conducting an additional trial before providing a ship to the fleet. For example, they could re-inspect deficiencies, like the ones noted above, that the program office corrects during the post-shakedown availability. These inspections could, in turn, reduce the likelihood that systems and equipment break down shortly after ships are provided to the fleet. However, INSURV and Navy program officials also pointed out that conducting another trial after the post-shakedown availability would require additional funding. The Navy has not evaluated the cost or quality risks associated with providing the fleet with unverified repairs and equipment—such as the fleet’s costs to repair construction defects—against the costs of conducting an additional INSURV trial after the post-shakedown availability. Until the Navy studies this problem and develops a solution that reconciles current practices with its ship delivery policy, the Navy will not know whether the benefits of conducting an additional inspection outweigh the costs. The Navy’s Selected Acquisition Reports to Congress do not clearly communicate ship progress toward completeness and capability, which can inhibit oversight, particularly in terms of measuring results. Specifically, the Navy’s reported delivery dates are not accurate indicators of ship completion because the delivery date for one ship can reflect a much different level of completion than for another ship. Even after ships are reported as delivered, it will still be several years before the ship is fully complete. No other ship completeness milestones—such as when the ship is provided to the fleet (OWLD) or is deemed ready to deploy—are included in the Selected Acquisition Reports to Congress. Recently, Congress has enacted legislation that may better align ship delivery dates with ship completion in these reports by establishing criteria that must be met in order for a ship to be deemed delivered, specifically a determination by the Secretary of the Navy that a vessel is assembled and complete and that custody of the vessel and all systems has been transferred to the Navy. Further, the Navy’s criteria for IOC—a milestone associated with ship progress—vary from ship class to ship class and its assessments of IOC do not comport with DOD’s guidance. In addition, the IOC milestones for most of the ship classes we reviewed do not reflect demonstrated capability or performance. 
According to Standards for Internal Control in the Federal Government, government managers should externally communicate the necessary quality information to achieve the entity’s objectives. Without using consistently defined measures in its reporting, such as for delivery or IOC, the Navy is not accurately conveying the completeness and quality of its ships to Congress. The Navy, in its Selected Acquisition Reports, typically reports delivery as the date that the lead ship in a class or flight is delivered from the shipyard to the Navy. However, the delivery milestone is not an accurate indicator of ship completeness. As discussed previously, ships vary in their level of completeness at delivery. In many cases, several years will pass between delivery and provision to the fleet, and even at that point, more time may be required before a ship is ready to deploy. Figure 6 shows, for the eight ships we reviewed, the length of time between the delivery of each ship, when each ship was provided to the fleet (at OWLD), and the ship’s first deployment after all planned construction work, tests, and trials were completed. For example, after CVN 78 is provided to the fleet, it will need to undergo shock trials, operational testing, and combat certifications, among other things, before it is ready for its first deployment. Recipients of the Selected Acquisition Reports would not have insight into this situation because the Navy reports the date of delivery but does not include additional important milestones about ship completeness, such as when ships are provided to the fleet at OWLD and when ships are ready for deployment. Without including this additional information, decision makers will not have a clear understanding of when ships are ready for fleet operations. Current Selected Acquisition Reports on the DDG 1000 and CVN 78 ship classes also illustrate the inconsistency in the Navy’s definition of “delivery.” As discussed earlier, the Navy will complete the construction of DDG 1000 in two phases. In the December 2015 Selected Acquisition Report, the Navy indicated lead ship delivery would be April 2016, and the ship was subsequently delivered in May 2016. Though the report noted that this delivery was focused on hull, mechanical, and electrical systems, it did not provide an additional indication of when all ship construction—including activation of the combat systems—is planned to be fully complete. It also did not note that the ship is not planned to deploy until fiscal year 2021, 5 years after the reported delivery date. DDG 1001 and 1002, which are later ships in the same class, are planning to use the same approach. The Selected Acquisition Report for the CVN 78-class was clearer about its key milestones. For CVN 79, the December 2015 report reflected the delay in deployment after delivery. The Navy reported the carrier’s delivery date as June 2022 in its schedule of events, but stated in the executive summary that the carrier will not be deployable until 2027, after it goes through a second phase of construction. Because policy makers and others rely on the Navy’s reports to understand ship progress and review reported ship schedules as an indicator of a potential breach of the agreed-to program baseline, it is important that the information be clearly and consistently communicated. 
Congress included a provision in the National Defense Authorization Act for Fiscal Year 2017 that may address this lack of clarity and consistency in reported delivery dates by establishing criteria that must be met in order for a ship to be deemed delivered. According to this legislation, the delivery of a ship shall occur when (1) the Secretary of the Navy determines the ship is assembled and complete and (2) custody of the ship and all of its systems are transferred to the Navy. The legislation further requires the Navy to review the planned delivery dates for ships under construction and adjust them if the planned dates did not reflect a level of construction completeness in line with the new criteria. In particular, the legislation directed the Navy to realign the delivery dates for ships with phased delivery strategies—CVN 79, DDG 1000, DDG 1001, and DDG 1002—so that delivery will occur when the Secretary of the Navy determines that each vessel is assembled and complete (that is, when all phases of construction are complete), rather than when the first phase is complete as was previously the case. Congress directed the Navy to certify adjusted delivery dates for all ships under construction to the congressional defense committees by January 1, 2017, and to include these revised dates in the next Selected Acquisition Reports and budget documents sent to Congress. In February 2017, the Navy adjusted the delivery dates for these four ships to coincide with the completion of significant construction events following preliminary acceptance, such as the activation of DDG 1000's combat systems. As noted above, however, Navy ships are not fully complete until at least OWLD—when a ship is provided to the fleet. The Navy's Selected Acquisition Reports to Congress also state when ship classes achieve IOC; however, the reports generally do not state the criteria the Navy used to make these capability determinations, and the criteria used are not consistent with DOD guidance. In January 2015, DOD updated its acquisition guidance to include a number of program models that DOD agencies and military services can use to structure programs for the purpose of attaining knowledge prior to committing to more purchases. In nearly all acquisition program models, DOD guidance states that IOC occurs toward the end of operational testing. Even the most aggressive model of delivering programs, the accelerated acquisition program—which by design accepts significant risk to add capability in a compressed time frame (such as during a time of war)—defines IOC as occurring simultaneously with operational testing, not before testing. DOD acquisition guidance and GAO best practices state that testing provides critical information to make informed production and other acquisition decisions. The Navy's criteria for declaring IOC differ across ship classes, and none of them require achieving favorable results from operational testing. Of the eight ships we reviewed, program offices had used two sets of criteria for IOC, both of which were schedule-driven rather than capability-driven milestones; that is, they did not take into account the successful completion of operational testing. Table 10 shows the ship classes for the eight ships we reviewed, how the programs defined IOC, and the status of operational testing at the time the Navy declared IOC for the class.
For several of the ships we reviewed, the Navy defined and declared IOC for the ship class without ever testing the operational capabilities of, or deploying, the lead ship. As a result, achieving IOC did not provide an indication that the ships could conduct operations as intended, which can provide a false sense of the ships' capabilities. For instance, after the Navy declared IOC for the LPD 17 class, the lead ship suffered a severe engineering casualty during its first deployment that limited its availability for several years. After this incident, the Navy's Commander of Operational Test and Evaluation reported that the LPD 17 class of ships was not operationally suitable and was operationally effective with the exception of the combat system. After nearly 3 years of follow-on tests and a considerable number of design changes to correct problems, the Navy's testers determined in December 2012 that the LPD 17 class was operationally suitable and operationally effective—4 years after the Navy originally declared IOC for the class, with 10 ships completed or under construction. Shipbuilding is a complex endeavor, and a certain number of deficiencies can be expected. However, all of the Navy ships we reviewed were, or likely will be, provided to the fleet with outstanding deficiencies, incomplete certifications, or open casualty reports, among other quality issues—resulting in additional costs that the fleet will have to bear. Moreover, the Navy has made liberal use of the various exceptions to its process for some of its most expensive and technologically sophisticated ships—namely, the CVN 78 and DDG 1000 classes—to allow these ships to be delivered in a substantially incomplete state, placing the fleet at even greater risk of absorbing excessive costs and having to face unknowns about ship quality. While Navy officials offered some reasons that ships are accepted in incomplete states, the ship delivery policy makes no reference to these reasons. The policy states that ships should be defect-free and mission-capable, but these objectives are not defined. Further, INSURV's only post-delivery trial is not well-timed to independently verify the completeness and quality of ships before they are provided to the fleet. As a result, key quality control measures in the Navy's ship delivery policy are not implemented, and uninspected systems and equipment are provided to the fleet with no verification of completeness and quality at this key milestone. The Navy's Selected Acquisition Reports to Congress do not clearly communicate its ships' progress and completion, which can inhibit oversight, particularly for measuring results. Simply reporting delivery dates does not signify a ship's completeness or readiness to deploy, as there is considerable variation in the level of completeness of ships at delivery, and it will still be several additional years before ships are ready to deploy. Recent legislation has established criteria for ship delivery dates that, depending on its implementation, may help improve the consistency and clarity of the Navy's reporting to Congress on this milestone. Similarly, IOC is reported but does not signify that ships have successfully demonstrated capability. Without consistent and meaningful capability and schedule milestones, decision makers may not be able to understand the progress toward ship completion or may be surprised to learn of complications after the ship appeared to be delivered or completed, which may require additional funding.
The Secretary of Defense should direct the Secretary of the Navy to take the following four actions:
1. Revise the Navy's ship delivery policy to clarify what types of deficiencies need to be corrected and what mission capability (including the levels of quality and capability) must be achieved at (1) delivery and (2) when the ship is provided to the fleet (at OWLD). In doing so, the Navy should clearly define what constitutes a complete ship and when that should be achieved.
2. Reconcile policy with practice to support INSURV's role in making a recommendation for fleet introduction. Accomplishing this may require a study of the current timing of ship trials, and the costs and benefits associated with adding an INSURV assessment prior to providing ships to the fleet.
3. Reflect additional ship milestones in Selected Acquisition Reports to Congress, including OWLD and readiness to deploy.
4. In Selected Acquisition Reports to Congress, ensure that the criteria used to declare IOC aligns with DOD guidance, and reflect the definition of this milestone in the reports.
We provided a draft of our report to DOD for review and comment. In its written comments, which are reprinted in appendix III of this report, DOD did not concur with two recommendations, partially concurred with a third recommendation, and fully concurred with a fourth recommendation. DOD provided technical comments that we incorporated as appropriate. With regard to our first recommendation, DOD disagreed with our focus on OPNAVINST 4700.8K as the primary criteria for assessing Navy ship quality and completeness when ships are provided to the fleet, stating that multiple instructions govern this process. This response was puzzling, as we reviewed relevant Navy policies and confirmed with acquisition officials within the Department of the Navy that OPNAVINST 4700.8K is the primary policy governing the quality standards for Navy ships at delivery. The statute and two other policies that DOD references in its response are not focused on construction and the post-delivery period and do not provide guidance on the level of quality and completeness expected when ships are provided to the fleet. Therefore, we focused on OPNAVINST 4700.8K because it is the only Navy instruction that attempts to set a quality standard for Navy ships rather than provide guidance on managing the inspection process. As such, we maintain that OPNAVINST 4700.8K should be clarified regarding the level of quality and completeness required of Navy ships at key points in the shipbuilding process. By not acknowledging the importance of OPNAVINST 4700.8K and establishing a clear and comprehensive quality standard, the Department of the Navy is missing an opportunity to improve the quality of its ships and risks continuing to provide ships to the fleet with significant quality problems. With regard to the second recommendation, DOD did not agree to study the current timing of ship trials or the costs and benefits of conducting an additional INSURV assessment prior to providing ships to the fleet. In particular, DOD stated that the current timing of Navy inspections is deliberate because it enables INSURV to inspect the ship and identify any additional deficiencies for correction during the post-shakedown availability.
However, while the timing of final contract trials facilitates the prioritization of post-delivery work, as our report points out, it is not optimally aligned to verify that the work completed during post-shakedown availability meets quality standards before a ship is provided to the fleet. For example, for the eight ships we reviewed, 90 percent of the 117 starred cards identified during acceptance trials were waived by the CNO prior to delivery and we found that many of these cards are corrected during the post-shakedown availability, which is after final contract trials—INSURV's final review before a ship is provided to the fleet. As a result, INSURV does not have an opportunity to verify that even the Navy's most significant issues have been corrected before ships are provided to the fleet at the time of OWLD. By refusing to even consider changes to the status quo, the Navy may be missing an opportunity to improve the quality of ships delivered to the operational fleet. With regard to the third recommendation, DOD agreed to report OWLD in its Selected Acquisition Reports to Congress but disagreed with reporting the ready-to-deploy date for its ships, noting that operational factors outside of acquisition concerns can affect the timing of this milestone. We acknowledge that ready-to-deploy decisions reside with fleet commanders and are independent of acquisition milestones. However, we maintain that this date is important for Congressional oversight because it remains the best milestone for determining when a ship has achieved a sufficient level of completeness to operate, under the Navy's current framework for ship delivery. DOD agreed with our fourth recommendation, stating that the criteria for IOC are defined in each ship class' Capability Development Document or Operational Requirements Document and that, for ships that have not achieved IOC, it will include that definition in the Selected Acquisition Reports. The response, however, did not indicate that DOD will ensure that the criteria used to declare IOC aligns with DOD guidance. We continue to believe that such an action would result in more meaningful and consistent information provided to Congress. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Navy, and other interested parties. This report will also be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at mackinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report assesses: (1) the extent to which the Navy provides complete, quality ships to the fleet that are free of government and contractor deficiencies; (2) the extent to which the Navy's policy governing ship delivery facilitates efforts to deliver complete and quality ships; and (3) the extent to which Navy reports to Congress on the progress of shipbuilding programs consistently define key milestones such as ship delivery and initial operational capability. To gain an understanding of the condition in which shipbuilding programs deliver newly constructed ships to the fleet after accepting these ships from construction shipyards, we reviewed eight case studies.
To select case studies for this review, we identified Navy ships that were either delivered within the last 5 years or are likely to be delivered within the next year, and that were constructed by a variety of shipyards. We also avoided using multiple ships from the same class or variant, and selected a mix of early- and late-in-class ships. These parameters resulted in reviewing the following ships as a non-generalizable sample: DDG 112, SSN 782, LPD 25, LCS 3, LCS 4, LHA 6, DDG 1000, and CVN 78. Six of these ships (DDG 112, SSN 782, LPD 25, LCS 3, LCS 4, and LHA 6) had finished their post-delivery periods at the time of our review, while CVN 78 and DDG 1000 had not. For the purposes of this review, the delivery date marks the beginning of the post-delivery period and the obligation work limiting date (OWLD) is the end of the post-delivery period. Table 11 provides additional information on the eight ships selected as case studies for this review. To assess the extent to which the Navy provides quality, complete ships to the fleet, free of government and contractor deficiencies, we reviewed Navy documentation related to the delivery and subsequent post-delivery period for selected new construction ships. For each case study, we reviewed such documentation as Chief of Naval Operations waivers for delivery, readiness briefings for Navy Board of Inspection and Survey (INSURV) trials, trial cards and reports, the form DD-250 Material Inspection and Receiving Report, operational assessments, and the Transfer Book, among others. Through our review of this documentation, we assessed what construction work was incomplete or deficient when each case study ship was delivered to the Navy from the shipbuilder; the availabilities, tests, and trials each ship completed during the post-delivery period; and the condition of each ship when it was provided to the fleet following the post-delivery period. In particular, for the selected ships that have already completed the post-delivery period, we assessed the number and type of INSURV-identified deficiencies at the time of ship delivery and tracked these through the post-delivery period to determine whether they were passed to the fleet. Additionally, we identified which shipboard system certifications were required for these ships and evaluated Navy documentation and supplementary program office information to determine when these certifications were completed. We also reviewed Navy casualty report data at the time ships were passed to the fleet. Senior fleet personnel told us that the first 3 months after a ship is passed to the fleet are indicative of the condition in which the ship was passed to the fleet, as crewmembers gain an understanding of and operate its systems. Thus, we aggregated the open category 2 and 3 casualty reports during the 3 months following OWLD to understand the status of the ship at this time. For CVN 78 and DDG 1000, the two ships that have not yet completed the post-delivery period, we reviewed the Navy's post-delivery plans for these two ships, including proposed schedules and plans to complete deferred construction.
To gain additional understanding of how and why the Navy decides to accept delivery from the shipbuilder and provide to the fleet ships that are not free of deficiencies, we interviewed officials from several Navy entities, including the shipbuilding program office for each case study ship, the Supervisor of Shipbuilding, Conversion, and Repair (SUPSHIP), INSURV, Naval Air Systems Command, Space and Naval Warfare Systems Command, and representatives from the fleet, among others. The fleet officials we met with were senior leaders of the Navy commands responsible for operating and maintaining these vessels, as well as port engineers, senior crew members (such as the commanding officer and chief engineer), and other individuals with management and technical responsibilities for maintaining the ships. We generally reported statements that were widely agreed upon. To evaluate the extent to which the Navy's policy governing ship delivery facilitates efforts to deliver complete and quality ships, we reviewed the Navy's policy covering trials, delivery, and post-delivery activities (referred to as the Navy's ship delivery policy)—Office of the Chief of Naval Operations Instruction (OPNAVINST) 4700.8K—and identified the key terms, roles, responsibilities, and processes associated with post-delivery. Through our review of Navy shipbuilding and quality assurance guidance—such as OPNAVINST 4730.5R (Trials and Material Inspections of Ships Conducted by the Board of Inspection and Survey)—and through interviews with acquisition officials, we determined that OPNAVINST 4700.8K was the primary policy governing ship quality and completeness, and the Navy's program offices verified this conclusion. We further examined this policy to determine the objectives, processes, and definitions of key terms that were relevant to the scope of our engagement, and assessed these elements of the policy for both internal consistency and consistency with other Navy and DOD guidance. We conducted interviews with the Office of the Chief of Naval Operations, Navy program officials, Naval Sea Systems Command directorates, SUPSHIP, INSURV, Navy general counsel, fleet maintenance officials, and other entities to determine how organizations across the Navy interpret the Navy's ship delivery policy. We also reviewed Standards for Internal Control in the Federal Government and determined which standards were relevant to the Navy's post-delivery process. In reviewing the Navy's quality practices, we focused on INSURV's and SUPSHIP's respective roles in ensuring quality ships are built. We assessed INSURV and SUPSHIP reports, talked to inspectors, and read the guidance governing these organizations to determine how they improve ship quality. In addition, we evaluated the results against the Standards for Internal Control in the Federal Government to assess the extent to which the Navy controls ship quality as an outcome or objective. To determine the extent to which reports to Congress on the progress of shipbuilding programs consistently define key milestones, we obtained and reviewed the Selected Acquisition Reports and budget justification documents for the ship classes of each of the eight ships we reviewed going back at least two fiscal years.
We reviewed the milestones and dates reported in these documents, such as delivery and initial operational capability, and used our other analyses of the completeness and performance of ships to determine the condition and capability of the selected ships at the relevant milestone dates. We obtained and reviewed the high-level requirements documents for the ships in our review, as well as Navy and DOD policies and guidance that define and describe key milestones, to determine (1) whether the Navy reported these milestones in accordance with relevant guidance and definitions, and (2) whether the Navy's guidance and definitions were consistent with DOD guidance and meaningful to congressional overseers. We supplemented these analyses with interviews and other data from Navy program offices, where needed. We conducted this performance audit from February 2016 to July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Navy accepted delivery of CVN 78 with a significant amount of outstanding construction, tests, and trials. According to the Navy's plans, this incomplete work will be completed over the course of more than 4 years and is expected to cost nearly $780 million. As is typical for most shipbuilding programs, the program office requested post-delivery and outfitting funding for CVN 78, totaling $216 million; however, the program office's total planned cost for CVN 78's post-delivery activities also includes funding to complete deferred work (end cost), prepare training materials (other procurement and operations and maintenance), and execute an extended testing phase (research, development, test, and evaluation)—for a total of at least $779 million. Table 12 shows the Navy's planned cost for CVN 78 post-delivery activities. In addition to the contact named above, the following staff members made key contributions to this report: Diana Moldafsky, Assistant Director; Laurier Fish; Laura Greifner; Samuel Harris; Kristine Hassinger; Chad Johnson; Jillian Schofield; and Robin Wilson.
Navy Shipbuilding: Need to Document Rationale for the Use of Fixed-Price Incentive Contracts and Study Effectiveness of Added Incentives. GAO-17-211. Washington, D.C.: March 1, 2017.
Littoral Combat Ship and Frigate: Slowing Planned Frigate Acquisition Would Enable Better-Informed Decisions. GAO-17-279T. Washington, D.C.: December 8, 2016.
Littoral Combat Ship and Frigate: Congress Faced with Critical Acquisition Decisions. GAO-17-262T. Washington, D.C.: December 1, 2016.
Navy Ship Maintenance: Action Needed to Maximize New Contracting Strategy's Potential Benefits. GAO-17-54. Washington, D.C.: November 21, 2016.
Military Readiness: Progress and Challenges in Implementing the Navy's Optimized Fleet Response Plan. GAO-16-466R. Washington, D.C.: May 2, 2016.
Navy and Coast Guard Shipbuilding: Navy Should Reconsider Approach to Warranties for Correcting Construction Defects. GAO-16-71. Washington, D.C.: March 3, 2016.
DOD Operational Testing: Oversight Has Resulted in Few Significant Disputes and Limited Program Cost and Schedule Increases. GAO-15-503. Washington, D.C.: June 2, 2015.
Ford-Class Carrier: Congress Should Consider Revising Cost Cap Legislation to Include All Construction Costs. GAO-15-22. Washington, D.C.: November 20, 2014.
Weapon Systems Management: DOD Has Taken Steps to Implement Product Support Managers but Needs to Evaluate Their Effects. GAO-14-326. Washington, D.C.: April 29, 2014.
Littoral Combat Ship: Navy Complied with Regulations in Accepting Two Lead Ships, but Quality Problems Persisted after Delivery. GAO-14-827. Washington, D.C.: September 25, 2014.
Navy Shipbuilding: Opportunities Exist to Improve Practices Affecting Quality. GAO-14-122. Washington, D.C.: November 19, 2013.
Best Practices: High Levels of Knowledge at Key Points Differentiate Commercial Shipbuilding from Navy Shipbuilding. GAO-09-322. Washington, D.C.: May 13, 2009.
Defense Acquisitions: DOD Has Paid Billions in Award and Incentive Fees Regardless of Acquisition Outcomes. GAO-06-66. Washington, D.C.: December 19, 2005.
The U.S. Navy spends at least $18 billion per year on shipbuilding—a portion of which is spent after ships are delivered. During the post-delivery period—after delivery from the shipbuilder and before the ships enter the fleet—Navy ships undergo a variety of tests, trials, and construction. GAO was asked to assess the post-delivery period, including quality and completeness of ships when they are delivered to the fleet. The Senate Report on the National Defense Authorization Act for Fiscal Year 2017 included additional questions about ship status after delivery. This report assesses the extent to which the Navy (1) provides complete and quality ships to the fleet, (2) has a ship delivery policy that supports those efforts, and (3) reports ship quality and completeness to Congress. GAO reviewed a nongeneralizable sample of eight Navy ships, six of which have entered the fleet and two that recently began the post-delivery period. GAO reviewed program documentation and interviewed Navy officials. GAO reviewed six ships valued at $6.3 billion that had completed the post-delivery period, and found they were provided to the fleet with varying degrees of incomplete work and quality problems. GAO used three quality assurance metrics, identified by Navy program offices, to evaluate the completeness of the six ships—LPD 25, LHA 6, DDG 112, Littoral Combat Ships (LCS) 3 and 4, and SSN 782—at delivery and also at the time each ship was provided to the fleet. Although the Navy resolved many of the defects by the end of the post-delivery period, as the table below shows, quality problems persisted and work was incomplete when the selected ships were turned over to the operational fleet. Fleet officials reported varying levels of concern with the overall quality and completeness of the ships, such as with unreliable equipment or a need for more intense maintenance than expected. For CVN 78 and DDG 1000, the Navy plans to complete significantly more work and testing during the post-delivery period than the other six ships GAO reviewed. As such, these ships are at a greater risk of being provided to the fleet at the end of their post-delivery periods with incomplete construction work and unknowns about quality. The Navy's ship delivery policy does not facilitate a process that provides complete and quality ships to the fleet and practices do not comport with policy. The policy emphasizes that ships should be defect-free and mission-capable, but lacks clarity regarding what defects should be corrected and by when. Without a clear policy, Navy program offices define their own standards of quality and completeness, which are not always consistent. Further, because the Navy's Board of Inspection and Survey (INSURV) does not inspect ships at the end of the post-delivery period, it is not in a position to verify each ship's readiness for the fleet, as required by Navy policy. The Navy has not assessed the costs and benefits of ensuring INSURV does this. Addressing these policy concerns would improve the likelihood of identifying and correcting deficiencies before fleet introduction and increase consistency in how the Navy defines quality. The Navy does not use consistent definitions for key milestones in its reports to Congress—such as delivery or Initial Operational Capability (IOC)—and, therefore, these milestones are not as informative as they could be regarding ship quality and completeness. 
For example, the Navy has routinely declared IOC on new ship classes without having demonstrated that ships are able to perform mission operations—contrary to Department of Defense (DOD) guidance, which, for nearly all acquisition models, generally states that IOC should be declared only after successful operational testing that demonstrates performance. The Navy should revise its ship delivery policy to identify what kinds of defects should be corrected and by when, and should study how best to ensure that INSURV verifies ships before they are provided to the fleet. Also, the Navy should reflect in its reports to Congress key milestones and consistent definitions in line with DOD policy. DOD did not concur with two recommendations, partially concurred with a third, and fully agreed with a fourth. GAO stands by its recommendations, which will help ensure that complete and quality ships are provided to the fleet and that Congress is provided with meaningful information on ship status.
The CNMI comprises a group of 14 islands in the western Pacific Ocean, lying just north of Guam and 5,500 miles from the U.S. mainland. Most of the CNMI population—58,629 in 2007—resides on the island of Saipan, with additional residents on the islands of Tinian and Rota. After World War II, the U.S. Congress approved the Trusteeship Agreement that made the United States responsible to the United Nations for the administration of the islands. Later, the Northern Mariana Islands sought self-government while maintaining permanent ties to the United States. In 1976, after almost 30 years as a trust territory, the District of the Mariana Islands entered into a Covenant with the United States establishing the island territory's status as a self-governing commonwealth in political union with the United States. The Covenant grants the CNMI the right of self-governance over internal affairs and grants the United States complete responsibility and authority for matters relating to foreign affairs and defense affecting the CNMI. The Covenant initially made many federal laws applicable to the CNMI, including laws that provide federal services and financial assistance programs. The Covenant preserved the CNMI's exemption from certain federal laws that had previously been inapplicable to the Trust Territory of the Pacific Islands, including federal immigration laws and certain federal minimum wage provisions. However, under the terms of the Covenant, the U.S. government has the right to apply federal law in these exempted areas without the consent of the CNMI government. The U.S. government enacted the recent federal immigration legislation under this authority. Three DHS components—CBP, ICE, and USCIS—have responsibility for federal immigration and border control. Customs and Border Protection. CBP is the lead federal agency charged with keeping terrorists, criminals, and inadmissible aliens out of the country while facilitating the flow of legitimate travel and commerce at the nation's borders. Prior to international passengers' arrival in the United States, CBP officers are required to cross-check passenger information, which air and sea carriers submit electronically prior to departures from foreign ports, against law enforcement databases. On arrival, the passengers are subject to immigration inspections of visas, passports, and biometric data. Generally, international passengers must present a U.S. passport, permanent resident card, foreign passport, or foreign passport containing a State-issued visa. Federal regulations require that international airports provide facilities for the inspection of aliens and provide office and other space for the sole use of federal officials working at the airport. Immigration and Customs Enforcement. ICE is responsible for enforcing immigration laws within the United States, including, but not limited to, identifying, apprehending, detaining, and removing aliens who commit crimes and aliens who are unlawfully present in the United States. ICE's Office of Investigations investigates offenses, both criminal and administrative, such as human trafficking, human rights violations, human smuggling, narcotics, weapons, and other types of smuggling, and financial crimes. ICE's Office of Detention and Removal Operations is the primary enforcement arm within ICE for the identification, apprehension, and removal of aliens unlawfully in the United States.
The Office of Detention and Removal's priority is to detain aliens who pose a risk to the community and those who may abscond and not appear for their immigration hearing. Consequently, the office uses detention space to hold certain aliens while processing them for removal or until their scheduled hearing date. ICE acquires detention space by negotiating intergovernmental service agreements with state and local detention facilities, using federal facilities, and contracting with private service contracting facilities. U.S. Citizenship and Immigration Services. USCIS processes applications for immigration benefits—that is, the ability of aliens to live, and in some cases to work, in the United States permanently or temporarily or to apply for citizenship. Most applications for immigration benefits can be classified into three major categories: family-based, employment-based, and humanitarian-based. Family-based applications are filed by U.S. citizens or permanent resident aliens to establish their relationships to certain alien relatives, such as a spouse, parent, or minor child, who wish to immigrate to the United States. Employment-based applications include petitions filed by employers for aliens to enter the United States temporarily as nonimmigrant workers for temporary work or training or as immigrants for permanent work. USCIS reviews petitions for certain nonimmigrant workers against criteria such as whether the petition is accompanied by a certified determination from DOL, whether the employer is eligible to employ a nonimmigrant worker, whether the position is a specialty occupation, and whether the prospective nonimmigrant worker is qualified for the position. Humanitarian-based applications include applications for asylum or refugee status filed by aliens who fear persecution in their home countries. USCIS also processes applications for Temporary Protected Status by aliens affected by natural disasters or other temporary emergency conditions, applications for employment authorization, and applications for adjustment of status to lawful permanent residence by alien beneficiaries of family- or employment-based immigrant petitions who are lawfully present in the United States. In addition, the Secretary of Homeland Security has delegated to all DHS components certain immigration authorities, such as authority to grant parole—that is, official permission for an otherwise inadmissible alien to be physically present in the United States temporarily. For example, CBP can grant visitors entry into the United States under the Secretary's parole authority, and USCIS can issue advance parole to aliens in the United States who need to travel abroad and return and whose conditions of stay do not otherwise allow for readmission if they depart. DHS also operates the U.S. Visa Waiver Program. Under this program, foreign nationals from 36 countries may qualify for temporary entry to the United States with a valid passport from their own country. DOL responsibilities under its labor certification programs include ensuring that U.S. workers are not adversely affected by the hiring of nonimmigrant and immigrant workers. Certain employers must attest to taking certain steps, depending on the particular labor certification program, such as notifying all employees of the intention to hire foreign workers and offering their foreign workers the same benefits as U.S. workers.
For most labor certification programs, DOL certifies eligible foreign workers to work in the United States on a permanent or temporary basis if it determines that qualified U.S. workers are not available to perform the work and that the employment of the foreign worker will not adversely affect the wages and working conditions of U.S. workers similarly employed. State has responsibility for issuing visas to foreign nationals who wish to come to the United States on a temporary or permanent basis. State's process for determining who will be issued or refused a visa comprises several steps, including documentation reviews, in-person interviews, collection of biometrics, and cross-referencing an applicant's name against a database that U.S. embassies or consulates (posts) use to access critical information for visa adjudication. Each stage of the visa process varies in length depending on a post's applicant pool and the number of visa applications that a post receives. Figure 1 shows the responsibilities of the DHS components and of DOL and State related to U.S. immigration and border control. As the figure indicates, State issues visas that allow aliens to apply for admission at the border; ICE is responsible for the enforcement of immigration law within the interior of the United States, including the identification, apprehension, detention, and removal of criminal aliens; USCIS processes aliens' applications for immigration benefits (the ability to live, and in some cases work, in the United States permanently or temporarily); and, with certain exceptions, DOL requires employers to fully test the labor market for U.S. workers and to ensure that U.S. workers are not adversely affected by the hiring of nonimmigrant and immigrant workers. Under U.S. immigration law, noncitizens may apply for U.S. entry visas either as nonimmigrants or as immigrants intending to reside permanently. The nonimmigrant categories for temporary admission include workers who meet certain requirements, visitors for business or pleasure, and treaty investors, among others. The immigrant categories include permanent immigrant investors, family-based, and various employment-based categories for admission to the United States as lawful permanent residents permitted to work in the United States. Following are descriptions of the nonimmigrant categories for temporary admission. Foreign workers. U.S. immigration law provides for several types of visas for nonimmigrant workers and their families—H visas and certain others—and sets caps for two types of H visas, H-1B and H-2B. In addition to providing for nonimmigrant visas, federal law provides for permanent employer-sponsored immigrant visas for individuals seeking to reside permanently in the United States. Visitors. Under federal law, visitors may come to the United States for business on a B-1 visa, for pleasure on a B-2 visa, or for business or pleasure on a combined B-1/B-2 visa. Visitors with B visas are normally admitted for a minimum of 6 months and a maximum of 1 year. Eligible nationals of the 36 countries included in the general U.S. Visa Waiver Program may stay for up to 90 days for business or pleasure in the United States without obtaining a nonimmigrant visa. Foreign investors. Federal law allows foreign investors to enter the United States as nonimmigrants under treaty investor status with an E-2 visa.
Treaty investors must invest a substantial amount of capital in a bona fide enterprise in the United States, must be seeking entry solely to develop and direct the enterprise, and must intend to depart the United States when their treaty investor status ends. Treaty investors must be nationals of a country with which the United States has a treaty of friendship, commerce, or navigation and must be entering the United States pursuant to the provisions of the treaty. Federal law also allows foreign investors to seek permanent immigrant visas (EB-5) for employment-creation purposes. CNRA applied federal immigration laws to the CNMI beginning on November 28, 2009, subject to a transition period that ends on December 31, 2014, and with key provisions affecting foreign workers, visitors, and foreign investors. CNRA includes several provisions that affect foreign workers and investors during the transition period but that may be extended indefinitely for foreign workers. During the transition period, the U.S. Secretary of Homeland Security, in consultation with the U.S. Secretaries of the Interior, Labor, and State and the U.S. Attorney General, has the responsibility to establish, administer, and enforce a transition program to regulate immigration in the CNMI. Agencies must implement agreements with the other agencies to identify and assign their respective duties for timely implementation of the transition program. The agreements must address procedures to ensure that CNMI employers have access to adequate labor and that tourists, students, retirees, and other visitors have access to the CNMI without unnecessary obstacles. In addition, CNRA requires, among other things, that the CNMI government provide the Secretary of Homeland Security all CNMI immigration records, or other information that the Secretary deems necessary to help implement the transition program. Following are descriptions of key CNRA provisions related to foreign workers, visitors, and foreign investors. Foreign workers. CNRA allows federal agencies to preserve access to foreign workers in the CNMI during the transition period, as well as any extensions of the CNMI-only permit program, but limits subsequent access to foreign workers to those generally available under U.S. immigration law. Key provisions regarding foreign workers in the CNMI include the following: During the transition period, existing CNMI-government-approved foreign workers lacking U.S. immigration status can continue to live and work in the CNMI for a limited time—2 years after the effective date of the transition program or when the CNMI-issued permit expires, whichever is earlier. However, CNMI employers hiring workers on or after the transition effective date must comply with U.S. employment authorization verification procedures. During the transition period and any extensions of the CNMI-only permit program, employers of workers not otherwise eligible for admission under federal law can apply for temporary CNMI-only nonimmigrant work permits. During this period, the Secretary of Homeland Security has the authority to determine the number, terms, and conditions of these permits, which must be reduced to zero by the end of the transition period and any extensions of the CNMI-only work permit program. This program may be extended indefinitely beyond December 31, 2014, by the U.S. Secretary of Labor for up to 5 years at a time. 
During the transition period, employers in the CNMI and Guam can petition for foreign workers under the federal nonimmigrant H visa process, without limitation by the established numerical caps, for two types of H visas. This exemption from the visa caps expires when the transition period ends in 2014. During and after the transition period, CNMI employers can petition for nonimmigrant worker visas generally available under U.S. law. During and after the transition period, CNMI employers can also petition for employment-based permanent immigration status for workers under the same procedures as other U.S. employers. Visitors. CNRA amends U.S. immigration law to replace the existing Guam visa waiver program with a joint Guam-CNMI program, in addition to other changes. Under the Guam-CNMI visa waiver program, eligible visitors from designated countries who travel for business or pleasure to Guam or the CNMI are exempt from the standard federal visa documentation requirements. The Secretary of Homeland Security is to determine which countries and geographic areas will be included in the Guam-CNMI visa waiver program. Citizens of countries that do not qualify for entry under the Guam-CNMI visa waiver program or other U.S. visa waiver programs may apply for U.S. visitor visas valid for entry to any part of the United States, including Guam and the CNMI. Foreign investors. CNRA establishes that foreign investors in the CNMI who meet certain requirements can convert from a CNMI long-term investor to U.S. CNMI-only nonimmigrant treaty investor status during the transition period. New foreign investors can apply for U.S. nonimmigrant treaty investor status and also can petition for U.S. permanent immigration status, which was previously unavailable in the CNMI. The Secretary of Homeland Security is to decide which CNMI foreign investor permit holders will receive status as U.S. nonimmigrant treaty investors during the transition period. Figure 2 shows key federal immigration provisions related to foreign workers, visitors, and foreign investors. CNRA does not allow aliens present in the CNMI to apply for asylum until 2015. In the interim, an alien present in the CNMI can request not to be removed based on a claim of protection from persecution or torture. Since enactment of CNRA in 2008, the CNMI has taken several actions related to the implementation of federal immigration law. On September 12, 2008, the CNMI filed a lawsuit against the United States in the U.S. District Court for the District of Columbia to have specific provisions of Title VII of CNRA overturned on the grounds that it constituted unnecessary intrusion into the CNMI's local affairs, violating the terms of the CNMI Covenant and the U.S. Constitution. The CNMI argued that provisions of CNRA violated the CNMI's right of local self-government guaranteed by the Covenant, denying it the right to regulate its local labor force and economy as well as depriving it of revenue, all without its consent. The CNMI argued that the Constitution limits the power of Congress to impose a regulatory regimen upon a state without giving the local government the opportunity to participate in the political process that resulted in the legislation. The United States argued, in part, that the CNMI lacked standing to pursue its claims.
The federal government further argued that even if the CNMI had standing, the commonwealth had failed to state a claim upon which relief could be granted, because the legislation applying immigration law to the CNMI was lawful. The U.S. District Court for the District of Columbia has issued several rulings in the lawsuit. On November 25, 2009, the court agreed with the United States that the provisions of CNRA extending U.S. immigration laws to the CNMI beginning on November 28, 2009, do not violate the U.S.-CNMI Covenant or the U.S. Constitution. The court dismissed the two counts of the CNMI's complaint alleging these violations. The court granted a CNMI motion for a preliminary injunction prohibiting the implementation of DHS regulations for the transitional worker program. On September 15, 2009, the CNMI government issued "The Commonwealth's Protocol for Implementing P.L. 110-229," covering the use of CNMI facilities for U.S. immigration purposes and U.S.-CNMI data exchange, among other topics. CNMI facilities. The protocol outlines the approach that the CNMI will take regarding certain aspects of the transition program, including those pertaining to facilities. Specifically, regarding airport facilities, the protocol describes an intent to work with CBP, taking account of the Commonwealth Port Authority's practical and financial limitations. The protocol explains that the CNMI was prepared to vacate its existing immigration space at the Saipan, Tinian, and Rota airports but does not intend to remove any existing lessee currently occupying space at the airport to accommodate CBP. The CNMI intends to provide facility space on terms to be negotiated. Regarding detention space in its prison, the CNMI noted that it was discussing this issue with ICE. Data exchange. The CNMI protocol proposes to allow the U.S. government access to immigration-related data. The CNMI has used two databases, the Labor Information Data System (LIDS) and the Border Management System (BMS), respectively, to record the permit status of certain aliens and to record the arrivals and departures of travelers. Specifically, the CNMI protocol envisions the following: DHS and the CNMI will engage in a two-way data exchange, with DHS providing flight entry data and the CNMI providing information from its immigration records (LIDS and BMS). The CNMI will provide access to CNMI immigration records that DHS formally requests via an appropriate document and within a reasonable time frame. The CNMI will consider privacy protections in making information available to the U.S. government. The CNMI expects to recover the cost of generating and producing any information requested by DHS. The CNMI issued temporary permits authorizing the holders to remain in the commonwealth after the federalization transition date, November 28, 2009, for a maximum of 2 years consistent with the terms of the permit. These "umbrella" permits also include provisions for extending, transferring, and seeking employment. Between October 15 and November 27, 2009, the CNMI Department of Labor, Department of Commerce, and Attorney General's office identified all aliens eligible to receive umbrella permits, which they issued if an alien appeared personally with adequate identification and signed the contractual agreement contained in the umbrella permit. Permits were issued to workers, students, and investors as well as to their immediate relatives.
Since the injunction against DHS’s regulations for the transitional worker program, a disagreement has arisen between the U.S. and CNMI governments regarding employment authorization for aliens who were authorized to be present by the CNMI government as of November 28, 2009, and were issued an umbrella permit. The U.S. government considers the employment authorization of aliens to be a matter of federal law, while the CNMI government maintains that it is a shared responsibility. As a result of the disagreement, the federal government and CNMI government have issued conflicting guidance. For example, according to USCIS, an employer in the commonwealth does not need the approval of the CNMI Department of Labor to hire a holder of a CNMI foreign worker permit (Foreign National Worker Permit). In contrast, the CNMI government maintains that the approval of the local Department of Labor is required. DHS components CBP, ICE, and USCIS have each taken steps to secure the border in the CNMI in accordance with CNRA. In addition, DHS has taken several steps to facilitate the implementation of CNRA. However, lack of resolution of the components’ negotiations with the CNMI government contributes to operational challenges. CBP operational space at the CNMI airports does not meet its facility standards for ports of entry, and DHS and the CNMI government have not executed long-term occupancy agreements that would allow DHS to upgrade the airport facilities. ICE efforts to acquire detention space at the CNMI local correctional facility also have been unsuccessful. As a result, as of March 2010, ICE has transferred only 3 of 30 aliens with prior criminal records to correctional facilities in Guam or Honolulu and released the other 27 on their own recognizance. Additionally, DHS has not succeeded in negotiating with the CNMI for direct access to CNMI immigration data, making it difficult for U.S. officials to verify the status of aliens in the CNMI and hampering enforcement operations. Prior to beginning inspection of arriving travelers in the CNMI, CBP officials made numerous visits to the CNMI to determine resource requirements and prepare for implementation of federal border control. In June 2009, CBP officially notified the CNMI Port Authority of its border control facility space, configurational, infrastructure, and physical security requirements. In response, the CNMI Port Authority sent a letter stating that it was unable to meet CBP requirements owing to limited financial resources and expertise and asking CBP to initiate efforts to meet the facility requirements. According to CBP, it subsequently began preparations to reconfigure the facilities. CBP officials told us that the Commonwealth Port Authority gave information technology staff access to the Saipan and Rota airports to install secure wireless networks on November 23, 2009, pursuant to CBP’s signing of right-of-entry agreements for the Saipan and Rota airports on that date. According to CBP, these agreements allowed it to prepare to begin operations in the airports by November 28, 2009, while the agency sought to negotiate permanent occupancy agreements. On November 28, 2009, 45 CBP officers moved into space previously occupied by the CNMI Department of Immigration at the Saipan airport and space previously occupied by the airport police at the Rota airport and began inspecting travelers’ immigration status on entry into, and in some cases on exit from, the CNMI. 
In January 2010, we observed CBP officers at the Saipan airport following procedures consistent with those required at other U.S. international airports. For example, we watched CBP officers screen arriving visitors in the immigration inspection area. According to the CBP officials in Guam and the CNMI, prior to visitors' arrival in the CNMI, CBP officers screen 100 percent of the names that airlines submit electronically through a passenger information system, which the officers access through a database known as TECS. At immigration booths, we observed CBP officers verifying arriving passengers' admissibility by scanning passports, reviewing other travel documents, and asking questions about the traveler's intent. We also observed CBP officers taking photos and fingerprints and enrolling travelers in an immigration database known as US-VISIT. We further observed CBP officers escorting some travelers to a temporary secondary screening area, where officers asked additional questions to determine travelers' admissibility and subsequently admitted or denied travelers entry into the CNMI. In addition, we observed CBP officers interviewing Chinese and Russian visitors in the primary screening area who were eligible for, and granted parole into, only the CNMI under the Secretary's parole authority. Because China and Russia are not currently included in the U.S. or Guam-CNMI visa waiver programs, CBP inspectors complete several more administrative steps to parole Chinese and Russian visitors into the CNMI than are required to admit visitors from eligible countries. According to the CBP shift supervisor, while a typical primary interview may take 2 to 3 minutes, an interview for parole may take 5 to 6 minutes. From November 28, 2009, to March 1, 2010, CBP officers working at the Saipan and Rota airports processed 103,565 arriving travelers, granting parole to 11,760 (11 percent). Table 1 summarizes the number of arrivals processed by CBP officers at the Saipan and Rota airports from November 28, 2009, to March 1, 2010, including those admitted from primary and secondary screening areas, those granted parole, and those refused entry from the secondary screening area. During this period, more than 80 percent of the arriving travelers came from Japan or South Korea (see fig. 3). Of the arriving travelers from China and Russia, 86 percent (10,398 of the 12,131) and 90 percent (1,027 of the 1,146), respectively, were paroled into the CNMI only, under DHS authority (the arithmetic behind these reported shares is sketched at the end of this discussion). On March 28, 2010, CBP replaced the first group of officers temporarily assigned to the Saipan and Rota airports with a new group, according to CBP officials. On the basis of current flight schedules and estimated number of travelers, CBP has reduced from 45 to 30 the number of full-time officers required in Saipan and Rota. CBP posted announcements for entry-level and supervisory officer positions in the CNMI in November 2008 and April 2009 and received approximately 500 job applications from the CNMI community. Consistent with provisions of CNRA that require DHS, among other agencies, to recruit and hire staff for its operations from among qualified U.S. citizens and nationals residing in the CNMI, CBP hired seven local CNMI citizens, including two who had previously worked for the CNMI Department of Immigration, and three residents of Guam. According to CBP's Human Capital Office, permanent staff will start working at CNMI airports in July 2010.
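The parole shares in the discussion above follow directly from the counts CBP reported. The short Python sketch below simply recomputes them from the figures cited in the text; it is illustrative only and is not part of CBP's or GAO's data systems.

# Illustrative recomputation of the parole shares reported above, using the
# CBP counts for November 28, 2009, through March 1, 2010, cited in the text.
arrivals_total, paroled_total = 103565, 11760
china_arrivals, china_paroled = 12131, 10398
russia_arrivals, russia_paroled = 1146, 1027

def share(part, whole):
    # Percentage rounded to the nearest whole number, as reported in the text.
    return round(100 * part / whole)

print(share(paroled_total, arrivals_total))    # 11 -> 11 percent of all arrivals paroled
print(share(china_paroled, china_arrivals))    # 86 -> 86 percent of arrivals from China
print(share(russia_paroled, russia_arrivals))  # 90 -> 90 percent of arrivals from Russia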
Since November 28, 2009, 10 ICE officials detailed to Saipan have provided outreach to the CNMI community, assessed local security risks, identified aliens in violation of U.S. immigration laws, and processed or detained aliens for removal proceedings. During the first month of operations in the CNMI, ICE officials met with local law enforcement officials and provided information at local events to educate the community on ICE’s law enforcement role and responsibilities. ICE officials also established a point of contact in the CNMI Department of Labor and met with staff in the CNMI Attorney General’s office. To protect national security, public safety, and the integrity of the U.S. border in the CNMI, ICE assessed potential security risks that may lead to future criminal and civil enforcement in the commonwealth. First, ICE officials predicted that as CNMI labor permits expire, aliens ineligible for immigration benefits may file fraudulent immigration benefit applications. Second, ICE officials anticipate an increase in alien smuggling to Guam as aliens ineligible for immigration benefits try to reach Guam to apply for asylum. On January 5, 2010, ICE and the U.S. Coast Guard interdicted 24 Chinese nationals attempting to enter Guam illegally by boat. ICE has also identified individuals who may be in violation of U.S. immigration laws and has begun processing some aliens for removal. From December 7, 2009, to March 1, 2010, ICE identified 264 aliens subject to possible removal from the CNMI—including 214 referrals from the CNMI Attorney General’s office with pending CNMI deportation orders and 49 referrals from the ICE Office of Investigations and the community—and requested immigration status information about these individuals from the CNMI Department of Labor. As of March 1, 2010, ICE officials had processed 72 of the 264 aliens for removal proceedings, either for being present in the United States without inspection or parole or for not possessing a required valid entry document. Of these 72 aliens, 56 were convicted criminals under CNMI or U.S. law, including 30 who had completed their sentences at the local correctional facility and had been released into the community under CNMI authority. ICE also had transferred 3 of these 30 aliens convicted of crimes under CNMI or U.S. law to correctional facilities in either Guam or Honolulu and had released the other 27 on their own recognizance. On March 9, 2010, ICE officials told us that they had not deported any of the 72 aliens being processed for removal but that 31 were scheduled for immigration hearings by the end of March 2010 and 9 had agreed to waive their right to a hearing and to be deported after completing their criminal sentences. According to ICE officials, immigration hearings take place during 1 week of every month, when a judge from the U.S. Department of Justice Executive Office of Immigration Review travels to Saipan. Prior to November 28, 2009, USCIS representatives visited the CNMI to establish contacts, prepare plans for outreach to the community on forthcoming federal regulations and the transition to federal control of immigration in the CNMI and identify issues to resolve subsequent to the transition. Key USCIS activities included the following. 
In March 2009, USCIS opened an Application Support Center in Saipan and stationed two full-time employees at the center to provide information services, interview residents currently eligible to apply for lawful permanent resident status or citizenship, and process requests requiring biometric services. The center is also staffed by three contract employees who provide biometric collection services. In early December 2009, USCIS officials met with CNMI employers, business groups, representatives of community organizations, and the general public by conducting 13 town hall or public forum meetings on U.S. immigration law and procedures, with a particular focus on completion of the Form I-9, Employment Eligibility Verification. Topics discussed included (1) the process for CNMI nationals to apply for immigration benefits under U.S. law; (2) the process for U.S. citizens to file petitions for alien relatives; and (3) the requirements for aliens living in the CNMI to obtain the advance parole needed to travel abroad and return to the CNMI. For calendar year 2009, USCIS processed 515 CNMI applications for permanent residency and 50 CNMI applications for naturalization or citizenship, more than doubling the number of interviews conducted for applications for residency or citizenship from calendar year 2008, according to data provided by USCIS officials. By March 17, 2010, USCIS had also received 1,353 advance parole requests and approved 1,123 of them. USCIS also granted 705 paroles-in-place for domestic travel and 24 group paroles. To facilitate implementation of CNRA in the CNMI, DHS led meetings with DOI, DOL, and State, the other departments charged with implementing CNRA; reported to Congress on the budget and personnel needed by the DHS components; and initiated outreach to the CNMI government. Led interdepartmental meetings. From May 2008 through November 2009, DHS led, jointly with DOI, several interdepartmental meetings to discuss the implementation of CNRA, according to DHS, DOI, and DOL officials. Discussion during the meetings focused on operational and legal issues related to implementation of federal immigration law in the CNMI and on developing an interdepartmental memorandum of understanding of the departments' respective duties. According to DHS and DOL officials, by the end of March 2010, the memorandum had been finalized but not yet signed by the departments' Secretaries and was therefore not publicly available. Reported to Congress on needed budget and personnel. In January 2009, DHS submitted a report to Congress, as required by CNRA, on current and planned federal personnel and resource requirements. The report estimated that $97 million was necessary to fulfill all DHS responsibilities in the CNMI for fiscal years 2009 and 2010. In June 2009, responding to questions for the record in conjunction with a May 2009 hearing on the implementation of CNRA, DHS presented a new estimate of $148.5 million and described a phased approach to distribute costs from fiscal years 2009 to 2011. As of April 2010, DHS had not yet specified the changes in resources required for administering immigration and travel laws for the CNMI and Guam, as directed by Congress in its fiscal year 2009 appropriation. Initiated outreach to CNMI government. Although it has implemented CNRA primarily through its components, DHS has also initiated department-level outreach to the CNMI government.
Prior to November 28, 2009, the DHS Office of Policy—charged with coordinating DHS components and working with other federal departments involved in implementing CNRA—contacted the CNMI government and led several intercomponent DHS visits to the commonwealth to meet with CNMI officials and gather information related to the DHS components' efforts to establish federal border control in the CNMI. Additionally, in September 2009, the Secretary of DHS met with the Governor of the CNMI to discuss several aspects of CNRA implementation. The space that the CNMI government has provided for CBP operations at the Saipan and Rota airports is inadequate to meet CBP's basic facility requirements, and the two parties have not yet concluded negotiations for long-term occupancy agreements that would allow CBP to begin upgrading the facilities. The CBP Airport Technical Design Standards describes basic CBP facility requirements for international airports and reflects U.S. policy, procedures, and minimum development standards for the design and construction of CBP facilities at airports. These standards specify space requirements for CBP's primary, secondary, and administrative areas, among others, based on the size of the airport and the number of passengers processed per hour. In addition, U.S. law requires that airports designated as international airports provide the U.S. government, without charge, adequate space for inspection and temporary detention of aliens as well as for offices. CBP has estimated that it will process between 800 and 1,400 passengers per hour at peak hours at the Saipan International Airport and has designated the airport as a low-volume and midsize airport, requiring at least 15,000 square feet for primary and secondary screening and other space. CBP currently occupies approximately 9,390 square feet of airport space previously used by CNMI Immigration. CBP's current configuration at the airport does not include holding cells that meet federal standards; as a result, CBP lacks space to temporarily detain individuals who present a risk to public safety and to its officers. According to CBP officials, as of April 2010, CBP continued to seek access to approximately 7,200 additional square feet of space at the Saipan airport. CBP officials told us that they were considering three alternatives: reconfigure part of a 15,390-square-foot space that, as of January 2010, was leased for storage by a tenant but, according to CBP, was not in use; identify other space in the airport for reconfiguration, in close proximity to the current immigration processing area; or build an additional facility on airport land adjacent to CBP's immigration processing area at the Saipan airport. As of April 2010, CBP and the Commonwealth Port Authority had not concluded negotiations regarding long-term occupancy agreements for space at the Saipan and Rota airports or resolved key differences. CBP: In technical comments on a draft of this report, CBP stated that, given the CNMI's economic and financial conditions, the agency will initially fund any construction or reconfiguration required to bring existing CNMI airport facilities into compliance with CBP's operational requirements. CBP also stated that it was working to define its space needs and to complete facility design plans. However, CBP said that it would not rent airport space that the CNMI is obligated to provide at no cost.
CBP stated that it agreed with the CNMI regarding the need for discussion of identified options to meet CBP space needs and for negotiation of certain key points. As of May 2010, CBP officials reported that they had not requested that the DHS Office of Policy intervene in conversations with the CNMI government regarding long-term occupancy agreements for airport space. CNMI: According to CNMI officials, the Commonwealth Port Authority is aware that the airport space does not meet CBP operational requirements. However, the officials told us that the port authority is not in a financial position to provide space to CBP without charge, including space that is currently generating revenue from a tenant. In January 2010, CNMI port authority officials told us that CBP had not consulted with them regarding any construction plans, which would require their approval. Additionally, in commenting on a draft of this report in April 2010, the CNMI said that CBP had not officially communicated a request regarding its space needs. The CNMI further commented that the commonwealth is not prepared to enter into negotiations with CBP unless it is assured that the request for space has been cleared at least at the assistant secretary level at DHS and that the department has received the necessary assurance from Congress that the funds necessary to fulfill CBP's space needs will be available. ICE has been unable to conclude negotiations with the CNMI government to arrange access to detention space in the CNMI correctional facility. In March 2010, ICE estimated that it required 50 detention beds for its CNMI operations. Under a 2007 intergovernmental service agreement between the U.S. Marshals Service and the CNMI Department of Corrections, the CNMI adult correctional facility in Saipan provides the U.S. government 25 detention beds at a rate of $77 per bed per day. As of September 2008, less than 30 percent of the facility's beds (134 of 513) were filled. To obtain needed detention space, ICE proposed to either amend the 2007 U.S. Marshals Service agreement before it expired on April 1, 2010, or establish a new agreement with the CNMI government. As of March 2010, after a year of negotiation, ICE had not finalized an agreement with the CNMI government owing to unresolved cost documentation issues, according to a senior ICE official. In March 2009, ICE officials initiated discussion with the CNMI government regarding needed detention space and requested that CNMI representatives complete a jail service cost statement. In October 2009, representatives from the CNMI provided an incomplete jail service cost statement. The statement did not include capital construction costs, and CNMI representatives informed ICE officials that all estimates were preliminary and that the statement would require additional review. In November 2009, a CNMI official provided ICE with an e-mail containing top-level cost estimates, including capital and operating costs totaling approximately $107 per day. In December 2009, ICE requested additional documentation for the construction costs, and the CNMI Attorney General provided a second jail service cost statement with a further breakdown of the CNMI rate of $107 per day. An ICE assessment of the CNMI statement found that the CNMI had miscalculated certain costs and, after recalculating these costs, proposed a bed rate of approximately $89 per day. In January 2010, according to ICE officials, the CNMI acknowledged calculation errors but did not agree to a bed rate lower than $105.
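The practical stakes of the disputed bed rate can be illustrated with a rough calculation. The sketch below assumes, purely for illustration, year-round occupancy of the 50 beds ICE estimated it required; the daily rates are those discussed above, and the resulting annual totals are our illustrative arithmetic, not ICE or CNMI estimates.

# Rough, illustrative comparison of annual detention costs at the daily bed
# rates discussed in the ICE-CNMI negotiations. Assumes, hypothetically, that
# all 50 beds ICE estimated it required are occupied every day of the year.
BEDS = 50
DAYS_PER_YEAR = 365
rates_per_bed_day = {
    "2007 U.S. Marshals Service agreement rate": 77,
    "ICE recalculation of the CNMI cost statement": 89,
    "CNMI proposed rate": 105,
}
for label, rate in rates_per_bed_day.items():
    annual_cost = BEDS * DAYS_PER_YEAR * rate
    print(f"{label}: ${annual_cost:,} per year")
# Under these assumptions, the gap between the $89 and $105 rates is about
# $292,000 per year.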
Since January 2010, negotiations between ICE and the CNMI regarding detention space have been on hold. According to the ICE contracting official, the CNMI has not provided any additional information supporting its $105 rate. Before contracting for beds, ICE requires documentation that establishes a fair and reasonable cost. According to the CNMI Attorney General, further documentation for the $105 rate is not necessary because the commonwealth is negotiating as an equal partner rather than as an applicant submitting cost proposals to DHS. ICE officials noted that although they had briefed the DHS Office of Policy on this operational challenge, ICE had remained responsible for the negotiations because of its expertise. ICE officials also observed that the CNMI had rebuffed all ICE efforts to acquire detention space. According to ICE officials, ICE prefers to detain aliens with prior criminal records while they await their immigration removal hearings, owing to possible flight risk and danger to the community. Given the current lack of needed detention space, ICE has identified three alternatives regarding detainees it seeks to remove from the CNMI while removal proceedings are under way: 1. Issue orders of supervision. Since November 28, 2009, ICE has released 43 detainees into the CNMI community, including 27 with prior criminal records, under orders of supervision. According to ICE officials, orders of supervision are appropriate for detainees who do not present a danger to the community or a possible flight risk. 2. Pay to transport detainees to other U.S. locations. ICE can transport detainees to another detention facility, such as in Guam or Honolulu. Guam’s correctional facility charges $77 per day. As of March 1, 2010, ICE had paid approximately $5,000 to transport two detainees to Guam and one to Honolulu. 3. Pay CNMI’s daily rate at Saipan correctional facility. ICE may pay the CNMI’s $105 daily rate for each detainee, if the CNMI provides appropriate documentation justifying its proposed rate. In addition, because ICE has been unable to conclude its negotiations with the CNMI Department of Corrections, ICE cannot conduct immigration removal hearings for persons currently serving time in the CNMI corrections facility. As of March 1, 2010, ICE identified 26 CNMI prisoners serving criminal sentences in the local CNMI correctional facility for removal proceedings. In general, ICE attempts to conclude removal proceedings before inmates are released, in order to expedite removals and avoid additional detention costs, according to ICE officials. However, the CNMI Department of Corrections will not permit ICE to conduct immigration hearings at the facility unless ICE agrees to pay utility and access fees to establish video conferencing services in the CNMI prison. Officials with the CNMI correctional facility proposed a fee of $84 per day for utilities and to allow video conferencing hookups. According to an ICE official, ICE has agreements with other federal and state prisons in other U.S. locations to hold immigration hearings while inmates are incarcerated and has installed video-conferencing equipment, free of charge, to allow inmates to participate in their immigration proceedings while in custody. As of April 1, 2010, DHS components lacked direct access to CNMI immigration and border control data contained in two CNMI databases, LIDS and BMS. 
The CNMI government assigned a single point of contact in the CNMI Department of Labor to respond to CBP, ICE, and USCIS queries from the databases, most commonly for verification of an individual's immigration status. However, DHS component officials have expressed concerns about the reliance on the CNMI point of contact and stressed that it is imperative for the department to have direct access to the CNMI data systems to perform the department's mission with maximum efficiency. ICE officials expressed the following concerns regarding DHS's reliance on a single CNMI point of contact for requests for CNMI immigration data: ICE may lack information needed to support decisions regarding aliens' status or eligibility to remain in the CNMI. For example, ICE must rely on the CNMI point of contact for information to determine the status of a given individual with an umbrella permit. Relying on one CNMI point of contact to verify immigration status for individuals subject to ICE investigations could compromise security for ongoing operations. Because the CNMI point of contact is an indirect source, basing ICE detention and removal decisions on data provided by the point of contact could lead to those decisions' eventual reversal in court. Given that ICE operates 24 hours per day, 7 days per week, the CNMI point of contact cannot respond to all of ICE's needs in a timely manner. USCIS officials also expressed concerns regarding lack of direct access to LIDS: Direct access to LIDS would allow USCIS to verify information provided by applicants for immigration benefits such as advance parole. For example, when an applicant for advance parole presents the required CNMI-issued entry permit or umbrella permit, direct access to LIDS would let USCIS officials verify the authenticity of the permit. Direct access to the data would facilitate the processing of applications for CNMI-only work permits and for CNMI-only nonimmigrant treaty investor status. Direct access to CNMI immigration status information would assist USCIS in responding to interagency requests for immigration status verification through its SAVE program and in implementing the E-Verify program in the CNMI. In February 2010, CNMI officials reported that the point of contact assigned to work with the U.S. government had promptly supplied information on individual cases to U.S. officials from immigration and border control databases. Moreover, a senior CNMI official stated that if the point of contact is unable to respond to future DHS inquiries in a timely manner, CNMI officials would be willing to engage in additional discussions regarding more direct access to LIDS and BMS. According to ICE officials, the CNMI responses to ICE inquiries have not been timely and have not always provided sufficient information. Documentation that ICE provided shows that from late December 2009 through March 2010, ICE's Office of Detention and Removal made 68 inquiries to CNMI's Department of Labor to determine aliens' immigration status. We examined ICE's record of these inquiries and found that CNMI response times ranged from 16 minutes to around 23 hours, averaging roughly 4 and a half hours. ICE officials reported that the responses contained first and last names and LIDS numbers but rarely included biographical or identifying information, such as date of birth, nationality, or photographs, that could be used to further ICE investigations.
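The response-time figures above were derived from time-stamped entries in ICE's inquiry log. The following sketch shows the kind of calculation involved; the timestamps are hypothetical examples, not entries from the ICE records we reviewed.

# Illustrative calculation of response times from paired request/response
# timestamps, similar in spirit to our review of ICE's inquiry log.
# The sample timestamps below are hypothetical.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"
inquiries = [
    ("2010-01-05 09:00", "2010-01-05 09:16"),  # responded in 16 minutes
    ("2010-01-12 14:30", "2010-01-13 13:10"),  # responded in about 23 hours
    ("2010-02-02 08:45", "2010-02-02 11:05"),  # responded in about 2 hours
]
hours = [
    (datetime.strptime(resp, FMT) - datetime.strptime(req, FMT)).total_seconds() / 3600
    for req, resp in inquiries
]
print(f"shortest {min(hours):.1f} h, longest {max(hours):.1f} h, "
      f"average {sum(hours) / len(hours):.1f} h")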
An ICE official also told us that in late February 2010, he sent an inquiry regarding whether 214 aliens with pending deportation orders, referred to ICE by the CNMI Attorney General, had been granted valid work permits prior to November 28, 2009. According to ICE officials, by the end of March 2010, the CNMI Department of Labor had provided a blanket response that was insufficient to answer the inquiry. DHS has communicated, at the department and component levels, with the CNMI government regarding access to CNMI immigration data. In a July 2008 letter to the Governor of the CNMI, the DHS Office of Policy requested information on the current CNMI system for recording and documenting the entry, exit, work authorization, and authorized conditions of individuals staying in the CNMI. DHS also requested any repositories of fingerprints, photographs, or other biometric information included in the system. On August 19, 2008, the office of the Governor of the CNMI responded to the DHS letter by providing an overview of the BMS system but stated that the CNMI does not maintain any repositories of fingerprints or other biometric information to share with DHS. According to a CNMI official, the commonwealth requested fingerprint scanners from DHS but did not receive them. During the September 2009 meeting between the Governor of the CNMI and the Secretary of DHS, the Governor proposed, through the CNMI protocol for implementing CNRA, providing restricted access to information contained in LIDS and BMS, for a fee and in exchange for airline flight entry data. On February 18, 2010, the Governor sent a letter to CBP indicating that he had been preliminarily advised that CBP would not share with the CNMI advance passenger information provided by airlines, and he reiterated the CNMI's request for this information. The letter indicated that access to the airline flight data would facilitate CNMI efforts to prevent an increase in the number of aliens remaining in the commonwealth beyond their authorized stay. On March 31, 2010, the CBP Office of Field Operations responded to the CNMI letter, denying the CNMI access to advance passenger information provided by the airlines. The CBP letter stated that the CNMI's intended use of the data did not justify their release to CNMI authorities. The CBP letter further indicated that, given DHS's responsibility for removing aliens present in the CNMI beyond their authorized stay, it would be in the CNMI's and DHS's mutual interest for DHS to have access to CNMI immigration records or any other information that the Secretary deems necessary. In March 2010, CNMI officials told us that the commonwealth would not provide DHS increased access to immigration and border control data because DHS was unwilling to share airline flight data. In written comments on a draft of this report, the CNMI government stated its intention to appeal to the Secretary of Homeland Security the DHS decision not to share these data. U.S. agencies have begun to implement CNRA for workers, visitors, and investors, but key regulations are not final and, as a result, transition programs to preserve access to foreign workers and for investors are not yet available. In August 2008, we reported on key decisions that the agencies must make to implement the legislation. On November 25, 2009, the U.S. District Court for the District of Columbia issued a preliminary injunction prohibiting implementation of the DHS interim rule for the CNMI-only transitional worker program.
As a result, although federal immigration laws now apply to the CNMI, the regulatory framework for the CNMI-only transitional worker program is not yet in place and the permit program is not yet available. DHS has established the Guam-CNMI visa waiver program but did not include two countries, China and Russia, that the CNMI and Guam consider key to their tourist industries. According to DHS officials, a policy review is under way to determine whether the program should be revised to include these countries, and visitors from both nations meanwhile may enter the CNMI on the Secretary of Homeland Security's discretionary authority to grant parole on a case-by-case basis. The DHS rule for investors currently exists in a proposed form, and as a result, the regulatory framework for the CNMI-only investor status is not yet available. On October 27, 2009, DHS issued an interim rule comprising regulations to implement the CNMI-only work permit program established in CNRA for foreign workers not otherwise admissible under federal law. These regulations address (1) the number of permits to be issued, (2) the way the permits will be distributed, (3) the terms and conditions for the permits, and (4) the fees for the permits. The rule was scheduled to take effect in its current form on November 27, 2009. In issuing the interim rule, DHS announced that it would accept comments in the development of the final rule but was not following notice-and-comment rulemaking procedures, asserting that it had good cause not to do so. Table 2 shows the key decisions that CNRA calls for the Secretary of Homeland Security to make in implementing the CNMI-only work permit program. DHS's interim rule establishes the following: Number of permits. DHS will grant up to 22,417 CNMI-only work permits between November 28, 2009, and September 30, 2010, based on the CNMI government's estimate of the maximum number of foreign workers in the commonwealth on May 8, 2008. The interim rule notes that DHS will publish annually in the Federal Register its determination of the number of permits to be granted each year of the transition period. Distribution of permits. Under the CNMI-only work permit program, employers must petition for nonimmigrant workers to obtain status, so that DHS can administer the work permit program in a manner consistent with other nonimmigrant categories for temporary admission, such as H-1B visas. Accordingly, DHS created the CW-1 status, which it deemed to be synonymous with the term "permit" referenced in the legislation. DHS will determine whether an occupational category requires alien workers to supplement the resident work force. The DHS interim rule does not exclude any specific occupations from the program. However, the rule notes concerns that three occupational categories—dancing (such as exotic dancing), domestic workers, and hospitality workers—are subject to exploitation and abuse, and it invites comments on whether DHS should exclude these occupations in a final rule. Terms and conditions of the permit program. Employers must attest to their eligibility to petition for a CNMI-only work permit, and foreign workers must meet qualifications for positions. If a foreign worker is in the CNMI, the employer must attest that the worker is there lawfully. Additionally, the employer must attest that the position is nontemporary or nonseasonal and is in an occupational category as designated by the Secretary and that qualified U.S. workers are not available to fill the position. Permit fee.
The fee for the CNMI-only work permit is $470. This fee includes an annual supplemental fee of $150 per worker per year to fund CNMI vocational education, with the remaining $320 charged per Petition for a Nonimmigrant Worker in the CNMI (I-129CW). To reduce costs, an employer may name more than one foreign worker on each petition, provided that the workers are in the same occupational category, for the same period of time, and in the same location. In issuing the interim rule, DHS claimed that it qualified for an exemption from a requirement that federal agencies publish a notice of proposed rulemaking in the Federal Register and give the public 30 days to comment. DHS raised several points to support its finding that it had good cause to dispense with the notice-and-comment period for the CNMI-only work permit rule. For example, DHS asserted that 18 months is a short time frame in which to review the CNMI's immigration system and develop the regulatory scheme necessary to transition the CNMI to the U.S. federal immigration system. DHS noted in the interim rule that it would accept comments through November 27, 2009, and would consider those comments in developing a final rule. DHS stated that the interim rule would go into effect in its current form on November 27, 2009. The D.C. District Court found these arguments unpersuasive in its decision to issue a preliminary injunction for this rule. DHS received numerous comments on the interim rule from the CNMI government, a private sector group, and interested businesses and individuals. The CNMI government asserted that the rule was incomplete and would damage CNMI workers, employers, and the community and commented that the rule violated procedural requirements for agency rulemaking. In addition, the Saipan Chamber of Commerce raised concerns regarding the economic impact of the regulations and made a proposal to make it easier for workers with the CNMI-only work permit to return from travel outside the commonwealth. (See text box.) Comments from the CNMI Government and Private Sector on DHS Interim Rule for CNMI-Only Work Permit Program The CNMI government commented on the DHS interim rule, stating that, in addition to disregarding the notice and comment provisions of the Administrative Procedure Act, the rule was deficient for the following reasons, among others: The interim rule fails to implement the transitional worker program mandated by CNRA. It does not establish how permits are to be allocated among competing employers, and it does not establish a procedure for reducing the number of permits to zero by the end of the transition period. DHS failed to conduct a required economic impact analysis of the proposed rule. The interim rule will harm the Commonwealth's U.S. workers, foreign workers, employers, and community: The regulations do not provide preferences for U.S. workers and require only that employers attest that qualified U.S. workers are not available to fill the position. Based on CNMI experience with such an "attestation" system, the CNMI Department of Labor believes it will invite widespread abuse and decrease the job opportunities available to U.S. workers. The regulations would cause substantial harm to foreign workers in the CNMI by subjecting them to increased fees and abuses. For example, the CNMI Department of Labor finds that the federal system does not bar employers with records of prior labor abuse from hiring foreign workers and does not assure that employers have sufficient resources to pay wages.
The regulations hurt employers by defining "legitimate business" to exclude the direct employment of housekeepers or caregivers by households. The CNMI Department of Labor also notes the importance of male and female waiters, hosts, and entertainers to the tourist industry and states that prostitution and other forms of exploitation occur in the CNMI at a rate far lower than the U.S. national average. The regulations will hurt the community by greatly increasing the number of illegal aliens, with no concomitant federal enforcement capability to remove them. Comments from the Saipan Chamber of Commerce cite several concerns: the lack of a DHS schedule for allocating and reducing the number of worker permits and the possibility that DHS might restrict access to certain job categories for law enforcement purposes instead of directly targeting businesses that engage in illegal activity. Additionally, the chamber asks that multiple-entry visas be made available within the CNMI to workers who qualify for status under the interim rule. This would allow workers who travel abroad for a visit to return to the CNMI without undergoing the time-consuming and expensive federal visa process at a U.S. consulate. Because of the injunction issued in response to the CNMI's amended lawsuit against the U.S. government, the CNMI-only foreign work permits are not yet available. In its November 2, 2009, amendment to its ongoing lawsuit to overturn portions of CNRA, the CNMI filed a motion for preliminary injunction to prevent the operation of the DHS interim rule until the procedural violation it alleged was remedied. The CNMI argued that DHS had violated procedural requirements of the Administrative Procedure Act, which requires notice and the opportunity for public comment before regulations can go into effect. On November 25, 2009, the U.S. District Court for the District of Columbia issued an order prohibiting implementation of the interim rule, stating that DHS must consider public comments before issuing a final rule. In granting the preliminary injunction, the court found, among other things, that DHS had had a lengthy period in which to develop regulations and had not demonstrated that it had used that time to complete implementation as efficiently as possible. The court also noted that the commonwealth's residents and government had meaningful concerns about the regulations. In response to this preliminary injunction, DHS reopened the comment period from December 9, 2009, until January 8, 2010. As of May 2010, DHS had not yet issued a final rule and, as a result, CNMI-only work permits are not available. DHS plans to issue a final rule for the CNMI-only work permit program in September 2010. DOL officials informed us that they had not yet obtained sufficient experiential data to make a decision to extend the CNMI-only work permit program. DOL officials further indicated that a determination to extend the transition period well in advance of the expiration of the transition period may raise concerns about the validity of the Secretary's determination, in light of the factors that CNRA authorizes the Secretary to consider in making the determination (see table 3). DOL officials also told us that they still lacked key data on which to base an extension decision. On January 16, 2009, DHS issued an interim final rule for the Guam-CNMI joint visa waiver program, which is intended to allow visitors for business or pleasure to enter the CNMI and Guam without obtaining a nonimmigrant visa for a stay of no longer than 45 days.
DHS’s rule designates 12 countries or geographic areas, including Japan and South Korea, as eligible for participation in the program but excludes several countries that had been part of the previous Guam visa waiver program. DHS considered designating Russia and China as eligible for participation, because visitors from those countries provide significant economic benefits to the CNMI. However, because of political, security, and law enforcement concerns, including high nonimmigrant visa refusal rates, DHS deemed China and Russia as not eligible to participate in the program. Table 4 shows the key decision that, under CNRA, the Secretary of Homeland Security is to make regarding countries to be included in the Guam-CNMI visa waiver program. In developing the Guam-CNMI visa waiver program, DHS officials consulted with representatives of the CNMI and Guam governments, both of which sought the inclusion of China and Russia in the program. In the regulations, DHS states that after additional layered security measures are in place, DHS will make a determination as to whether nationals of China and Russia can participate in the visa waiver program. These security measures may include, among others, electronic travel authorization to screen and approve potential visitors prior to arrival in Guam and the CNMI. In May 2009, DHS officials informed Congress that the department is reconsidering whether to include China and Russia in the Guam-CNMI visa waiver program. DHS plans to issue a final rule for the Guam-CNMI visa waiver program in November 2010. Public comments on the proposed regulations from the Guam and CNMI governments and private sectors asked DHS to delay the Guam-CNMI visa waiver program implementation date, as allowed for in CNRA, from June 1, to November 28, 2009. The comments emphasized the economic significance of including China and Russia in the program. Guam officials argued that tourist arrivals in Guam from traditional markets were declining and that having access to China presented an important economic benefit. CNMI officials noted that the CNMI economy would be seriously damaged unless the CNMI retained access to the China and Russia tourism markets. (See text box.) Comments from CNMI and Guam Governments and Organized Private Sector on Interim Final Rule for Guam-CNMI Visa Waiver Program CNMI Government and Private Sector CNMI government comments on the interim final rule stressed the serious economic losses that would occur if China and Russia visitors were excluded from the visa waiver program and sought a delay in the program’s implementation until additional security measures are in place and DHS has amended the regulation to allow visitors from China and Russia under the program. The Saipan Chamber of Commerce sought to delay the implementation of the rule and asked that DHS identify the specific additional layered security measures that would allow it to reconsider its exclusion of China and Russia from the visa waiver program. Further, the chamber commented that the economic analysis used by DHS was substantially flawed, including an underestimate of the declines in tourists coming to the CNMI under standard U.S. visa requirements. Guam Governor and Private Sector The Guam Governor’s comments noted the economic benefit from the new provision allowing longer stays but identified the need to include visitors from China in the visa waiver program and the need for a formal mechanism to add countries to the program. 
The Governor supported the CNMI recommendation that implementation be delayed. The Guam Visitors Bureau also sought a delay in implementation so that additional layered security could be put in place, such that DHS could reach a determination to allow visitors from China and Russia. Guam private sector groups emphasized the economic benefits to Guam if DHS were to include China in the program. The private sector groups also identified China as a future growth market that could offset declines in visitors from Japan. On October 21, 2009, the Secretary of Homeland Security announced to Congress and the Governors of the CNMI and Guam the decision to parole tourists from China and Russia into the CNMI on a case-by-case basis for a maximum of 45 days, in recognition of their significant economic benefit to the commonwealth. CBP issued procedures for administering the parole in a bulletin to members of its Carrier Liaison Program and in internal guidance to staff. According to a State official, information regarding the decision to parole visitors did not reach Chinese officials working at the airports in that country and, as a result, the Chinese authorities suspended charter flight service between China and the CNMI between November 28, 2009, and December 18, 2009. According to CNMI officials, the suspension of charter flight service resulted in the loss of approximately $7.8 million in visitor revenue. DHS has proposed a rule to allow a large proportion of holders of CNMI foreign investor permits to obtain U.S. CNMI-only nonimmigrant treaty investor status during the transition period. Table 5 shows the decision, with its federal requirements and authorizations, that CNRA calls for the Secretary of Homeland Security to make regarding CNMI foreign investors. Eligibility for CNMI-only treaty investor status. In proposing to allow CNMI foreign investor permit holders to obtain U.S. CNMI-only nonimmigrant treaty investor status, DHS included three types of CNMI permits: the long-term business investor entry permit, the foreign investor entry permit, and the retiree investor entry permit. As we reported in 2008, long-term business entry permits accounted for a large proportion of CNMI foreign investor entry permits that were active and valid in July 2008. According to the DHS proposed rule, eligibility criteria for CNMI-only nonimmigrant treaty investor status during the transition period include, among others, having been physically present in the CNMI for at least half the time since the investor obtained CNMI investor status. Additionally, investors must provide evidence of maintaining financial investments in the CNMI, with long-term business investors showing an improved investment of at least $150,000. Validity period for CNMI-only treaty investor status. DHS proposed terminating the validity period for the CNMI-only nonimmigrant treaty investor status on December 31, 2014. Under the proposed rule, the status would terminate regardless of whether the temporary worker provisions are extended. DHS proposed the rule on September 14, 2009, and accepted comments until October 14, 2009. According to DHS's April 2010 Semiannual Regulatory Agenda, the department intends to issue a final rule for the investor program in July 2010. CNMI-only nonimmigrant treaty investor status will not be available until the final rule is issued with an effective date. DHS received several comments on the proposed rule from the CNMI government, Saipan Chamber of Commerce, and individuals (see text box).
Comments from CNMI Government and Organized Private Sector on Proposed DHS Rule for CNMI-only Nonimmigrant Investor Treaty Status In its comments on the proposed regulations, the CNMI government disagreed with DHS’s conclusion that the CNMI-only investor status must end in 2014, stating that the status would instead be extended if the U.S. Secretary of Labor extends the transition period for the CNMI-only worker program. Further, the CNMI noted that the proposed regulations would exclude many current CNMI investors from qualifying for the E-2 CNMI investor status. For example, the CNMI reported that about 85 of 514 long-term business entry permit holders could not qualify if an investment level of $150,000 is required. CNMI also reported that 251 of the 514 permit holders were granted at a $50,000 required investment level and were “grandfathered” in 1997 when the minimum investment requirement was increased. Further, the CNMI noted that the requirement of continuous residence is unnecessarily restrictive and would operate to exclude some of the CNMI’s current investors. For the period beyond the end of the transition period, the CNMI government projected that only 42 of 514 long-term business entry permit holders may be able to meet the minimum investment level to qualify for federal investor status. The Saipan Chamber of Commerce also provided several comments on the proposed regulations: The transition period for investors would be extended if the U.S. Secretary of Labor extends the transition period for the CNMI worker program. DHS has the option to extend grandfathered treaty investor status beyond the end of the transition period and should take this step to benefit the economy. All holders of CNMI Long-Term Business Certificates should be grandfathered, as the proposed regulations would exclude those who had received CNMI permits with less than a $150,000 investment and those who are not nationals of nations with which the United States maintains a treaty of friendship, commerce, or navigation. Multiple-entry visas should be made available to E-2 CNMI investors within the CNMI, to allow investors who travel abroad to return to the CNMI without undergoing the time-consuming and expensive federal visa process at a U.S. consulate. Responding to CNRA’s extension of federal immigration law to the CNMI, DHS components have taken a number of steps since November 28, 2009, to ensure effective border control procedures in the commonwealth and to protect national and homeland security. In 2008 and 2009, DHS also initiated department-level outreach to the CNMI government to facilitate the components’ implementation of CNRA. Additionally, DHS and other agencies have taken steps to implement CNRA provisions for workers, visitors, and investors, although the programs for workers and investors are not yet available to eligible individuals in the CNMI. Despite the DHS components’ progress in establishing federal border control in the CNMI, however, their inability to conclude negotiations with the CNMI government regarding access to airport space, detention facilities, and CNMI databases has resulted in continuing operational challenges. First, lacking occupancy agreements with the CNMI, CBP officers have continued to operate in CNMI airport space that does not meet the agency’s facility standards. 
Second, lacking an agreement with the CNMI government regarding detention space, ICE has released a number of aliens with criminal records into the CNMI community under orders of supervision and has paid to transport several detainees to Guam and Hawaii. Third, lacking direct access to CNMI's immigration and border control databases, ICE officials have instead directed data requests to a single CNMI point of contact, limiting their ability to quickly verify the status of aliens and compromising the security of ongoing operations. Although the DHS components have made continued efforts to overcome these operational challenges without department-level intervention, in each case, their efforts have encountered obstacles. Negotiations with the CNMI government for long-term access to the CNMI airports have not been concluded, and key differences remain unresolved; meanwhile, negotiations for access to CNMI detention facilities and databases have reached an impasse. Without department-level leadership, as well as strategic approaches and time frames for concluding its components' negotiations with the CNMI, DHS's prospects for resolving these issues are uncertain. To enable DHS to carry out its statutory obligation to implement federal border control and immigration in the CNMI, we recommend that the Secretary of Homeland Security work with the heads of CBP, ICE, and USCIS to establish strategic approaches and time frames for concluding negotiations with the CNMI government to resolve the operational challenges related to access to CNMI airport space, detention facilities, and information about the status of aliens. We provided a draft of this report to officials in DHS, DOI, DOL, State, and the governments of the CNMI and Guam for review and comment. We received written comments from DHS, DOI, the CNMI government, and the Guam government, which are reprinted in appendixes II, III, IV, and V, respectively. We also received technical comments from DHS and DOL, which we incorporated as appropriate. State did not provide comments. Following are summaries of the written comments from DHS, DOI, the CNMI government, and the Guam government and of our responses where appropriate. DHS. DHS agreed with our recommendation that the Secretary of Homeland Security work with the heads of CBP, ICE, and USCIS to establish strategic approaches and time frames for concluding negotiations with the CNMI to resolve the operational challenges related to CBP's access to airport space, ICE's contract negotiations regarding detention facilities, and the ability for DHS and its component agencies to obtain information about the status of aliens from databases under the control of the CNMI government. DOI. DOI stated that the report clearly sets out the problems of implementing the extension of U.S. immigration law to the CNMI and that the information contained in the report corresponds to the observations and analyses of the department's Office of Insular Affairs. CNMI government. The CNMI government raised concerns about the scope of our report and its support for several findings. The CNMI government expressed particular concern that we did not address certain issues that CNRA directed GAO to assess. As stated in the objectives of this report, we describe the steps taken by federal agencies to establish federal border control in the CNMI and the status of efforts to implement CNRA programs specific to the CNMI for workers, visitors, and investors.
Recognizing that the regulations establishing the CNMI-only programs for workers and investors are not yet available, we reached agreement with the offices of the addressees of this report to examine the likely economic impact of federalization after regulations are in place. The CNMI also expressed concerns regarding the timeliness and content of federal agencies’ regulations to implement the CNRA programs for workers, visitors, and investors and regarding DHS efforts to identify overstayers and remove aliens. In our report, we discuss the CNMI’s concerns regarding each regulation. Additionally, the CNMI raised concerns regarding the adequacy of our evidence in some cases. In responding to CNMI’s comments and after considering technical comments from DHS, we modified our discussion of CBP’s effort to acquire operational space at the Saipan airport. In addition, we added information from ICE tracking logs to our discussion of DHS’s interest in obtaining direct access to the CNMI’s immigration-related databases, and we clarified other sections as appropriate. (See app. IV for more details of our responses to the CNMI’s comments.) Guam government. The government of Guam made several observations about the interim final rule for the Guam-CNMI visa waiver program. First, Guam stated that the DHS Secretary's decision to use her authority to parole tourists from China and Russia into the CNMI, but not to use her authority similarly for such tourists seeking to enter Guam, contravenes Congress's intent that a unified visa waiver program operate in Guam and the CNMI. Second, Guam stated that CNRA was designed to expand tourism to the islands and that China and Russia must be added to the Guam-CNMI Visa Waiver Program to achieve that result. Third, Guam concluded that the interim final rule makes the eligibility requirements for the Guam-CNMI program more stringent than those of the U.S. visa waiver program. The Governor’s office asked for the immediate issuance of a final rule for the Guam-CNMI visa waiver program that is consistent with congressional intent, unifies the program, and provides both Guam and the CNMI with access to China’s and Russia’s tourist markets. We are sending copies of this report to interested congressional committees. We also will provide copies of this report to the U.S. Secretaries of Homeland Security, the Interior, Labor, and State and to the Governors of Guam and the CNMI. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. In this report we describe (1) the steps that have been taken to establish federal border control in the Commonwealth of the Northern Mariana Islands (CNMI) and (2) the status of efforts to implement the Consolidated Natural Resources Act of 2008 (CNRA) provisions with regard to workers, visitors, and investors. We plan to issue a subsequent report regarding the impact of implementation of the CNRA on foreign workers, the tourism sector, and foreign investors in the CNMI. In conducting our work, we reviewed legislation that applies U.S. immigration laws to the CNMI, namely, CNRA, the U.S. Immigration and Nationality Act (INA), and related regulations. 
To examine the relationship between the CNMI and the United States, we reviewed the CNMI-U.S. Covenant, the lawsuit between the CNMI and the United States to overturn specific provisions of the CNRA, and the CNMI protocol for implementing U.S. immigration law. We also reviewed related studies by GAO and the Congressional Research Service. We interviewed officials in Washington, D.C., from U.S. Department of Homeland Security (DHS) components Customs and Border Protection (CBP), U.S. Citizenship and Immigration Services (USCIS), and Immigration and Customs Enforcement (ICE), as well as officials from the U.S. Departments of the Interior (DOI), Labor (DOL), and State. To describe the steps that have been taken to secure the border in the CNMI, we visited the commonwealth, where we interviewed officials in the CNMI Office of the Governor, Department of Labor, and the Marianas Visitors Authority. We also interviewed representatives of the CNMI private sector, including the Saipan Chamber of Commerce. In addition, we observed CBP operations at the Saipan and Rota airport facilities. We reviewed U.S. agreements with the CNMI regarding airport occupancy and detention space at the local correctional facility. In addition, we reviewed formal letters between DHS and the CNMI government, as well as the CNMI Department of Labor's 2008 and 2009 Annual Reports to the Legislature. In general, to establish the reliability of the data that CBP uses to document arrivals, that ICE uses to document aliens, and that USCIS uses to document benefits in the CNMI, we systematically obtained information about the ways that the components collect and tabulate the data. When possible, we checked for consistency across data sources. Although the data provided by CBP, ICE, and USCIS have some limitations, we determined that the available data were adequate and sufficiently reliable for the purposes of our review. We did not include the U.S. Department of Justice in our review, because the department has a limited role in implementing CNRA. We also did not assess the validity of federal agencies' expected costs or operational needs in implementing the legislation. We did not review the extent to which U.S. laws were properly enforced. To describe the steps that DHS has taken to implement the CNRA provisions with regard to workers, visitors, and investors, we reviewed comments provided by the CNMI and Guam governments and organized private sectors regarding federal regulations. Specifically, we reviewed DHS's interim rule for CNMI-only worker permits, the interim final rule for the Guam-CNMI visa waiver program, and the proposed rule for CNMI-only nonimmigrant treaty investor status. We also reviewed documents provided by agency officials that describe the operation of the parole authority used to allow Chinese and Russian nationals to visit the CNMI for pleasure on a case-by-case basis. We interviewed the Governor of Guam and representatives of the private sector regarding the differences between the Guam visa waiver program and the Guam-CNMI visa waiver program. The following are GAO's comments to the CNMI government's letter, dated April 21, 2010. On the basis of the CNMI's comments as well as DHS technical comments, we revised our description of DHS's effort to acquire space at the airports, focusing on the current lack of space rather than describing DHS's process for seeking space.
22. The CNMI government states that CBP has not presented any specific requests for airport space to the responsible CNMI official. We followed up with CBP officials to discuss this point. CBP officials stated that the agency was working to define its space requirements and that it agreed with the CNMI regarding the need for discussion of identified options. We modified the report as appropriate. 23. The CNMI government states that it is not prepared to enter into negotiations unless it is assured that the request for space has been cleared at least at the assistant secretary level at DHS and that the department has received the necessary assurances from Congress that the funds necessary to fulfill CBP's space needs will be available. We modified the report as appropriate. 24. The CNMI government notes that the CNMI cannot responsibly give away public lands to a federal agency without a specific and demonstrated need and the availability of federal funds to achieve the agency's objectives in seeking the land. The CNMI further observes that the Covenant imposes certain restraints on the ability of the federal government to acquire land for public purposes in the commonwealth. We modified the text in our report to convey more clearly that CBP is seeking an agreement with the CNMI to provide space for CBP operations but is not seeking to acquire land. 25. The CNMI government comments that no CNMI government official could have stated in March 2010 that DHS was unwilling to share airline flight data, because CBP's letter of March 31, 2010, was not received in the commonwealth until about April 10, 2010. We modified the text in our report to state that CBP's letter reiterated information that DHS officials had previously provided to CNMI officials. We also modified the text in our report to state that the Governor of the CNMI's letter to the Secretary of Homeland Security on February 18, 2010, as well as the Governor's Special Legal Counsel in an interview in March 2010, said that DHS was unwilling to share airline flight data with the CNMI. 26. The CNMI government states that our discussion of the issues relating to BMS and LIDS reflects a lack of understanding of the characteristics and limitations of both databases. In February 2010, we issued a report on the two databases that incorporated information from prior work and relevant documents from the CNMI government, DHS, and DOI. Our February 2010 report also incorporated technical comments that the CNMI provided on a draft of the report; however, the report notes that the CNMI did not provide certain requested information owing to insufficient staff resources. Subsequent to publication of the February 2010 report, the CNMI sent us additional technical commentary, which we incorporated in this report's descriptions of the databases. 27. The CNMI government observes that we have reported elsewhere that DHS does not have an effective digital exit control system. We have added references to several prior GAO reports that highlight concerns regarding the capacity of DHS to identify overstaying visitors. 28. The CNMI government describes as unacceptable the CBP decision not to supply airline passenger data to the CNMI and states that it intends to appeal the CBP decision to the Secretary of Homeland Security. The report notes that CNRA requires, among other things, that the CNMI government provide DHS with all commonwealth immigration records. CNRA does not require DHS to share data with the CNMI and also does not preclude such data sharing.
We modified the text of our report to reflect the CNMI’s stated intention to appeal the CBP decision. ong other things, that the CNMI 29. The CNMI government states that access to the CNMI point of con ect gives ICE access to more definitive information than would dir access to LIDS, because LIDS is not yet completely an online operation. The CNMI adds that we would have learned this if we had ng field spoken with operational personnel in Saipan. While conducti work in Saipan in January we attempted to speak with the individual designated as ICE’s point of contact; however, he said that he was no t allowed to speak with us unless authorized by the CNMI Departmen of Labor. We sought interviews through the CNMI Department of Labor and were granted one interview with a senior official. Although the that official agreed to provide answers to our questions regarding not LIDS system, we were later told that additional information could be provided owing to insufficient staff resources. 30. The CNMI government states that we did not examine ICE’s records of its transmission of inquiries to, and receipt of replies from, the CNMI We examined one ICE unit’s log of e-mail requests for CNMI immigration data, covering late December 2009 through March 2010, and found that CNMI response times ranged from 16 minutes to 23 hours and 19 minutes, averaging 4 hours and 24 minutes. The CNMI . government also notes that its Department of Labor has no record of any ICE request emanating from an after-hours operation. ICE officials told us that they recognize that the CNMI official responsible for answering their inquiries works normal business hours and that they limit their inquiries to that time period. However, the ICE unit’s log shows one inquiry sent at 10:54 PM and the CNMI response received in 16 minutes. 31. The CNMI government infers that our report claims that DHS has proceeded expeditiously to remove illegal aliens from the CNMI. The CNMI’s inference is not accurate; our report neither states nor im that DHS has proceeded expeditiously in this regard. Our report of the 72 aliens being processed for removal has been that none deported and that federal immigration hearings take place during 1 week of every month. In addition to the person named above, Emil Friberg, Assistant Director; Michael P. Dino, Assistant Director; R. Gifford Howland; Julia A. Roberts; Ashley Alley; and Reid Lowe made key contributions to this report. Technical assistance was provided by Martin De Alteriis, Ben Bolitzer, Etana Finkler, Marissa Jones, and Eddie Uyekawa. American Samoa and Commonwealth of the Northern Mariana Islands: Wages, Employment, Employer Actions, Earnings, and Worker Views Since Minimum Wage Increases Began. GAO-10-333. Washington, D.C.: April 08, 2010. U.S. Insular Areas: Opportunities Exist to Improve Interior’s Grant Oversight and Reduce the Potential for Mismanagement. GAO-10-347. Washington, D.C.: March 16, 2010. CNMI Immigration and Border Control Databases. GAO-10-345R Washington, D.C.: February 16, 2010. Poverty Determination in U.S. Insular Areas. GAO-10-240R. Washington, D.C.: November 10, 2009. Medicaid and CHIP: Opportunities Exist to Improve U.S. Insular Area Demographic Data That Could Be Used to Help Determine Federal Funding. GAO-09-558R. Washington, D.C.: June 30, 2009. Commonwealth of the Northern Mariana Islands: Coordinated Federal Decisions and Additional Data Are Needed to Manage Potential Economic Impact of Applying U.S. Immigration Law. GAO-09-426T. Washington, D.C.: May 19, 2009. 
Commonwealth of the Northern Mariana Islands: Coordinated Federal Decisions and Additional Data Are Needed to Manage Potential Economic Impact of Applying U.S. Immigration Law. GAO-08-791. Washington, D.C.: August 4, 2008. Commonwealth of the Northern Mariana Islands: Pending Legislation Would Apply U.S. Immigration Law to the CNMI with a Transition Period. GAO-08-466. Washington, D.C.: March 28, 2008. Commonwealth of the Northern Mariana Islands: Serious Economic, Fiscal, and Accountability Challenges. GAO-07-746T. Washington, D.C.: April 19, 2007. U.S. Insular Areas: Economic, Fiscal, and Financial Accountability Challenges. GAO-07-119. Washington, D.C.: December 12, 2006. U.S. Insular Areas: Multiple Factors Affect Federal Health Care Funding. GAO-06-75. Washington, D.C.: October 14, 2005.
In May 2008, the United States enacted the Consolidated Natural Resources Act (CNRA), amending the United States' Covenant with the Commonwealth of the Northern Mariana Islands (CNMI) to establish federal control of CNMI immigration in 2009, with several CNMI-specific provisions affecting foreign workers and investors during a transition. CNRA requires that GAO report on implementation of federal immigration law in the CNMI. This report describes the steps federal agencies have taken to (1) secure the border in the CNMI and (2) implement CNRA with regard to workers, visitors, and investors. GAO reviewed federal laws, regulations, and agency documents; met with U.S. and CNMI officials; and observed federal operations in the CNMI. The Department of Homeland Security (DHS) components Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), and U.S. Citizenship and Immigration Services (USCIS) have each taken steps to secure the border in the CNMI in accordance with CNRA. From November 28, 2009, to March 1, 2010, CBP processed 103,565 arriving travelers at CNMI airports, and ICE processed 72 aliens for removal proceedings. In calendar year 2009, USCIS processed 515 CNMI applications for permanent U.S. residency and 50 CNMI applications for U.S. naturalization or citizenship. However, the DHS components face operational challenges and have been unable to negotiate solutions with the CNMI government. First, airport space available to CBP does not meet facility standards and CBP has not reached a long-term occupancy agreement with the CNMI. Second, ICE has not come to an agreement with the CNMI for access to detention space and as a result has transferred 3 of 30 aliens--convicted criminals under CNMI or U.S. law--to correctional facilities in Guam and Honolulu. Third, DHS efforts to gain direct access to the CNMI's immigration databases have been unsuccessful, hampering U.S. enforcement operations. DHS has begun to implement work permit and visa programs for foreign workers, visitors, and investors, but key regulations are not final and certain transition programs therefore remain unavailable. A lawsuit filed by the CNMI government challenging some provisions of the CNRA resulted in a court injunction delaying implementation of the CNMI-only transitional worker program until DHS considers public comments and issues a new rule. As a result this program is unavailable to employers as of May 1, 2010. DHS has established the Guam-CNMI visa waiver program. However, DHS did not include China and Russia, two countries that provide significant economic benefit to the CNMI. Currently, DHS allows nationals from these two countries into the CNMI temporarily without a visa under the DHS Secretary's parole authority. DHS is reconsidering whether to include these countries in the Guam-CNMI visa waiver program. Although DHS has proposed rules that apply temporary U.S. nonimmigrant treaty investor status to investors with CNMI foreign investor entry permits, the program is not yet available.
It is time for a fundamental rethinking of DOE’s missions. Created predominantly to deal with the energy crisis of the 1970s, DOE has changed its mission and budget priorities dramatically over time. By the early 1980s, its nuclear weapons production grew substantially; and following revelations about environmental mismanagement in the mid- to late-1980s, DOE’s cleanup budget began to expand, and now the task overshadows other activities. With the Cold War’s end, DOE has new or expanded missions in industrial competitiveness; science education; environment, safety, and health; and nuclear arms control and verification. Responding to changing missions and priorities with organizational structures, processes, and practices that had been established largely to build nuclear weapons has been a daunting task for DOE. For example, DOE’s approach to contract management, first created during the World War II Manhattan Project, allowed private contractors to manage and operate billion-dollar facilities with minimal direct federal oversight yet reimbursed them for all of their costs regardless of their actual achievements; only now is DOE attempting to impose modern standards for accountability and performance. Also, weak management and information systems for evaluating program’s performance has long hindered DOE from exercising effective oversight. In addition, DOE’s elaborate and highly decentralized field structure has been slow to respond to changing conditions and priorities, is fraught with communication problems, and poorly positioned to tackle difficult issues requiring a high degree of cross-cutting coordination. Experts we consulted in a 1994 survey support the view that, at a minimum, a serious reevaluation of DOE’s basic missions is needed. We surveyed nearly 40 former DOE executives and experts on energy policy about how the Department’s missions relate to current and future national priorities. Our respondents included a former President, four former Energy Secretaries, former Deputy and Assistant Secretaries, and individuals with distinguished involvement in issues of national energy policy. Overwhelmingly, our respondents emphasized that DOE should focus on core missions. Many believed that DOE must concentrate its attention more on energy-related missions such as energy policy, energy information, and energy supply research and development. A majority favored moving many of the remaining missions from DOE to other agencies or entities. 
For example, many respondents suggested moving basic research to the National Science Foundation, the Commerce or Interior departments, other federal agencies, or a new public-private entity; some multiprogram national laboratories to other federal agencies (or sharing their missions with other agencies); the management and disposal of civilian nuclear waste to a new public-private organization, a new government agency, or the Environmental Protection Agency; nuclear weapons production and waste cleanup to the Department of Defense (DOD) or a new government agency and waste cleanup to the Environmental Protection Agency; environment, safety, and health activities to the Environmental Protection Agency or other federal entities; arms control and verification to DOD, the State Department, the Arms Control and Disarmament Agency, or a new government nuclear agency; activities furthering industrial competitiveness to the Commerce Department or a public-private organization; and science education to the National Science Foundation or another federal agency. Recognizing the need to change, DOE has several efforts under way to strengthen its capacity to manage. For example, DOE’s reform of its contracting practices aims to make them more business-like and results-oriented; decision-making processes have been opened up to the public in an attempt to further break down DOE’s long-standing culture of secrecy, which has historically shielded the Department from outside scrutiny; and high-level task forces convened by DOE have made recommendations on laboratory and research management and on the Department’s missions. DOE is also developing a strategic plan aiming to arrange its existing missions into key “business lines.” While we have yet to evaluate how well DOE is reorganizing along these business lines, we did recently complete a review of DOE’s Strategic Alignment and Downsizing Initiative, which arose from the plan. We found that DOE’s planned budget savings are on target and that the Department is depending on process improvements and reengineering efforts to enable it to fulfill its missions under the reduced budgets called for by the Initiative. However, the cost-savings potential of DOE’s efforts is uncertain because most of them are just beginning and some are not scheduled to be completed for several years. For example, of DOE’s 45 implementation plans, 22 plans have milestones that delineate actions to be met after May 1996 and 5 of those plans have milestones that will not occur until the year 2000. Because these actions are in their early stages, it is not yet clear if they will reduce costs to the extent DOE envisioned. Although DOE’s reforms are important and much needed, they are based on the assumption that existing missions are still valid in their present forms and that DOE is the best place to manage them. Along with many of the experts we surveyed, we think a more fundamental rethinking of missions is in order. As we explained in an August 1995 report, two fundamental questions are a good starting point for developing a framework for evaluating the future of DOE and its missions: Which missions should be eliminated because they are no longer valid governmental functions? For those missions that are governmental, what is the best organizational placement of the responsibilities? Once agreement is reached on the appropriate governmental missions, a practical set of criteria could be used to evaluate the best organizational structure for each mission. 
These criteria—originally used by an advisory panel for evaluating alternative approaches to managing DOE's civilian nuclear waste program—allow for rating each alternative structure on the basis of its ability to promote cost-effective practices, attract talented technical specialists, be flexible to changing conditions, and be accountable to stakeholders. Using these criteria could help identify more effective ways to implement missions, particularly those that could be privatized or reconfigured under alternative governmental forms. Appendix II summarizes these criteria. Our work and the work of others have revealed the complex balancing of considerations in reevaluating missions. In general, deciding the best place to manage a specific mission involves assessing the advantages and disadvantages of each alternative institution for its potential to achieve that mission, produce integrated policy decisions, and improve efficiency. Potential efficiency gains (or losses) that might result from moving parts of DOE to other agencies need to be balanced against the policy reasons that first led to placing that mission in the Department. For example, transferring the nuclear weapons complex to DOD, as is proposed by some, would require carefully considering many policy and management issues. Because of the declining strategic role of nuclear weapons, some experts argue that DOD might be better able to balance resource allocations among nuclear and other types of weapons if the weapons complex were completely under its control. Others argue, however, that the need to maintain civilian control over nuclear weapons outweighs any other advantages and that little gain in efficiency would be achieved by employing DOD rather than DOE supervisors. Some experts we consulted advocated creating a new federal agency for weapons production. Similarly, moving the responsibility for cleaning up DOE's defense facilities to another agency or to a new institution, as proposed by some, requires close scrutiny. For example, a new agency concentrating its focus exclusively on cleanup would not have to allocate its resources among competing programs and could maximize research and development investments by achieving economies of scale in applying cleanup technology more broadly. On the other hand, separating cleanup responsibility from the agency that created the waste may limit incentives to reduce waste and to promote other environmentally sensitive approaches. In addition, considerable startup time and costs would accompany a new agency, at a time when the Congress is interested in downsizing the federal government. DOE's task force on the future of the national laboratories (the Galvin Task Force) has suggested creating private or federal-private corporations to manage most or all of the laboratories. Under this arrangement, nonprofit corporations would operate the laboratories under the direction of a board of trustees that would channel funding to various labs to meet the needs of both government and nongovernment entities. DOE would be a customer, rather than the direct manager of the labs.
The proposal raises important issues for the Congress to consider, such as how to (1) monitor and oversee the expenditure of public funds by privately managed and operated entities; (2) continue the laboratories' significant responsibilities for addressing environmental, safety, and health problems at their facilities, some of which are governed by legal agreements between DOE, EPA, and the states; and (3) safeguard federal access to facilities so that national priorities, including national security missions, are met. Other alternatives for managing the national labs exist: each has advantages and disadvantages, and each needs to be evaluated in light of the laboratories' capabilities for designing nuclear weapons and pursuing other missions of national and strategic importance. Furthermore, the government may still need facilities dedicated to national and defense missions, a possibility that would heavily influence any future organizational decisions. Finally, another set of criteria, developed by the National Academy of Public Administration (NAPA) in another context, could be useful for determining whether DOE should remain a cabinet-level department. These criteria, which are summarized in appendix III, pose such questions as the following: "Is there a sufficiently broad national purpose for the Department? Are cabinet-level planning, executive attention, and strategic focus necessary to achieve the Department's mission goals? Is cabinet-level status needed to address significant issues that otherwise would not be given proper attention?" Although DOE's strategic plan and Strategic Alignment and Downsizing Initiative address internal activities, they assume the validity of the existing missions and their placement in the Department. But DOE alone cannot make these determinations—they require a cooperative effort among all stakeholders, with the Congress and the administration responsible for deciding which missions are needed and how best to implement them. The requirements of the Government Performance and Results Act (GPRA) reinforce this concept by providing a legislative vehicle for the Congress and agencies to use to improve the way government works. The act requires, among other things, strategic plans based on consultation with the Congress and other stakeholders. These discussions are an important opportunity for the Congress and the executive branch to jointly reassess and clarify the agencies' missions and desired outcomes. Our work has shown that to be effective, decisions about the structure and functions of the federal government should be made in a thorough manner with careful attention to the effects of changes in one agency on the workings of other agencies. Specifically, reorganization demands a coordinated approach, within and across agency lines, supported by a solid consensus for change; it should seek to achieve specific, identifiable goals; attention must be paid to how the federal government exercises its role; and sustained oversight by the Congress is needed to ensure effective implementation. Given both the current budgetary environment and other proposals to more extensively reorganize the executive branch, the Congress could judge the feasibility and desirability of assigning to some entity the responsibility of guiding reorganizations and downsizing.
Even though there has been little experience abolishing federal agencies, officials with the Office of Personnel Management (OPM) articulated to us some lessons learned from their experiences: Agencies are usually willing to accept functions, but they are not necessarily willing to accept the employees who performed those functions in the abolished agency—doing so may put the receiving agency’s existing staff at increased risk of a reduction-in-force. Transferring functions that have an elaborate field structure can be very expensive. Transferred functions and staff may duplicate existing functions in the new agency, so staff may feel threatened, resulting in friction. Employees performing a function in the abolished agency may be at higher or lower grades than those performing the same function in the receiving agency. Terminating an agency places an enormous burden on that agency’s personnel office—it will need outside help to handle the drastic increase in paperwork due to terminations, grievances, and appeals. Regardless of what the Congress decides on the future of the DOE, a number of critical policy and management issues will require close attention regardless of their placement in the federal government or outside it. These issues include contract reform, major systems acquisitions, and environmental cleanup and waste management. DOE has a long history of management problems. At the core of many of these problems is its weak oversight of more than 110,000 contractor employees, who perform nearly all of the Department’s work. Historically, these contractors worked largely without any financial risk, they got paid even if they performed poorly, and DOE oversaw them under a policy of “least interference.” DOE is now reforming its contracting practices to make them more business-like and results-oriented. While we believe that these reforms, which we are currently evaluating, are generally a step in the right direction, at this time we are unsure whether the Department is truly committed to fully implementing some of its own recommendations. For example, in May 1996, the Secretary announced the extension of the University of California’s three laboratory contracts (currently valued at about $3 billion). DOE’s decision to extend, rather than “compete” these enormous contracts—held by the University continuously for 50 years—violates two basic tenets of the Department’s philosophy of contract reform. First, contracts will be competed except in unusual circumstances. Second, if current contracts are to be extended, the terms of the extended contracts will be negotiated before DOE makes its decision to extend them. DOE justified its decision on the basis of its long-term relationship with the University. However, the Secretary’s Contract Reform team concluded that DOE’s contracting suffered from a lack of competition, which was caused, in part, by several long-term relationships with particular contractors. DOE has historically been unsuccessful in managing its many large projects—those that cost $100 million or more and that are important to the success of its missions. Called “major acquisitions,” these projects include accelerators for high-energy and nuclear physics, nuclear reactors, and technologies to process nuclear waste. Since 1980, DOE has been involved with more than 80 major acquisitions. We currently have work underway for the Senate Governmental Affairs Committee examining DOE’s success with these acquisitions. 
Our work indicates that many more projects are terminated prior to completion than are actually completed. Many of these projects had large cost overruns and delays. This work will also address efforts to improve the acquisition process and contributing causes of these problems. The causes appear to include constantly changing missions, which makes maintaining support over the long term difficult; annual, incremental funding of projects that does not ensure that funds are available when needed to keep the projects on schedule; the flawed system of incentives that has sometimes rewarded contractors despite poor performance; and an inability to hire, train, and retain enough people with the proper skills. Another issue needing long-term attention is cleaning up the legacy of the nuclear age. This monumental task currently assigned to DOE includes both the environmental problems created by decades of nuclear weapons production and the management and disposition of highly radioactive waste generated by over 100 commercial nuclear power plants. Although the Department has made some progress on both fronts, major obstacles remain. One obstacle common to both efforts is the estimated total cost over the next half century. According to DOE, cleaning up its complex of nuclear weapons facilities could cost as much as $265 billion (in 1996 dollars) and disposing of highly radioactive waste from commercial nuclear power plants could cost another $30 billion (in 1994 dollars). Even though DOE received over $34 billion between 1990 and 1996 for environmental activities, it has made limited progress in addressing the wide range of environmental problems at its sites. In managing its wastes, DOE has encountered major delays in its high-level waste programs and has yet to develop adequate capacity for treating mixed waste (which includes both radioactive and hazardous components) at its major sites. Finally, DOE has begun deactivating only a handful of its thousands of inactive facilities. On the basis of our reviews over the last several years of DOE’s efforts to clean up its nuclear weapons complex, we have identified many ways to potentially reduce the cost. These methods can be applied regardless of who has the responsibility for the cleanup. For example, DOE has usually assumed that all of its facilities will be cleaned up for subsequent unrestricted use; however, because many of these facilities are so contaminated, unrestricted use of them is unlikely, even after cleanup. By incorporating more realistic land-use assumptions into its decision-making, DOE could, by its own estimates, save from $200 million to $600 million annually. Also, to reduce costs, DOE is now preparing to privatize portions of the cleanup, most notably the vitrification of high-level waste in the tanks at its Hanford facility. But key issues need to be considered, including whether DOE has adequately demonstrated that privatization will reduce the total cost and whether DOE is adequately prepared to assume management and safety oversight responsibilities over the private firms. Moreover, DOE cannot permanently dispose of its inventory of highly radioactive waste from the Hanford tank farms and other facilities until it has developed a geologic repository for this waste generated by the commercial nuclear power industry and DOE. 
Utilities operating more than 100 nuclear power plants at about 70 locations have generated about 32,000 metric tons of highly radioactive waste in the form of spent (used) fuel and are expected to have produced about 85,000 metric tons of spent fuel by the time the last of these plants has been retired in around 30 years. Although an operational repository was originally anticipated as early as 1998, DOE now does not expect to determine until 2001 whether the site at Yucca Mountain, Nevada, is suitable and, if it is, does not expect to begin operating a repository there until at least 2010. Following a call from 39 Members of Congress for a presidential commission to review the nuclear waste program, legislation that includes reforms is pending this year in both the House and the Senate; and some experts, including DOE's own internal advisory panel, have called for moving the entire program to the private sector. Mr. Chairman, this concludes our prepared statement. We would be pleased to respond to any questions that you or other Members of the Committee may have. The following criteria, adapted from a former DOE advisory panel that examined the Department's civilian nuclear waste program, offer a useful framework for evaluating alternative ways to manage missions. These criteria were created to judge the potential value of several different organizational arrangements, which included an independent federal commission, a mixed government-private corporation, and a private corporation. Mission orientation and focus: Will the institution be able to focus on its mission(s), or will it be encumbered by other priorities? Which organizational structure will provide the greatest focus on its mission(s)? Credibility: Will the organizational structure be credible, thus gaining public support for its action? Stability and continuity: Will the institution be able to plan for its own future without undue concern for its survival? Programmatic authority: Will the institution be free to exercise needed authority to accomplish its mission(s) without excessive oversight and control from external sources? Accessibility: Will stakeholders (both federal and state overseers as well as the public) have easy access to senior management? Responsiveness: Will the institution be structured to be responsive to all its stakeholders? Internal flexibility: Will the institution be able to change its internal systems, organization, and style to adapt to changing conditions? Political accountability: How accountable will the institution be to political sources, principally the Congress and the President? Immunity from political interference: Will the institution be sufficiently free from excessive and destructive political forces? Ability to stimulate cost-effectiveness: How well will the institution be able to encourage cost-effective solutions? Technical excellence: Will the institution attract highly competent people? Ease of transition: What will be the costs (both financial and psychological) of changing to a different institution? The following criteria were developed by the National Academy of Public Administration as an aid to deciding whether a government organization should be elevated to be a cabinet department. However, they raise issues that are relevant in judging cabinet-level status in general. 1. Does the agency or set of programs serve a broad national goal or purpose not exclusively identified with a single class, occupation, discipline, region, or sector of society? 2.
Are there significant issues in the subject area that (1) would be better assessed or met by elevating the agency to a department and (2) are not now adequately recognized or addressed by the existing organization, the President, or the Congress? 3. Is there evidence of impending changes in the type and number of pressures on the institution that would be better addressed if it were made a department? Are such changes expected to continue into the future? 4. Would a department increase the visibility and thereby substantially strengthen the active political and public support for actions and programs to enhance the existing agency’s goals? 5. Is there evidence that becoming a department would provide better analysis, expression, and advocacy of the needs and programs that constitute the agency’s responsibilities? 6. Is there evidence that elevation to a cabinet department would improve the accomplishment of the existing agency’s goals? 7. Is a department required to better coordinate or consolidate programs and functions that are now scattered throughout other agencies in the executive branch of government? 8. Is there evidence that a department—with increased centralized political authority—would result in a more effective balance within the agency, between integrated central strategic planning and resource allocation and the direct participation in management decisions by the line officers who are responsible for directing and managing the agency’s programs? 9. Is there evidence of significant structural, management, or operational weaknesses in the existing organization that could be better corrected by elevation to a department? 10. Is there evidence that there are external barriers and impediments to timely decision-making and executive action that could be detrimental to improving the efficiency of the existing agency’s programs? Would elevation to a department remove or mitigate these impediments? 11. Would elevation to a department help recruit and retain better qualified leadership within the existing agency? 12. Would elevation to a department promote more uniform achievement of broad, cross-cutting national policy goals? 13. Would elevation to a department strengthen the Cabinet and the Executive Office of the President as policy and management aids for the President? 14. Would elevation to a department have a beneficial or detrimental effect upon the oversight and accountability of the agency to the President and the Congress? Department of Energy: A Framework For Restructuring DOE and Its Missions (GAO/RCED-95-197, Aug. 21, 1995). Department of Energy: Framework Is Needed to Reevaluate Its Role and Missions (GAO/T-RCED-95-232, June 21, 1995). Department of Energy: Alternatives for Clearer Missions and Better Management at the National Laboratories (GAO/T-RCED-95-128, Mar. 9, 1995). Nuclear Weapons Complex: Establishing a National Risk-Based Strategy for Cleanup (GAO/T-RCED-95-120, Mar. 6, 1995). Department of Energy: National Priorities Needed for Meeting Environmental Agreements (GAO/RCED-95-1, Mar. 3, 1995). Department of Energy: Research and Agency Missions Need Reevaluation (GAO/T-RCED-95-105, Feb. 13, 1995). Department of Energy: National Laboratories Need Clearer Missions and Better Management (GAO/RCED-95-10, Jan. 27, 1995). Department of Energy: Need to Reevaluate Its Role and Missions (GAO/T-RCED-95-85, Jan. 18, 1995). Nuclear Waste: Comprehensive Review of the Disposal Program Is Needed (GAO/RCED-94-299, Sept. 27, 1994). 
Energy Policy: Ranking Options to Improve the Readiness of and Expand the Strategic Petroleum Reserve (GAO/RCED-94-259, Aug. 18, 1994). Department of Energy: Management Changes Needed to Expand Use of Innovative Cleanup Technologies (GAO/RCED-94-205, Aug. 10, 1994). Department of Energy: Challenges to Implementing Contract Reform (GAO/RCED-94-150, Mar. 24, 1994). DOE's National Laboratories: Adopting New Missions and Managing Effectively Pose Significant Challenges (GAO/T-RCED-94-113, Feb. 3, 1994). Financial Management: Energy's Material Financial Management Weaknesses Require Corrective Action (GAO/AIMD-93-29, Sept. 30, 1993). Department of Energy: Management Problems Require a Long-Term Commitment to Change (GAO/RCED-93-72, Aug. 31, 1993). Energy Policy: Changes Needed to Make National Energy Planning More Useful (GAO/RCED-93-29, Apr. 27, 1993). Energy Management: High-Risk Area Requires Fundamental Change (GAO/T-RCED-93-7, Feb. 17, 1993). Nuclear Weapons Complex: Issues Surrounding Consolidating Los Alamos and Livermore National Laboratories (GAO/T-RCED-92-98, Sept. 24, 1992). Department of Energy: Better Information Resources Management Needed to Accomplish Missions (GAO/IMTEC-92-53, Sept. 29, 1992). Naval Petroleum Reserve: Limited Opportunities Exist to Increase Revenues From Oil Sales in California (GAO/RCED-94-126, May 5, 1994). High-Risk Series: Department of Energy Contract Management (GAO/HR-93-9, Dec. 1992). Comments on Proposed Legislation to Restructure DOE's Uranium Enrichment Program (GAO/T-RCED-92-14, Oct. 29, 1991). Nuclear Waste: Operation of Monitored Retrievable Storage Facility Is Unlikely by 1998 (GAO/RCED-91-194, Sept. 24, 1991).
GAO discussed the Department of Energy's (DOE) future, focusing on DOE efforts to restructure its missions and address policy and management issues. GAO noted that: (1) DOE is having a difficult time responding to its changing mission and organizational structure; (2) DOE is unable to evaluate its activities due to weak management and information systems; (3) DOE has a highly decentralized field structure that is unable to respond to changing conditions and priorities, fraught with communication problems, and ill-equipped to handle cross-cutting issues; (4) many former DOE officials and other experts believe that DOE should concentrate on several key issues such as energy policy, energy information, and energy supply research and development; (5) DOE is reforming its contracting practices to make them more business-like and results-oriented, opening up its decisionmaking processes to the public, and organizing high-level task forces on laboratory and research management; (6) DOE is on target with its planned budget savings under the Strategic Alignment and Downsizing Initiative and is depending on its process improvements and reengineering efforts to fulfill its mission under reduced budgets; (7) a governmentwide approach to restructuring DOE is desirable, since transferring any DOE mission will have a broad impact on other federal agencies; and (8) DOE will have to address contract reform, acquisitions, and environmental cleanup and waste management issues to effectively restructure its organization.
Through what is referred to as the 24-hour rule, CBP generally requires vessel carriers to electronically transmit cargo manifests to CBP 24 hours before cargo is loaded onto U.S.-bound vessels at foreign ports. Through the Importer Security Filing and Additional Carrier Requirements (known as the 10+2 rule), CBP requires importers and vessel carriers to provide data elements for improved identification of cargo shipments that may pose a risk for terrorism. Importers are responsible for supplying CBP with 10 shipping data elements—such as country of origin—24 hours prior to loading, while vessel carriers are required to provide 2 data elements—container status messages and stow plans—that are not required by the 24-hour rule. The data provided by carriers and importers in compliance with the 24-hour rule and the 10+2 rule are automatically fed into CBP's Automated Targeting System (ATS)—an enforcement and decision support system that compares cargo and conveyance information against intelligence and other law enforcement data. ATS consolidates data from various sources to create a single, comprehensive record for each U.S.-bound shipment. Among other things, ATS uses a set of rules that assess different factors in the data to determine the risk level of a shipment. One set of rules within ATS, referred to collectively as the maritime national security weight set, is programmed to check for information or patterns that could be indicative of suspicious or terrorist activity. Each rule in the set has a specific weight value assigned to it, and for each risk factor that the rules identify, the weight values are added together to calculate an overall risk score for the shipment. ATS assesses and generates risk scores for every cargo shipment as the shipment moves throughout the global supply chain and new data are provided or existing data are revised. CBP classifies the risk scores from the maritime national security weight set as low, medium, or high risk. Shipments with connections to known or suspected terrorists, as well as those that include invalid information, are more likely to be classified as high risk; and shipments from shippers who participate in CBP's C-TPAT program, or "trusted shippers," are more likely to be classified as low risk. Because ATS collects and presents data on shipments, CBP targets shipments—rather than individual containers—for examination. ATS automatically places high-risk shipments on hold, and CBP officials (targeters) use information in ATS to identify (target) which high-risk shipments should be examined or waived. If a shipment is held for examination, a CBP Anti-Terrorism Contraband Enforcement Team (enforcement team) is to conduct the examination, which is to include scanning the cargo with NII equipment, among other things. Enforcement team officials are to review the images produced with the NII equipment to detect anomalies or shielding that could indicate the presence of weapons of mass destruction or other contraband. If an anomaly is detected, the shipment is to be transferred to a centralized examination station and the contents of the container are to be removed and physically examined. If contraband is discovered during the physical examination, the shipment is to be seized by CBP; otherwise it is to be released. The enforcement team is responsible for recording the examinations it conducts, as well as the results, in the Cargo Enforcement Reporting and Tracking System (CERTS)—a module within ATS.
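The weighted rule-set scoring described above can be illustrated with a short sketch. ATS's actual rules, weight values, and thresholds are not public; the rule names, weights, and cutoffs below are hypothetical and are intended only to show how summed rule weights could be bucketed into low-, medium-, and high-risk classifications.

```python
# Minimal illustrative sketch (not CBP's actual system): a weighted rule set
# assigns a score to a shipment record, and the score is bucketed into
# low/medium/high risk. Rule names, weights, and thresholds are hypothetical.

from typing import Callable, Dict, List, Tuple

Shipment = Dict[str, object]

# Each rule is (name, weight, predicate); the predicate flags a risk factor.
RULES: List[Tuple[str, int, Callable[[Shipment], bool]]] = [
    ("invalid_consignee_address", 40, lambda s: not s.get("consignee_address")),
    ("first_time_shipper",        25, lambda s: s.get("prior_shipments", 0) == 0),
    ("trusted_shipper_ctpat",    -30, lambda s: bool(s.get("ctpat_member", False))),
]

def risk_score(shipment: Shipment) -> int:
    """Sum the weights of every rule whose risk factor is present."""
    return sum(weight for _, weight, hit in RULES if hit(shipment))

def risk_level(score: int, medium: int = 30, high: int = 60) -> str:
    """Bucket the score into the low/medium/high classification."""
    if score >= high:
        return "high"
    return "medium" if score >= medium else "low"

example = {"consignee_address": "", "prior_shipments": 0, "ctpat_member": False}
score = risk_score(example)
print(score, risk_level(score))  # 65 high -> such a shipment would be placed on hold
```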
Examinations of high-risk shipments can be waived if a CBP targeter determines that a high-risk shipment meets a "standard exception" or an "articulable reason." CBP policy lists the standard exceptions to mandatory examinations. Waivers based on articulable reasons are issued for reasons other than the standard exception categories. If a CBP targeter conducts analysis of available information on a high-risk shipment and determines there is no security risk, he or she is to seek approval from the port director or his/her designee(s) and record the waiver reason in CERTS within ATS. Figure 1 depicts possible targeting outcomes for high-risk shipments. In addition to obtaining manifest and shipping data (e.g., 10+2 data), CBP requires the importers of goods to file entry documents so CBP can assess and collect duties. Data provided in the entry documents are assessed by ATS and can cause a shipment's risk score, previously classified as low or medium risk based on manifest and shipping data, to become high risk. Entry documents can also have the opposite effect. For example, entry information can confirm an entity is a C-TPAT member and, therefore, drop the shipment's score below the high-risk threshold. Entry documents can be provided several days after a shipment's arrival in the United States and after a shipment leaves the port. CBP targeters are assigned to targeting units located at or near selected domestic ports, and their targeting efforts are focused on shipments destined for ports within their respective regions. A targeting unit may be responsible for targeting shipments arriving at one port or multiple ports in its region. For example, targeters at the Port of Newark are also responsible for targeting shipments that are bound for ports in New York. CBP targeters at targeting units can review data as soon as carriers and importers submit the required data (in accordance with the 24-hour rule and the 10+2 rule), and the data are available in ATS. Once a shipment is loaded onto a U.S.-bound vessel, CBP targeters continue to review shipment data in ATS because shipment data can be updated with additional or amended information. Targeters use other sources, such as public records, open sources (e.g., Internet search engines), U.S. government systems, and local port knowledge to assess whether the shipment poses a high risk or whether the risk can be mitigated based on research. In August 2007, the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act) was enacted, which required, among other things, that by July 2012, 100 percent of U.S.-bound cargo containers be scanned at foreign ports with both radiation detection and NII equipment before being placed on U.S.-bound vessels. In May 2012, the then secretary of homeland security authorized a 2-year extension (until July 2014) of the deadline for implementing the requirement. In May 2014, the Secretary of Homeland Security renewed the extension (until July 2016) and stated that "DHS's ability to fully comply with this unfunded mandate of 100 percent scanning, even in long term, is highly improbable, hugely expensive, and in our judgment, not the best use of taxpayer resources to meet this country's port security and homeland security needs." The Secretary also stated that he instructed DHS, including CBP, to do a better job of meeting the underlying objectives of the 100 percent scanning requirement by, in part, refining aspects of CBP's layered security strategy.
We have previously reported on the challenges CBP faces in implementing the 100 percent scanning requirement. In October 2009, we recommended, among other things, that CBP conduct feasibility and cost-benefit analyses of implementing the 100 percent scanning requirement and provide the results to Congress along with any suggestions of cost-effective alternatives to implementing the 100 percent scanning requirement, as appropriate. CBP partially concurred with the recommendations but did not implement them. We have also reported on the programs that compose CBP's layered security strategy. Specifically, we have reviewed CBP's efforts to collect additional data through the 10+2 rule and utilize these data to identify high-risk shipments; examine high-risk shipments before they depart CSI ports; and validate security measures taken by C-TPAT members. We made several recommendations in these reports, including that CBP establish milestones and time frames for including 10+2 data in its criteria used in the identification of high-risk shipments. In December 2010, CBP provided us with a project plan for integrating the data into its criteria, and in early 2011, CBP implemented the updates to address risk factors present in the 10+2 data. We determined that less than 1 percent of the maritime shipments arriving in the United States from fiscal years 2009 through 2013 were high risk; however, CBP does not have accurate data on the number and disposition of each high-risk shipment because of various factors. On the basis of our analyses, CBP's data overstate the number of high-risk shipments, including those not examined/not waived. CBP is taking steps to improve its data on the disposition of high-risk shipments. On the basis of our analyses of CBP data for fiscal years 2009 through 2013, on average each year, approximately 11.6 million maritime shipments arrived in the United States, and less than 1 percent of those were determined by ATS to be high risk based on the maritime national security weight set. CBP, on average, examined the vast majority of these high-risk shipments, with less than 10 percent waived or not examined/not waived. The numbers and percentages discussed above represent CBP's data on the number of high-risk shipments and their disposition, but our analyses suggest that CBP does not have accurate data on the disposition of each high-risk shipment because of various factors. In particular, CBP's data overstate the number of high-risk shipments, including those not examined/not waived. On the basis of our analysis of selected waived shipments, CBP's data may also overstate the number of waived shipments since not all shipments identified as waived were waived, but we were unable to determine the full extent to which some shipments identified as not examined/not waived were actually waived. CBP officials stated that the data include (1) shipments where the carrier deleted the bill of lading, meaning the shipments ultimately never arrived in the United States (referred to as deleted bills), and (2) shipments for which ATS assigned high risk scores only after entry was filed and the shipment had been released from the port. In further iterations of the data CBP provided us, CBP officials were able to identify the shipments in these two categories and therefore provide us with more accurate data on the number and disposition of high-risk shipments.
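The data-cleaning step described above, setting aside deleted bills and shipments whose scores rose above the high-risk threshold only after entry was filed, can be sketched as a simple filter. The field names and record layout below are hypothetical and are not CBP's actual schema; the sketch only illustrates the kind of logic involved.

```python
# Illustrative sketch only: filtering a shipment-disposition dataset to set aside
# records that were never truly high risk at arrival -- (1) deleted bills of
# lading and (2) shipments whose score crossed the high-risk threshold only
# after entry was filed. Field names are hypothetical, not CBP's schema.

from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Optional

@dataclass
class ShipmentRecord:
    bill_of_lading: str
    bill_deleted: bool                        # carrier deleted the bill; cargo never arrived
    arrival_time: Optional[datetime]
    first_high_risk_time: Optional[datetime]  # when the score first crossed the threshold
    disposition: str                          # "examined", "waived", or "not examined/not waived"

def truly_high_risk_at_arrival(rec: ShipmentRecord) -> bool:
    """Keep only shipments that arrived and were already high risk on arrival."""
    if rec.bill_deleted or rec.arrival_time is None:
        return False
    return rec.first_high_risk_time is not None and rec.first_high_risk_time <= rec.arrival_time

def summarize(records: List[ShipmentRecord]) -> Dict[str, int]:
    """Count dispositions only for shipments that required an examination or waiver."""
    counts: Dict[str, int] = {}
    for rec in filter(truly_high_risk_at_arrival, records):
        counts[rec.disposition] = counts.get(rec.disposition, 0) + 1
    return counts
```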
On the basis of our analyses of CBP’s fiscal year 2009 through 2013 data, deleted bills and shipments’ risk scores not rising above the high- risk threshold until after entry accounted for 8 percent, on average, of the high-risk shipments identified in CBP’s data as waived. However, such shipments were not high-risk shipments requiring review by a targeter and therefore would not have been waived. Further, our analyses of CBP’s data showed that deleted bills and risk scores not rising above the high-risk threshold until after entry accounted for 48 percent, on average, of the high-risk shipments identified in CBP’s data as not examined/not waived. Therefore, nearly half of the shipments identified as not examined/not waived in CBP’s data were, in fact, not high-risk shipments requiring an examination or waiver even though they were identified as such. In addition to deleted bills and shipments with high risk scores only after entry was filed, we also identified other factors contributing to CBP not having accurate disposition data on high-risk shipments. We discussed a nonprobability sample of high-risk shipments with CBP officials at the four targeting units we visited. Specifically, we discussed two sets of shipments—40 high-risk shipments identified in CBP’s data as waived, We found that and 40 shipments identified as not examined/not waived.CBP did not have accurate disposition data for the 40 high-risk shipments identified as waived since 28 shipments were actually waived. For example, we determined that 3 shipments were examined, but the examinations were not recorded by CBP officials in CERTS within ATS. According to CBP officials at one targeting unit we visited, the 3 shipments were not recorded because of confusion over who was to record the examination. See table 1 for the actual disposition of the 40 high-risk shipments we analyzed that were identified in CBP’s data as waived. We determined that of the 40 high-risk shipments identified in CBP’s data as not examined/not waived, 1 should have been examined, and 1 waived. The remaining 38 shipments were incorrectly identified as not examined/not waived for various reasons, including 5 shipments that were examined or waived and properly recorded, but ATS did not link the records to the shipments. See table 2 for the actual disposition of the 40 high-risk shipments identified in CBP’s data as not examined/not waived. CBP data on gate out occurrences—cargo targeted for terrorism or enforcement that is released from CBP custody and departs a port without authorization or examination—also call into question the accuracy of CBP’s disposition data on high-risk shipments identified as not examined/not waived. CBP requires ports to have a process in place for identifying gate outs and, in 2006, developed a uniform process for ports to report gate outs to CBP’s OFO. According to CBP data collected through this process, the number of gate out occurrences is far less than the number of high-risk shipments identified as not examined/not waived, which would equate to a gate out. The number of not examined/not waived high-risk shipments should, in theory, be the same as the number of gate outs. In response to our findings, CBP officials acknowledged that the factors discussed above have contributed to CBP’s data not being accurate, and they noted that CBP is taking steps to improve its data on the disposition of high-risk shipments. 
For example, CBP has already developed a query to identify shipments in its data that are not truly high risk at the time of arrival, including deleted bills and shipments with high risk scores only after arrival and entry is filed. CBP officials added that they are updating the National Maritime Targeting Policy to include the requirement that cargo examinations and waivers be recorded in CERTS within ATS since not all officials are adhering to this requirement and the policy will be finalized after we complete our review. In addition, enhancing certain oversight mechanisms, as discussed later in the report, could help address the inaccuracies in CBP’s data. When determining the disposition of high-risk shipments, CBP’s targeting units are inconsistently applying criteria to make some waiver decisions and are also incorrectly documenting the reasons for waivers. On the basis of our review of CBP policy and visits to selected targeting units, we determined that CBP has not established uniform definitions for standard exception waiver categories; some CBP officials were unaware of existing waiver guidance for articulable reason waivers; and some CBP targeters across the targeting units we visited were inconsistently and inaccurately recording waiver reasons in ATS. As a result, CBP cannot accurately determine the extent to which standard exception waivers are used consistently or whether waivers issued for articulable reasons are being used judiciously, as required by policy. CBP’s National Maritime Targeting Policy lists several standard exception waivers, and we found inconsistencies regarding how certain standard exception waiver categories are defined across the four targeting units we visited. At these targeting units, we found that CBP targeters consistently review manifest and shipping data, including data provided through the 10+2 rule, to search for evidence that would indicate whether or not a shipment should be waived based on a standard exception. However, the criteria targeters at these targeting units are using to make these determinations are not uniformly established by any central CBP guidance or policy. Instead, they are developed locally by targeting unit officials primarily on the basis of their experience and institutional knowledge of the targeting process. Although CBP’s National Maritime Targeting Policy identifies what the standard exception categories are, it does not provide definitions for what specifically constitutes the various standard exception categories. Because of the lack of CBP-wide definitions for the standard exception categories, CBP targeters may be holding some shipments for examinations that should be waived. Alternatively, CBP targeters may also be waiving shipments that should have been examined. Differences in how frequently targeting units receive certain types of shipments may also affect the inconsistent interpretation of standard exceptions, and ultimately influence the variance seen in the proportion of high-risk shipments targeting units waive. Defining standard exception waiver categories and disseminating those definitions in policy would better allow targeting units and targeters to consistently apply criteria when making and recording waiver decisions, and could help ensure that CBP is examining shipments as intended. It is key that government agencies implement effective internal controls in order to minimize operational problems and achieve desired program results. 
According to Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1), control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s objectives. Examples of control activities include establishment and review of performance measures and indicators, accurate and timely recording of transactions and events, and appropriate documentation of transactions and internal controls. CBP officials acknowledged that establishing definitions for standard exceptions could reduce the inconsistent interpretation and documentation of standard exception waivers. CBP’s National Maritime Targeting Policy states that the port director or his/her designee(s) is/are responsible for reviewing and approving high-risk shipment waiver requests based on articulable reasons. While the targeting units we visited had proper procedures in place for requesting and approving waivers, we found that CBP targeters at those targeting units are not correctly documenting waivers based on articulable reasons in accordance with CBP guidance. CBP issued a memorandum in February 2007 that provides guidance on how targeters are to record waivers in ATS based on articulable reasons. According to that guidance, targeters are to select a specific drop-down menu option in CERTS as the reason for every articulable reason waiver they issue, and then targeters are to provide comments in CERTS to support the justification for the waiver. However, during our visits to the targeting units, and on the basis of conversations we had with CBP targeters, we found that targeters’ understanding of how to record waivers in CERTS within ATS varied. Our analysis of selected waived shipments (as previously discussed in this report) and discussions with CBP targeters indicate that some targeters are incorrectly recording some waiver reasons in ATS because they are not familiar with the February 2007 CBP memorandum that specifies how articulable reason waivers are to be recorded. According to a senior CBP official, the National Maritime Targeting Policy, which was last disseminated prior to the implementation of CERTS, has not been updated to address the process for recording articulable waivers in CERTS. Neither we nor CBP could easily determine the full extent to which articulable waiver reasons were recorded properly because waivers based on articulable reasons cannot be segregated from waivers issued for standard exceptions. The inconsistent recording of articulable reason waivers in ATS limits CBP’s ability to determine whether targeting units are following policy, since CBP’s National Maritime Targeting Policy states that waivers based on articulable reasons are to be used “judiciously.” To evaluate whether targeting units are judiciously making waiver decisions based on articulable reasons, CBP would need accurate records in ATS of why shipments were waived so that it could easily distinguish the two types of waivers. In particular, CBP is reliant on targeters’ selecting the appropriate waiver reasons from the drop-down menu selections in CERTS in order to be able to accurately analyze waiver data. Although the comments in the remarks section of CERTS justifying the waiver may provide further details regarding the reasons for waivers, it would be difficult for CBP to accurately determine waiver reasons on the basis of large-scale data queries of comments in the remarks section alone.
Therefore, it is important for targeters to select the proper drop-down menu options when documenting waiver reasons. According to Standards for Internal Control in the Federal Government, management must continually assess and evaluate its internal controls to ensure that the control activities being used are effective and updated when necessary. Updating and disseminating guidance in policy on how to record articulable reason waivers will help ensure that they are correctly recorded. CBP has some mechanisms to provide oversight of its policies on the disposition of high-risk shipments, such as biannual self-inspections; however, these are not sufficient to fully identify whether officials are complying with policy on examinations and waivers. Further, CBP could enhance the quality of its reports on the disposition of high-risk shipments. CBP’s OFO has mechanisms to determine if CBP policies on the disposition of high-risk shipments are being followed. Specifically, OFO monitors compliance with policies on the disposition of high-risk shipments through three efforts: Self-Inspection Program: OFO requires port directors (or their designees) to complete self-inspection worksheets on cargo targeting every other year to determine whether CBP officials are following policy when it comes to examining high-risk shipments and recording those examinations. In addition to reporting any deficiencies, the inspection reports sent to OFO are to include the corrective actions taken to address the deficiencies identified. Quarterly performance reports: CBP compiles data from ATS on high- risk shipments to measure CBP’s performance in reviewing high-risk shipments in support of GPRA. Among other things, the quarterly reports provide data on the number of high-risk shipments that arrived at each U.S. seaport, including the number of shipments reviewed by a targeter and the number waived. Monitoring gate outs: CBP requires ports to uniformly report all gate out occurrences to OFO as soon as the gate out is discovered. The local port submits a notification worksheet to the Office of Cargo, Conveyance and Security (within OFO) that, in turn, determines if a gate out truly occurred. However, as discussed below, we found weaknesses in some of the mechanisms CBP uses to provide oversight of its policies. We found that CBP’s efforts are not sufficient to identify when officials are not following policies on high-risk shipments and, subsequently, deficiencies in how data are recorded in ATS, including examinations and waivers. The Self-Inspection Program requires port directors or their designees to analyze selected shipments and complete a worksheet composed of three questions to check compliance with policies on examining high-risk shipments and recording examinations. The three worksheet questions are as follows: 1. Were cargo examination findings input within CERTS and, when appropriate, within other systems? 2. Were all shipments that received an ATS score at or above the national security threshold score placed on hold? 3. Did all shipments that received an ATS score at or above the national security threshold undergo a mandatory examination utilizing, at a minimum, an NII imaging system and screening with radiation detection technology? 
The guidance provided on how to select shipments to answer the three questions, depending on the question, states that 10 percent of all shipments are to be randomly selected, with a minimum of 10 shipments and no more than 20 shipments, or 20 shipments are to be randomly selected among high-risk shipments. After port directors or their designees submit their completed worksheets, CBP OFO calculates the national compliance rate by dividing the number of worksheets that have at least 1 shipment that did not conform to policy by the total number of self-inspection worksheets submitted to OFO. Given that the sample size is generally the same for all port directors regardless of the number of shipments their ports receive, on the basis of our analysis, the sample does not provide CBP with an efficient estimate of compliance at the national level. For example, by allocating additional samples to ports with more arriving shipments, the national compliance estimate could have a reduced sampling error. Through its Self- Inspection Program, CBP has identified a minimal number of noncompliant shipments requiring corrective action. For example, five corrective actions were taken nationwide based on the 2011 self- inspection reporting cycle, and two corrective actions based on the 2012 cycle. CBP’s compliance rate was over 93 percent for both cycles, leading CBP to reduce the frequency of the self-inspections for maritime cargo targeting from every year to every other year, according to CBP OFO officials. However, as discussed previously, we determined that 3 of 40 shipments identified as waived were actually examined, but the examinations were not properly recorded and 6 of 40 shipments identified as not examined/not waived were examined or waived, but not properly recorded. Thus, CBP’s compliance estimates may be overstated. CBP could improve its ability to estimate national compliance rates for maritime cargo targeting practices through different methods. For example, CBP could increase the number of shipments sampled by ports with the greatest number of arriving shipments or consider a stratified sample with strata defined in terms of size of ports and other factors that might be related to compliance or risk, such as whether shipments appear in its data as not examined/not waived. By enhancing its methodology for selecting shipment samples, CBP could better identify any deficiencies and take appropriate corrective actions. According to a Director in OFO, CBP is considering changing the shipment sample size to a percentage of high-risk shipments, which would result in port directors of larger ports analyzing a greater number of shipments than the 20 shipments per question they currently sample. However, CBP has not finalized its plans for making this change. Further, CBP’s method for calculating the compliance rate does not accurately reflect compliance nationwide because it does not calculate the rate based on the number of shipments sampled. Rather, CBP calculates the rate based on the number of worksheets that contained shipments for which policy was not followed regardless of whether it was 1 shipment or all shipments included in the worksheet. According to guidance on design evaluations, the data should be analyzed in a way that allows valid conclusions to be drawn from the evaluation. 
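The following worked example, with invented numbers, illustrates how a worksheet-level compliance measure can diverge from one computed over individual shipments. It is a hypothetical sketch, not CBP's actual data or calculation.

    # Hypothetical worksheet results; each records how many shipments were sampled
    # and how many did not conform to policy.
    worksheets = [
        {"sampled": 20, "noncompliant": 0},
        {"sampled": 20, "noncompliant": 1},   # one bad shipment makes the whole worksheet "deficient"
        {"sampled": 20, "noncompliant": 6},
        {"sampled": 10, "noncompliant": 0},
    ]

    # Worksheet-level measure: share of worksheets with no noncompliant shipments.
    deficient_worksheets = sum(1 for w in worksheets if w["noncompliant"] > 0)
    worksheet_rate = 1 - deficient_worksheets / len(worksheets)

    # Shipment-level measure: compliant shipments over total shipments sampled.
    total = sum(w["sampled"] for w in worksheets)
    noncompliant = sum(w["noncompliant"] for w in worksheets)
    shipment_rate = 1 - noncompliant / total

    print(f"worksheet-based compliance: {worksheet_rate:.0%}")   # 50%
    print(f"shipment-based compliance:  {shipment_rate:.0%}")    # 90%

In this invented case the two measures differ sharply, which is why the basis of calculation matters for drawing valid conclusions about national compliance.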
CBP could improve its ability to estimate national compliance with maritime cargo targeting practices by changing its compliance rate calculation to divide the number of shipments (the unit of measure) that did not conform to policy by the total number of shipments sampled, rather than calculating the rate based on the number of worksheets. In addition to the limited nature of the self-inspections, we also found that CBP does not fully assess the reliability of the data it analyzes on the disposition of high-risk shipments for the quarterly reports it produces in support of GPRA. Through our reliability assessment of CBP’s data, we identified high-risk shipments that were not high risk and shipments for which the disposition (e.g., waived) was not recorded accurately (as previously discussed). An independent review team contracted by DHS to verify and validate the completeness and reliability of CBP’s performance data used for the GPRA measure on high-risk cargo determined that CBP only addressed significant anomalies in its data contained in the quarterly reports and recommended, in May 2014, that CBP develop a formal quality control process for field data (Energetics, Independent Verification and Validation of Performance Measure Data: FY 2014 Review and Report of Findings, May 2014). CBP concurred with the recommendation “with reservations,” adding that it believed reinforcing existing policies and procedures through the local CBP command structure would address the errors. However, in CBP’s data we also found errors not attributable to field data. In May 2014, the Secretary of Homeland Security instructed DHS, including CBP, to do a better job of meeting the underlying objectives of the 100 percent scanning requirement by, in part, refining aspects of CBP’s layered security strategy. Given that examining and waiving, if appropriate, high-risk shipments are critical aspects of CBP’s strategy, it is important for CBP to ensure that these practices are carried out consistently and that results of its targeters’ actions regarding the disposition of high-risk cargo shipments are recorded accurately. CBP is starting to take actions to correct errors in its data and revise its National Maritime Targeting Policy, but it needs to take further actions to ensure its policies are consistently being followed and to enhance the reliability of its high-risk shipment data. For example, without defining the standard exceptions in policy, CBP is not able to ensure that all high-risk shipments are being appropriately examined or waived. Further, without updating and disseminating policy on how to record such waivers, CBP will not be able to determine whether its targeting units are using articulable reason waivers judiciously, as called for in policy. Moreover, enhancing the methodology used in CBP’s Self-Inspection Program will allow it to better identify instances where policy is not being followed and implement corrective actions. Such actions, in turn, could help CBP better ensure that its policies on the disposition of high-risk shipments are being followed.
To help ensure compliance with policies on waiving high-risk shipments, we recommend that the Commissioner of CBP direct OFO to take the following two actions: develop a definition for each of the standard exception waiver categories and include those definitions in policy to ensure that targeting units are consistently applying those definitions when making and documenting waiver decisions in CERTS, and update and disseminate policy to ensure that all targeting units are correctly documenting waivers based on articulable reasons in CERTS. To enhance oversight of the disposition of high-risk shipments— examinations and waivers—we recommend that the Commissioner of CBP direct OFO to take two actions: develop an enhanced methodology for selecting shipment samples used for self-inspection to increase the likelihood that any potential deficiencies will be identified so that corrective actions can be taken to reduce errors in the future, and develop a better national estimate of compliance with maritime cargo targeting policies by calculating the compliance rate based on individual shipments rather than worksheets. We provided a draft of the sensitive version of this report to DHS for its review and comment. DHS provided technical comments, which have been incorporated into this report, as appropriate. DHS also provided written comments, which are reprinted in appendix III. In its comments, DHS concurred with the report’s four recommendations and described actions it has under way or planned to address the recommendations by June 30, 2015. DHS concurred with the first recommendation and stated that CBP plans to develop a definition for each of the standard exception waiver categories. DHS concurred with the second recommendation and stated that CBP will provide guidance on issuing waivers based on articulable reasons in its updated National Cargo Targeting Policy. DHS concurred with the third recommendation and stated that CBP has updated its self- inspection worksheet for the 2015 inspection cycle in response to the recommendation. DHS concurred with the fourth recommendation and stated that CBP will develop the ability to generate reports on noncompliant high-risk shipments and require port directors or their designees to review the reports and take corrective actions based on noncompliant shipments. If implemented as planned, these actions should address the intent of the recommendations to improve CBP’s disposition of high-risk shipments. In its comments, DHS also referred to a fifth recommendation related to reviewing port codes. Because DHS deemed the details of this recommendation and its response as sensitive security information, they are not included in this public version of the report. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or GroverJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This appendix describes the core programs related to U.S. Customs and Border Protection’s (CBP) strategy for ensuring the security of maritime cargo. CBP has developed this strategy to mitigate the risk of weapons of mass destruction, terrorist-related material, or other contraband being smuggled into the United States. 
CBP’s strategy is based on related programs that attempt to focus resources on high-risk shipments while allowing other cargo shipments to proceed without unduly disrupting the flow of commerce into the United States. The strategy includes obtaining advanced cargo information to identify high-risk shipments, using technology to inspect cargo, and partnering with foreign governments and the trade industry. Table 3 provides a brief description of some of the core programs that compose this security strategy. This report addresses U.S. Customs and Border Protection’s (CBP) disposition of high-risk maritime cargo shipments. More specifically, our objectives were to examine (1) the number of maritime shipments arriving in the United States from fiscal years 2009 through 2013 that CBP determined to be high risk and the extent to which CBP has accurate data on the disposition of each of those high-risk shipments, (2) the extent to which CBP is consistently applying standards and documenting reasons for waiving examinations of high-risk shipments, and (3) the extent to which CBP ensures that its policies on the disposition of high-risk shipments are being followed. To address all of these objectives, we reviewed CBP policies regarding the targeting and waiving of high-risk shipments, analyzed CBP data, and spoke with key CBP officials at both headquarters and selected targeting units. To determine the number of maritime shipments arriving in the United States that CBP determined to be high risk, we obtained data from CBP on the number of shipments and high-risk shipments that arrived in the United States by seaport during fiscal years 2009 through 2013—the 5 most recent fiscal years for which full-year data were available at the time of our review. To determine the extent to which CBP has accurate data on the disposition of each high-risk shipment, we analyzed CBP’s data to determine the number of high-risk shipments examined, and waived—the disposition options we identified in CBP’s National Maritime Targeting Policy—as well as those shipments not examined/not waived.We excluded foreign cargo remaining on board (FROB) shipments— cargo not discharged in the United States—from the scope of our review to focus on shipments unloaded at United States seaports. To assess the reliability of the data, we reviewed the data for obvious errors and discussed our observations with CBP officials who compiled the data. We also discussed with CBP officials how the data are entered and maintained and interviewed officials who enter the data in CBP’s Automated Targeting System (ATS). We also selected a nonprobability sample of shipments from CBP’s fiscal year 2013 data to determine the accuracy of the disposition data. Specifically, from the list of waived shipments in CBP’s fiscal year 2013 shipment data, we selected a nonprobability sample of 40 waived shipments that represented a variety of waiver reasons recorded in ATS. We also selected a second nonprobability sample of 40 not examined/not waived shipments from the same set of data, to include shipments CBP identified as deleted bills and shipments with high-risk scores only after entry was filed. These shipment samples were associated with the four Advance Targeting Units (targeting units) we visited (see below for site visit selection criteria). We discussed the accuracy of both sample sets with targeting unit officials. 
We determined that CBP’s data likely overstate the number of high-risk shipments, including those not examined/not waived, but the data are sufficiently reliable to illustrate the overall disposition of all high-risk shipments by category—examined, waived, and not examined/not waived—since a small percentage of shipments were waived and not examined/not waived relative to the number examined. In addition to analyzing CBP’s disposition data, we collected and analyzed gate out data for fiscal years 2009 through 2013 from CBP’s Fines, Penalties, and Forfeitures Division in order to determine the frequency of gate out occurrences relative to the overall number of not waived/not examined shipments. To assess the reliability of the data, we reviewed gate out case files and assessed whether the information contained in the files matched with fiscal years 2009 through 2013 gate out data we received from CBP. We also discussed the process for collecting and recording gate out data with CBP officials from the Fines, Penalties, and Forfeitures Division. We determined that the gate out data were sufficiently reliable for reporting the number of gate out occurrences. To determine the extent to which CBP is consistently applying standards and documenting reasons for waiving examinations of high-risk shipments, we analyzed CBP’s data on high-risk shipment waivers recorded during fiscal year 2013, reviewed CBP policies and guidance on waiving examinations of high-risk shipments, and observed and discussed waiver practices at the targeting units we visited. We conducted site visits to four targeting units that appeared to issue the greatest percentage of waivers relative to the total high-risk maritime cargo shipments that arrived in their respective seaports in fiscal year 2013. These targeting units also represent a variety of geographical locations within the United States, as they are situated on the Gulf, East, and West Coasts. Additionally, they encompass seaports of varying sizes based on arriving shipments, ranging from approximately 146,000 shipments to 4.4 million shipments. We reviewed selected samples of waived shipments in CBP’s data with officials at each targeting unit in order to gain an understanding of how targeters selected and documented waiver reasons, including standard exceptions for each of the shipments. Although the results from our visits to these four targeting units are not generalizable to all targeting units across the United States, the visits allowed us to understand whether waiver documentation practices are consistent across the targeting units we visited, and how such practices affect the reliability of CBP’s disposition data. We asked CBP officials at each targeting unit we visited to define certain standard exception waivers listed in CBP’s National Maritime Targeting Policy in order to determine whether targeting units were consistently using uniform criteria when applying standard exception waivers to high- risk shipments. Additionally, we asked targeters to comment on their awareness of any existing CBP policy or central guidance regarding the definition of the various standard exceptions. 
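As a simple illustration of the site-selection computation described above, the following sketch ranks targeting units by the share of high-risk shipments waived, using hypothetical port names and counts rather than CBP data.

    # Hypothetical counts; not CBP data.
    ports = {
        "Port A": {"high_risk": 1200, "waived": 300},
        "Port B": {"high_risk": 800,  "waived": 40},
        "Port C": {"high_risk": 2500, "waived": 500},
        "Port D": {"high_risk": 600,  "waived": 180},
    }

    # Sort ports from highest to lowest waiver rate.
    ranked = sorted(ports.items(),
                    key=lambda kv: kv[1]["waived"] / kv[1]["high_risk"],
                    reverse=True)

    for name, counts in ranked:
        rate = counts["waived"] / counts["high_risk"]
        print(f"{name}: {rate:.1%} of high-risk shipments waived")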
We compared CBP’s practices relative to standard exception waivers against standards in Standards for Internal Control in the Federal Government, which state that control activities should be effective and efficient in accomplishing the agency’s objectives. We spoke with CBP officials at each of the four targeting units we visited about their processes for reviewing and approving articulable reason waivers for high-risk shipments and assessed their compliance with CBP policy on approving waivers documented in the National Maritime Targeting Policy. At each of the four targeting units, we also met with CBP targeters to discuss and observe their practices for recording examination waivers based on articulable reasons in ATS. We then compared those practices with CBP guidance on how to record articulable reason waivers, as prescribed in a February 2007 CBP memorandum. In order to determine the extent to which CBP ensures that its policies on the disposition of high-risk shipments are being followed, we met with CBP officials at both headquarters and targeting units responsible for the management of high-risk maritime shipments and discussed their oversight of CBP’s policies. We met with officials from CBP’s Office of Field Operations (OFO) to discuss their design and implementation of the Self-Inspection Program for maritime cargo targeting, including the shipment sample size used to assess compliance and the method for calculating compliance. We reviewed CBP reports summarizing the results of self-inspections, including the compliance rate, as well as individual reports submitted by the four targeting units we visited. We then compared CBP’s methodology for conducting self-inspections with best practices outlined in guidance on design evaluations (GAO-12-208G). We also reviewed quarterly reports produced by CBP outlining the disposition of high-risk shipments and discussed with officials from OFO how these reports are used to determine whether CBP is meeting a Government Performance and Results Act (GPRA) performance goal. We compared CBP’s efforts to assess the accuracy of its data used in support of GPRA with standards in Standards for Internal Control in the Federal Government. We also met with officials from CBP’s Fines, Penalties, and Forfeitures Division to discuss CBP’s procedures for identifying and reporting gate outs. We conducted this performance audit from January 2014 through January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Jennifer Grover, Director, (202) 512-7141 or GroverJ@gao.gov. In addition to the contact named above, Stephen Caldwell (Director), Christopher Conrad (Assistant Director), Lisa Canini, and Daniel McKenna made key contributions to this report. Also contributing to this report were Frances Cook, Eric Hauswirth, Tracey King, Stanley Kostyla, Thomas Lombardi, Ruben Montes de Oca, Jessica Orr, and Mark Ramage. Maritime Critical Infrastructure Protection: DHS Needs to Better Address Port Cybersecurity. GAO-14-459. Washington, D.C.: June 5, 2014. Maritime Security: Progress and Challenges with Selected Port Security Programs. GAO-14-636T. Washington, D.C.: June 4, 2014.
Maritime Security: Progress and Challenges in Key DHS Programs to Secure the Maritime Borders. GAO-14-196T. Washington, D.C.: November 19, 2013. Supply Chain Security: DHS Could Improve Cargo Security by Periodically Assessing Risks from Foreign Ports. GAO-13-764. Washington, D.C.: September 16, 2013. Supply Chain Security: CBP Needs to Conduct Regular Assessments of Its Cargo Targeting System. GAO-13-9. Washington, D.C.: October 25, 2012. Maritime Security: Progress and Challenges 10 Years after the Maritime Transportation Security Act. GAO-12-1009T. Washington, D.C.: September 11, 2012. Supply Chain Security: Container Security Programs Have Matured, but Uncertainty Persists over the Future of 100 Percent Scanning. GAO-12-422T. Washington, D.C.: February 7, 2012. Supply Chain Security: CBP Has Made Progress in Assisting the Trade Industry in Implementing the New Importer Security Filing Requirements, but Some Challenges Remain. GAO-10-841. Washington, D.C.: September 10, 2010. Supply Chain Security: Feasibility and Cost-Benefit Analysis Would Assist DHS and Congress in Assessing and Implementing the Requirement to Scan 100 Percent of U.S.-Bound Containers. GAO-10-12. Washington, D.C.: October 30, 2009. Supply Chain Security: Challenges to Scanning 100 Percent of U.S.- Bound Cargo Containers. GAO-08-533T. Washington, D.C.: June 12, 2008. Supply Chain Security: U.S. Customs and Border Protection Has Enhanced Its Partnership with Import Trade Sectors, but Challenges Remain in Verifying Security Practices. GAO-08-240. Washington, D.C.: April 25, 2008. Supply Chain Security: Examinations of High-Risk Cargo at Foreign Seaports Have Increased, but Improved Data Collection and Performance Measures Are Needed. GAO-08-187. Washington, D.C.: January 25, 2008.
The U.S. economy is dependent on a secure global supply chain. In fiscal year 2013, approximately 12 million maritime cargo shipments arrived in the United States. Within the federal government, CBP is responsible for administering cargo security, to include identifying “high-risk” maritime cargo shipments with the potential to contain terrorist contraband. GAO was asked to review CBP's disposition of such shipments. This report discusses (1) how many maritime shipments CBP determined to be high risk and the extent to which CBP has accurate data on the disposition of such shipments, (2) the extent to which CBP consistently applies criteria and documents reasons for waiving examinations, and (3) the extent to which CBP ensures its policies on the disposition of high-risk shipments are being followed. GAO analyzed CBP data on maritime shipments arriving in the United States during fiscal years 2009 through 2013. GAO also visited four CBP targeting units selected on the basis of the percentage of maritime shipments they waived, among other factors. From fiscal years 2009 through 2013, less than 1 percent of maritime shipments arriving in the United States were identified as high risk by U.S. Customs and Border Protection (CBP), but CBP does not have accurate data on their disposition (i.e., outcomes). CBP officials (targeters) are generally required to hold high-risk shipments for examination unless evidence shows that an examination can be waived per CBP policy. In particular, targeters at Advance Targeting Units (targeting units)—responsible for reviewing shipments arriving at ports within their respective regions—can waive an examination if they determine through research that (1) the shipment falls within a predetermined category (standard exception), or (2) they can articulate why the shipment should not be considered high risk (articulable reason), such as an error in the shipment's data. GAO found that CBP examined the vast majority of high-risk shipments, but CBP's disposition data are not accurate because of various factors—such as the inclusion of shipments that were never sent to the United States—and that the data overstate the number of high-risk shipments. On the basis of GAO's analyses and findings, CBP is taking steps to correct its data. When determining the disposition of high-risk shipments, CBP's targeting units are inconsistently applying criteria to make waiver decisions and are incorrectly documenting the reasons for some waivers. CBP policy lacks definitions for standard exception waivers. As a result, targeters are inconsistently applying and recording standard exception waivers. Because of these inconsistencies, some targeting units may be unnecessarily holding shipments for examination, while others may be waiving shipments that should be examined. Developing definitions for standard exceptions could help ensure that CBP examines shipments as intended. Further, some targeters at targeting units GAO visited were unaware of the guidance on articulable reason waivers and were incorrectly documenting these waivers. As a result, CBP cannot accurately determine the extent to which articulable waivers are being issued and used judiciously per CBP policy. Updating and disseminating guidance in policy could help ensure targeters correctly document such waivers. CBP has efforts in place, such as self-inspections, to provide oversight of its policies on the disposition (whether examined or waived) of high-risk shipments; however, these efforts are not sufficient. 
For example, the limited sample size of shipments used in self-inspections does not provide CBP with the best estimate of compliance at the national level. In addition, CBP's method for calculating the compliance rate does not accurately reflect compliance because it is not based on the number of shipments sampled. Developing an enhanced methodology for selecting sample shipments, and changing the method for calculating compliance, could improve CBP's estimate of compliance and its ability to identify and correct deficiencies. This is a public version of a sensitive report that GAO issued in November 2014. It does not include details that CBP deemed sensitive security information. GAO recommends, among other things, that CBP define standard exception waiver categories and disseminate policy on documenting articulable reason waivers. Further, CBP should enhance its methodology for selecting shipments for self-inspections and change the way it calculates the compliance rate. The Department of Homeland Security concurred with GAO's recommendations.
Federal agencies and aviation industry stakeholders gather and analyze aviation data for a variety of purposes. Federal agencies gather and analyze aviation data primarily to improve safety. To oversee aviation safety across the national airspace system, FAA maintains data on various aviation sectors, including passenger airlines, air cargo carriers, general aviation, and air ambulance operators. FAA also gathers and analyzes data on industry performance through its inspection and certification programs and uses these data to ensure that the industry complies with its safety regulations. In addition, FAA obtains information on safety events and incidents collected by other federal agencies, including NTSB, NASA, and USDA. The aviation industry gathers quantitative and narrative data on the performance of flights and analyzes these data to increase safety, efficiency, and profitability. Industry stakeholders also maintain historical data on equipment and maintenance issues. These stakeholders are required to report some data to FAA, such as data on accidents, engine failures, and near midair collisions, and they have agreements with FAA and other agencies to share other data voluntarily. The voluntarily shared data include both electronically recorded data from aircraft equipment under the Flight Operational Quality Assurance program (FOQA) and information on violations of federal regulations or on safety events self- reported by pilots, mechanics, and other airmen under three programs— Aviation Safety Action Program (ASAP), Aviation Safety Reporting System (ASRS), and Voluntary Disclosure Reporting Program (VDRP). FAA also recently established the Air Traffic Safety Action Program (ATSAP), modeled after the airlines’ ASAP program. It is a confidential, voluntary reporting system available to FAA’s approximately 17,000 air traffic control personnel, who can use the program to identify and report safety and operational concerns. Table 1 describes the 13 aviation safety databases that we reviewed. For decades, the aviation industry and federal regulators, including FAA, have used data reactively to identify the causes of aviation accidents and incidents and take actions to prevent their recurrence. Since 1998, for example, FAA has partnered with the airline industry through CAST with the goal of continuously improving aviation safety. Over the years, CAST has looked at the causes of past accidents—such as controlled flight into terrain—and various safety events—such as turbulence or runway incursions. CAST analyzes past instances of such accidents and events to identify precipitating conditions and causes. CAST then uses its analysis to formulate an intervention strategy designed to reduce the likelihood of a recurrence and validate the effectiveness of the intervention. According to CAST, its work has helped to decrease commercial airline fatalities— exceeding its goal to reduce fatal commercial accidents by 80 percent by 2007—and is an important aspect of FAA’s efforts to increase aviation safety by sharing and analyzing data. Table 2 provides examples of how FAA and industry have used CAST’s work. (Recent work by CAST to work with FAA’s Aviation Safety Information Analysis and Sharing initiative to develop safety enhancements and mitigate future threats is discussed later in this report.) Besides analyzing data on past safety events to develop intervention strategies, FAA staff perform such analyses to inform changes in agency policies. 
For example, to inform the rule-making requirement that the costs and benefits of a proposed regulation be determined under Executive Order 12866, FAA analysts identified the number of aircraft that could be certified as “light sport” aircraft and were involved in accidents. In some cases, FAA program managers request specific analyses to inform policy changes. For instance, in response to a 2007 recommendation by NTSB and a petition from Hawaiian air tour operators, FAA program managers requested an analysis of aircraft crashes associated with FAA’s requirement for tour aircraft to maintain a 1,500-foot separation from the ground. After analyzing air crashes involving Hawaiian tour aircraft 13 years before and 13 years after the requirement was implemented, FAA concluded that the requirement helped to reduce the number of crashes and significantly improved safety. While FAA will continue to use data to analyze past safety events, it is also working to use data proactively to search for risks and take actions to mitigate them before they result in accidents. FAA’s emphasis is shifting to a proactive approach to data analysis because as accidents have become increasingly rare, less information is available for reactive analyses of their causes. As a result, according to a study of FAA’s safety oversight by a 2008 independent review team, information that can be used to help identify accident and incident precursors, such as voluntarily reported data, has become more critical for accident prevention. In addition, several experts we spoke with said that proactively identifying risks is necessary to maintain the current level of safety and possibly achieve an even higher level of safety in the future. FAA is undertaking this transition in coordination with the international aviation community, working with the International Civil Aviation Organization (ICAO) to adopt applicable global standards for safety management. Senior FAA officials and ICAO agree that effective safety management is data driven and that data are essential to identifying emerging risks. Figure 1 illustrates the type of transition FAA plans as the agency shifts its emphasis to a proactive assessment of emerging safety risks, according to FAA officials. The new technologies and procedures that FAA will implement for NextGen, which are intended to increase the safety, efficiency, and capacity of the national airspace system, could also lead to consequences that have unintended effects on system safety. For example, NextGen changes to landing procedures, which are designed to allow more frequent landings, could reduce congestion in the air and improve fuel efficiency, but might have the unintended effect of increasing congestion and safety risks on airport taxiways. To avoid such unintended consequences, FAA plans, as it improves its ability to integrate data and analyze trends, to model the impact of changes planned for NextGen. To do so, it has begun to develop a baseline of current conditions and then expects to analyze how NextGen changes will affect those conditions, according to a senior FAA official. FAA is in the process of designing tools that will allow it to model the changes. For example, a project called the National Level System Safety Assessment is designed to allow FAA to assess risks across the national airspace system. Currently, FAA assesses risks for specific NextGen procedures and technologies, but cannot model the risks across the national airspace system in a comprehensive manner. 
This project will integrate data on past safety events from a number of FAA offices and external sources to proactively identify risks that might emerge with the introduction of changes planned for NextGen. FAA has begun to obtain some operational data for the project and has contracted with the Volpe National Transportation Systems Center, which will be responsible for integrating airport runway surface data, including surface radar, weather, aircraft, and other data. A senior FAA official told us that although safety assessments had been conducted on individual NextGen technologies, until the agency has finalized this modeling project, it cannot begin systemwide assessments of the safety of NextGen technologies and procedures that are already being deployed, including 700 new navigational procedures that had been deployed as of October 2009. SMS is an integrated, data-driven approach to managing safety risk that FAA expects will help it continuously improve aviation safety. FAA stated that successfully implementing SMS is critical to meeting the challenges of a rapidly changing and expanding aviation system. As stated earlier in this report, FAA’s traditional approach is to analyze data to determine the causes of an accident or incident after the fact. To achieve the next level of safety, FAA says, it now requires a more forward-thinking approach, which SMS provides, to identify systemwide trends and manage emerging risks before they result in incidents or accidents. To identify emerging risks, FAA plans to collect and analyze safety data, and it can then use the results of its analyses to make data-driven decisions about how to address safety risks. Issued in September 2008, FAA’s guidance on implementing SMS explains the importance of data collection and analysis to the execution of SMS. This guidance defines four main components for SMS: safety policy and objectives, safety risk management, safety assurance, and safety promotion. First, safety policy and objectives describe an organization’s requirements and oversight responsibilities for aviation activities. Second, safety risk management describes how an organization will identify hazards and safety risks in aviation operations, including how it will develop rules, regulations, and safety performance measures. Third, the safety assurance component of SMS uses data analyses to discover emerging risks and to model the impact of safety changes. Fourth, safety promotion is an organization’s plan for how training, communication, and dissemination of safety information take place. According to FAA, SMS with these components will enable the agency to provide the air transportation system and the public at large with enhanced safety. FAA’s goal is for the Office of Aviation Safety to have initial operating capabilities in place for SMS by the end of fiscal year 2010. According to FAA officials, these initial operating capabilities include training employees and defining how to apply SMS to the agency’s overall oversight activities. To accomplish this, FAA’s guidance for implementing SMS requires the Office of Aviation Safety, the Office of Airports, and the Air Traffic Organization (ATO) each to develop implementation plans that include schedules, procedures for acquiring and analyzing data, and measures to track implementation progress. ATO has issued its SMS implementation plan and has also created an SMS manual that provides specific operational information and guidance regarding the daily activities of ATO employees.
In addition, it met its target to implement its initial SMS operating capabilities in March 2010. The Offices of Airports and Aviation Safety have yet to issue their implementation plans. However, the Office of Aviation Safety has issued guidance and safety documents that provide a general discussion of SMS, but they do not include a schedule of specific activities or time frames for completion as called for in the agencywide SMS guidance. Senior FAA officials told us Aviation Safety has formed working groups and expects those groups that are charged with defining the various SMS activities to meet the guidance requirements. In addition, according to FAA officials, various offices within Aviation Safety will be responsible for implementing processes to fulfill SMS requirements. Additionally, FAA’s guidance for implementing SMS requires the formation of an agencywide SMS committee, which was chartered in July 2009. The committee includes an executive council—whose members are the FAA associate administrators from the offices of Aviation Safety, Airports, and Commercial Space Transportation, and the Chief Operating Officer of ATO—and a committee composed of SMS professionals from each of those FAA offices. Chaired by the Office of Aviation Safety, the agencywide committee is tasked with recommending policy and guidance to the implementing FAA organizations and management. Because SMS relies on data to identify emerging risks, FAA has an effort under way to enhance its access to industry data and improve its analysis capability. The ASIAS initiative is a collaborative government-industry effort to share and analyze data. The Office of Aviation Safety’s SMS implementation plan reiterates that data exchanges between ASIAS and other sources are crucial elements of emerging risk analysis. FAA’s draft plan for ASIAS notes that this effort will require access to existing and previously unattainable data sources, enhanced analytical methodologies, and technical advancements to support safety risk analysis that are not achievable with current databases and analytical strategies. The draft plan would cover ASIAS activities through 2022. FAA did not confirm when a final plan would be completed. While FAA has issued agencywide guidance on implementing SMS and has some efforts such as ASIAS under way, it does not have a way to measure or specific times to indicate full implementation. FAA officials told us that the current efforts would provide a foundation for the full implementation of SMS. But without a clear description of the activities to be completed and time frames for their completion, it may be years before SMS is fully implemented and its benefits are realized. In commenting on a draft of this report, FAA officials noted that even with a clear description of the activities to be completed and time frames for their completion, it will be years before SMS is fully implemented and its benefits are realized. We agree with FAA and note that specific time frames establish expectations for FAA’s implementation of SMS and provide a means of accountability for meeting those expectations. FAA has efforts under way to address two key challenges to using data more effectively to manage risk. First, data are not coded to permit electronic integration, analysis, and sharing. Second, data from two voluntary reporting programs lack identifying details needed for some types of analysis, and the data do not remain available for long-term analysis. 
FAA and the other federal agencies that gather the data FAA uses for analysis lack standard definitions, common identifiers, and common classification schemes for both quantitative and narrative data. For example, according to an NTSB official, databases used in ASIAS were not designed for searches of specific products, manufacturers, or airlines. As a result, FAA has had to develop common identifiers and standard classification schemes so that automation can be used to quickly integrate and analyze data from multiple external and internal sources. In addition, narrative information, in particular, poses challenges. Narrative reports from NTSB investigators and FAA inspectors, as well as narrative reports submitted voluntarily by pilots, air traffic controllers, mechanics, and others, constitute a rich source of information about safety events, but could not be coded to permit automated analysis until the ASIAS text analysis process was developed. Consequently, analysts are now able to automate coding, integration, and analysis of data from different databases—a time-consuming and costly process when manually performed. According to a senior FAA official, this automated analysis process is unique and pathbreaking and will allow for more efficient safety analyses. When FAA studied wrong runway departures, for example, data from six different databases—from NTSB, NASA, and its own offices— were extracted for analysis. FAA compared the cost and time needed to extract and integrate data from multiple databases using an automated versus a manual process for coding the data. FAA found that using the automated coding process would cost about $6,000 and take 52 hours, while the manual process was estimated to cost over $750,000 and take more than 6,500 hours to complete. However, NASA noted that by using a “data mapping” approach, it can conduct an automated analysis of ASRS and other data sources in about 2 to 8 hours, depending on the complexity of the analysis. Details of reported incidents are redacted from ASAP and FOQA data before an FAA contractor analyzes the data. These details include the date, time and flight number, and the names of the carrier or individuals involved. We previously reported that these identifying details are redacted to safeguard the participants in ASAP from enforcement or disciplinary action and participants in FOQA from public release of the data. Additionally, ASAP and FOQA data are retained for only 3 years. Without identifying details and without maintaining data for longer periods, opportunities for some analyses are limited. To allow the contractor to perform more detailed analyses for FAA, the agency and the industry have agreed on a process through which the ASIAS Executive Board provides permission for the contractor to perform a specific, defined analysis and to use data with the identifying details needed for that particular analysis. At the conclusion of the analysis and with the approval of the ASIAS Executive Board, a final report containing detailed results is given to CAST to develop safety enhancements to mitigate the identified safety issues. As of March 2010, three such analyses had been completed. 
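As a rough illustration of the general idea behind automated coding of narrative reports, the sketch below assigns category codes to a report based on keywords so that records from different databases could be grouped under a common scheme. The categories and keywords are assumed for illustration; ASIAS's actual text-analysis process is more sophisticated.

    # Toy keyword-based coder; categories and keywords are assumed for illustration.
    CATEGORIES = {
        "wrong_runway_departure": ["wrong runway", "departed runway"],
        "runway_incursion": ["incursion", "crossed the hold short"],
        "turbulence": ["turbulence", "severe chop"],
    }

    def code_report(narrative: str) -> list[str]:
        """Return the category codes whose keywords appear in the narrative."""
        text = narrative.lower()
        return [cat for cat, keywords in CATEGORIES.items()
                if any(k in text for k in keywords)]

    print(code_report("Aircraft lined up and departed runway 22 instead of 27."))
    # ['wrong_runway_departure']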
The Department of Transportation (DOT) Inspector General (IG) and the 2008 independent review team found that FAA could improve its use of ASAP data by analyzing these data for national trends, noting that by not doing so, the agency is missing opportunities to help reduce recurrences of safety events or to identify national patterns indicative of risks. FAA concurred with this finding and said that it already has the ability to conduct national trend analyses. The IG reported, however, that FAA receives quarterly summaries of ASAP information from carriers of how many ASAP reports they received each month, and these summaries do not provide sufficient detail about ASAP events or corrective actions. According to the IG, these reports generally contain information about the number but not nature of ASAP submissions for that quarter and any resulting safety enhancement. While FAA’s contractor loses access to ASAP reports after 3 years, about 62 percent of ASAP reports appear in ASRS, along with other reports voluntarily submitted by industry personnel, according to a NASA official. NASA, which maintains ASRS for FAA, has access to the identifying details in the submitted reports for no more than 90 days so it can follow up as necessary to clarify any questions that the reports raise. After that time, NASA removes the identifying details and incorporates ASAP and ASRS reports into ASRS. ASRS is accessible to the public, and NASA performs special analyses of ASRS data for FAA at its request. NASA also publishes an online newsletter every month with summary statistics and examples of a few reports selected to illustrate certain safety issues, such as general aviation pilots’ use of new cockpit technology, and issues safety alerts regarding significant safety events identified by ASRS reports. However, NASA does not comment on these reports or respond to any questions or issues raised by the authors, since its role is to act as an “honest broker” as it collects and analyzes the data. By comparison, in the United Kingdom (UK), an advisory board reviews a selection of reports received, commenting on the appropriateness of the action taken and answering questions. These responses are useful to the aviation community because they communicate on commonly experienced safety issues. In commenting on a draft of this report, NASA indicated that the UK model would not be practicable for ASRS data because the UK receives only about 300 reports annually compared with about 50,000 reports for ASRS. However, NASA noted that in the past it had an ASRS advisory committee that had provided a forum for FAA and industry to discuss corrective action. The agency acknowledged the need to reestablish this committee. In addition, NASA noted that it receives feedback on its safety alerts, which indicated positive corrective actions for about 60 percent of the alerts. As part of its efforts to develop initial SMS capabilities, FAA expects to address how data and analysis will help it identify emerging aviation safety risks. Specifically, FAA has a draft plan—which it calls a safety management plan—that defines the agency’s analytical requirements and the role of safety analysis in improving safety, especially as NextGen is implemented. FAA’s draft plan—which covers the risk management and safety assurance components of SMS discussed earlier in this report— recognizes the agency’s future need for data and analysis, but does not specify requirements for them. 
For example, the draft safety management plan does not define the level of accuracy and completeness needed for the data, indicate what metrics and processes FAA will use to assess the data, or identify any specific data. According to senior FAA officials, as of February 2010, there was no specific date for finalizing the draft plan. To meet its data challenges and develop needed analytical approaches, FAA will also have to identify staff with both aviation operational experience and statistical expertise to effectively analyze aviation safety data in the future. While aviation safety experts we interviewed were generally satisfied with the qualifications of current FAA analysts, they expressed concerns about FAA’s capacity to meet future needs. Several experts we interviewed said FAA needs additional qualified analysts, but the draft safety management plan does not mention staffing requirements for implementing FAA’s analysis strategy. In our 2009 report on FAA’s human capital system, we made several recommendations to FAA on how it can help ensure the continued hiring, recruitment, and retention of staff needed to operate the national airspace system. According to FAA, it has not yet determined how many additional analysts it will need; however, the Office of Accident Investigation and Prevention has been approved to hire 11 additional analysts in fiscal year 2010. Without a plan that includes data and analysis requirements and staffing needs, the agency will not be able to link the resources it needs to the data capabilities it requires for its risk-based approach. FAA’s voluntary reporting programs—ASAP, ASRS, FOQA, and VDRP—generate safety information that FAA does not identify through other means. Whereas data from other sources are derived from inspections, audits, and other agency reports, these voluntary programs rely solely on cooperation between FAA and industry personnel. To obtain voluntarily reported information that can be used to improve safety, FAA agrees not to take enforcement action against carriers or industry personnel who self-report violations through ASAP, ASRS, and VDRP. Similarly, carriers that operate ASAPs agree not to take disciplinary action against personnel who self-report violations of FAA regulations or carriers’ operating procedures. In both cases, this agreement holds only for actions that are reported in a timely manner, were not intentional or criminal, did not involve drugs or alcohol, did not result in accidents, and have not already been detected by FAA. Conversely, personnel who do not voluntarily report violations within the specified time face the threat of enforcement or disciplinary action if the violations are discovered later. This combination of promised immunity for self-reporting and threat of enforcement and disciplinary action for remaining silent creates an incentive for industry personnel to participate in the voluntary reporting programs. Through ASAP, ASRS, and VDRP, airspace users, including air carriers, air operators, and employees, self-report safety events and violations of their operating certificates or company procedures. For example, under ASAP, an employee reports an incident or event, which an event review committee, composed of representatives from FAA, the carrier, and the applicable employee group, then reviews.
The ASAP event review committee assesses a report to determine (1) if it meets the criteria previously mentioned for inclusion in the program and (2), if included, what follow-up actions, enhancements, or mitigations should be implemented to address the safety concern. While ASAP, ASRS, and VDRP rely on individuals or the air carrier to file reports, FOQA data are generated by electronic equipment on aircraft, which continuously records more than a thousand parameters of data for individual flights. Vendors collect the data from carriers and can then, on their behalf, analyze the data and transfer files to the carriers’ internal analysis teams or forward the data files to an FAA contractor for inclusion in ASIAS and subsequent analysis if a carrier is partnering with FAA. The FAA contractor also receives ASAP reports approved by event review committees. The contractor aggregates and analyzes the data from participating carriers and, starting in 2010, will brief the ASIAS Executive Board on a quarterly basis. These briefings will consist of status reports on directed studies, the number of FOQA and ASAP records, and industry benchmarks that will enable carriers to compare their individual safety performance relative to national trends and prioritize their internal safety initiatives. According to FAA, 28 carriers participate in ASIAS. All ASIAS participants share ASAP data, and 13 carriers also share FOQA data. FAA estimates that airlines contributing ASAP data to ASIAS account for 80 percent of the flights of all commercial airlines with FAA-approved ASAP programs. FAA and industry officials, as well as experts we talked to, agreed that voluntarily reported data are critical to improving aviation safety. Moreover, according to ICAO, FAA officials, and safety experts we interviewed, voluntary reporting by operational personnel is a cornerstone of SMS, because, as ICAO has stated, operational personnel are in the best position to report the existence of safety hazards and to attest to what works and does not work during everyday operations. In 2007, NTSB reported that FOQA and ASAP programs are relevant to the safety assurance component of SMS because they provide a direct means for air carriers to evaluate the quality of their training and operations. Further, these programs can also be used in the safety risk management component of SMS because they can help identify emerging risks. In addition, FAA and industry officials agreed that voluntary programs help promote a healthy reporting culture and an increased awareness of safety by industry personnel. Furthermore, FAA officials told us they believe that the voluntary programs, such as ASAP, gather safety information that would not otherwise be discovered 95 percent of the time. In addition, in commenting on a draft of this report, NASA noted that voluntarily reported data are valuable in learning why an event occurred. Industry officials told us how their companies have used voluntarily reported data to implement changes that respond to safety concerns. For example, one airline analyzed ASAP data to decrease the number of unstable approaches. Safety officials from another airline told us that ASAP reports and FOQA data helped them to identify potential pilot issues, suggest additional training, and adjust processes and checklists based on human factors issues. They further commented that if pilots do not self-disclose potential safety issues, airlines may be limited in their ability to identify emerging safety trends.
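The report does not describe how the ASIAS contractor actually computes the industry benchmarks mentioned above; the sketch below illustrates the general concept under stated assumptions, comparing one carrier's rate of a monitored event against an aggregate rate across participating carriers. The carrier names, event counts, and the "unstable approach" metric are hypothetical.

```python
# Hypothetical per-carrier FOQA-style summary data: monitored event counts
# and the number of flights flown over the same period.
carrier_data = {
    "Carrier A": {"unstable_approaches": 42, "flights": 18_000},
    "Carrier B": {"unstable_approaches": 15, "flights": 9_500},
    "Carrier C": {"unstable_approaches": 60, "flights": 30_000},
}

def rate_per_1000(events, flights):
    """Express an event count as a rate per 1,000 flights."""
    return 1000.0 * events / flights

# Aggregate (benchmark) rate across all participating carriers.
total_events = sum(d["unstable_approaches"] for d in carrier_data.values())
total_flights = sum(d["flights"] for d in carrier_data.values())
benchmark = rate_per_1000(total_events, total_flights)

print(f"Industry benchmark: {benchmark:.2f} unstable approaches per 1,000 flights")
for name, d in carrier_data.items():
    own_rate = rate_per_1000(d["unstable_approaches"], d["flights"])
    position = "above" if own_rate > benchmark else "at or below"
    print(f"{name}: {own_rate:.2f} per 1,000 flights ({position} benchmark)")
```

A comparison of this kind is only as meaningful as the pool of contributing carriers, which is one reason broad participation in the voluntary programs matters.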
Because of the importance of voluntarily reported data to proactive safety analysis, NTSB and the DOT IG have also recommended that FAA further encourage participation in voluntary programs. For example, in 2010, NTSB recommended that FAA require the establishment of FOQA programs by carriers regulated under part 121. As another example, the DOT IG reported in May 2009 that FAA was not realizing the full benefits of ASAP and recommended that the agency develop a central database for all air carriers’ ASAP reports to be used for nationwide trend analysis. While voluntarily reported data have been used to enhance safety, they also have some limitations. First, the completeness of the data is unknown, since reporting is voluntary, and there is no way to know how many violations or safety situations are not reported. For example, according to FAA, factors such as an individual’s awareness of ASRS, motivation to report a situation, and perception of an incident’s severity may influence the decision to submit an ASRS report. Second, the completeness of the data is further limited because participation varies among airlines, with 73 airlines participating in ASAP and 36 participating in FOQA. Third, as discussed later in this report, inadequate data quality controls can also limit the completeness of the data. For example, controls may not be adequate to ensure that the data entered into a database have been accurately compiled or processed. Fourth, the accuracy of voluntarily reported data cannot always be verified. Voluntarily reported data are subjective and are not always accompanied by supporting documentation, such as statistics, measurements, or other quantifiable information related to the reported events. According to an FAA analyst, a variety of factors can influence the accuracy of the data, including the reporter’s experience, visibility conditions, the duration of the event, and any trauma experienced by the reporter. FAA notes, for example, that even senior pilots’ estimates of how far aircraft descend during encounters with turbulence often differ considerably from the actual distances recorded on aircraft flight data recorders. Acknowledging this limitation, NASA notes that the information is nonetheless valuable, since a reporter is providing information on how he or she perceived the situation, which in large part determines his or her reactions. Fifth, electronically collected data from FOQA also have limitations. Vendors that process FOQA data explained that software bugs and inaccurate sensors can affect data results. To mitigate these problems, vendors review anomaly reports and validate data prior to analysis. In an effort to address the limitations of voluntarily reported data, NASA developed a survey methodology project—NAOMS—in 1997 to systematically collect information on safety events by conducting telephone interviews with randomly selected airspace users such as pilots. In our 2009 assessment of the survey, we concluded that NAOMS was a successful proof of concept and that a similar project, adequately funded and appropriately planned, could enhance FAA’s safety knowledge. However, FAA maintained that FOQA provides better data, offering precise rates of occurrence for multiple parameters collected by flight data recorders, which could obviate the benefits of NAOMS data. Nonetheless, we concluded that the NAOMS survey could be useful in complementing other databases, such as ASRS.
The survey data, when properly analyzed, could be used to call attention to low-risk events that could serve as potential indicators for further investigation in conjunction with other data sources. Furthermore, in commenting on a draft of this report, both NTSB and NASA agreed on the usefulness of a survey similar to NAOMS in complementing other data. NASA pointed out that NAOMS-type data could provide the data for trends and explain “what” is happening in the system while ASRS provides “why” it is happening. NTSB further noted that its investigations of numerous serious incidents and accidents found that FOQA data gave no indication of underlying problems. Although FAA, carriers, and experts we interviewed agreed that voluntarily reported data are an important source of information for understanding and addressing safety issues, some carriers and industry personnel are not participating in FAA’s ASAP and FOQA voluntary reporting programs. According to airline and FAA officials, two factors have primarily affected participation: (1) the fears of employees that their employers will take disciplinary action to address self-reported violations and (2) the costs to the airlines of purchasing and installing FOQA equipment and analyzing the data. Specifically, we found that, partly because of employees’ fears of disciplinary action, from 2006 through 2008, four large air carriers and their pilot unions suspended their ASAP. According to safety officials at one airline, pilots had raised concerns about letters of reprimand or unpaid time off that directly resulted from ASAP reports. Pilot unions and air carriers that we spoke with agreed on the importance of confidentiality. However, pilot union officials we interviewed also expressed concern about whether airlines were ensuring the confidentiality of ASAP reports and suggested that, for an airline’s safety culture to improve, pilots, in particular, must be able to report certain events without fear of reprisal. Despite these concerns, as of June 2009, the four carriers and pilot unions that had suspended their ASAP had restarted or were restarting their participation in the program. Additionally, according to industry officials and experts we interviewed, the costs of purchasing and installing equipment and analyzing data have deterred participation in FOQA, especially for smaller carriers. Several large carriers we interviewed said more than 50 percent of their fleets already have FOQA equipment and they plan to expand their fleets’ capability by retrofitting aircraft or ensuring that new aircraft include the equipment. By contrast, officials from smaller carriers were concerned about costs and estimated that installing FOQA equipment would cost an average of $12,000 for each new aircraft and up to $35,000 for retrofits of older aircraft models. As a result of cost concerns, according to airline officials, only 11 of 65 smaller carriers have approved FOQA programs, and according to FAA, an additional small carrier FOQA program is pending. Officials from these smaller carriers said that incentives to cover equipment costs, which are not currently available, would help increase participation. FAA officials noted that as carrier fleets age, newer replacement aircraft will already be fitted for FOQA equipment and, therefore, costs for participating will continue to decrease. However, the life span of an aircraft is usually at least 30 years for large carriers, so the transition to a fully equipped fleet could take decades.
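Using the per-aircraft cost estimates cited by the smaller carriers above, a rough fleet-level calculation shows why equipage can be a significant outlay for a small operator. The fleet size and mix below are hypothetical; only the unit costs come from the figures quoted in this report.

```python
# Equipment cost estimates cited by smaller carriers in this report.
COST_NEW_AIRCRAFT = 12_000   # average cost per new aircraft
COST_RETROFIT = 35_000       # upper estimate per older aircraft

# Hypothetical small-carrier fleet: mostly older aircraft needing retrofits.
new_aircraft, older_aircraft = 5, 20

total = new_aircraft * COST_NEW_AIRCRAFT + older_aircraft * COST_RETROFIT
print(f"Estimated FOQA equipage cost for this fleet: ${total:,}")  # $760,000
```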
Currently, large carriers are the principal participants in FOQA and ASAP, and they provide service for the majority of passengers on domestic and international flights. Nonetheless, we found that 4 of the 25 large carriers do not have active FOQA programs and 1 large carrier does not have an ASAP. According to FAA officials, an additional 4 carriers have FOQA programs pending. To the extent that operators do not participate in the programs, they do not obtain information that they could use to monitor and improve the safety performance of their aircraft, related equipment, and personnel, and to the extent that they do not partner with FAA, opportunities to identify nationwide safety trends and improvements are lost. To encourage greater participation in FOQA and ASAP, FAA provides training to smaller carriers on how to develop versions of both programs that do not require as much capital investment but do allow the carriers to collect unique safety data. However, FAA lacks carrier-specific information on why air carriers are not participating in voluntary reporting programs. Having such information would allow FAA to identify further actions to encourage participation. FAA’s ability to monitor and manage risk for certain industry sectors, such as general aviation, air ambulance operators, and air cargo carriers, is limited by incomplete data. While FAA collects data on actual flight hours and numbers of departures for large air carriers that operate under part 121 and for scheduled part 135 flights on aircraft with fewer than 10 seats, it does not collect actual flight activity data for smaller air carriers that provide on-demand service, such as air taxis and air ambulances, and general aviation operators—sectors that have had a higher number of fatal accidents in recent years than large air carriers. For instance, in 2008, large air carriers providing scheduled service had 20 accidents, none of which were fatal. By comparison, during that same year, there were 1,559 general aviation accidents, including 275 fatal accidents involving 495 fatalities. (Fig. 2 shows the trends in general aviation accidents.) Without information on the number of general aviation flights, FAA cannot compare the safety performance among industry sectors or assess trends. Additionally, the number of accidents for air ambulance and air cargo operators points to safety vulnerabilities in these areas. From 1998 through 2008, the air ambulance sector averaged 13 accidents per year. While the total number of air ambulance accidents peaked at 19 in 2003, the number of fatal accidents peaked in 2008, when 9 fatal accidents occurred. Without data on the number of flights or flight hours, FAA and the air ambulance industry are unable to determine whether the increased number of accidents has resulted in an increased accident rate, or whether it is a reflection of growth in the industry. According to FAA, it annually surveys a sample of potentially active general aviation aircraft, and in the latest survey in 2008, it surveyed all air ambulance operators. However, we noted in our April 2009 testimony that fewer than 40 percent of the air ambulance operators responded, raising questions about the reliability of the information collected. Similarly, our review of air cargo safety found that small cargo carriers had more accidents and fatal accidents than large cargo carriers, but the available information was not sufficient to assess the significance of this difference.
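The rate calculation that the missing activity data would support is straightforward, as the sketch below shows. The 2008 accident counts are the ones cited above, but the flight-hour figures are hypothetical placeholders; the absence of reliable exposure data for general aviation is exactly the gap the report describes.

```python
# 2008 accident counts cited in this report.
accidents = {"part_121_scheduled": 20, "general_aviation": 1_559}

# Exposure (denominator) data: both values below are illustrative placeholders,
# not FAA figures -- without trustworthy flight-hour data, the resulting rates
# cannot actually be computed or compared across sectors.
flight_hours = {"part_121_scheduled": 19_000_000, "general_aviation": 22_000_000}

for sector, count in accidents.items():
    rate = 100_000 * count / flight_hours[sector]
    print(f"{sector}: {rate:.2f} accidents per 100,000 flight hours")
```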
We found that smaller air cargo carriers averaged 29 accidents per year from 1997 through 2008, while large cargo carriers averaged 8 accidents each year during this period. However, a lack of operations data for small cargo carriers makes it difficult for FAA to prioritize risks and better target safety improvements and oversight to the areas of highest risk. To address the lack of data, we previously recommended that FAA identify the data necessary to better understand the air ambulance industry and develop a systematic approach for gathering and using these data. Similarly, we recommended that FAA gather comprehensive and accurate data on smaller air cargo operations (those covered under part 135) to gain a better understanding of air cargo accident rates and better target safety initiatives. FAA agreed with both recommendations, but has not fully addressed either. In response to our recommendation on air ambulance data, FAA has surveyed all helicopter air ambulance operators to collect flight activity data. However, as mentioned earlier in this report, FAA’s survey response rate was low, raising questions about whether this information can serve as an accurate measure or indicator of flight activity. FAA plans to evaluate ways to collect the air cargo data over the long term. FAA, along with the international aviation community, recognizes that high-quality data—that is, reliable, valid data—are essential to the effectiveness of a data-driven approach to safety, such as SMS. To help ensure data quality, FAA has issued guidance, established procedures, and implemented controls. For example, FAA has issued an order that establishes an agencywide policy on data management. This policy applies to all information from FAA and other sources used to perform the agency’s mission. In accordance with the data management order, FAA’s Office of Aviation Safety has established a data management framework that includes a four-step process for importing data from other FAA offices and external sources. This process includes (1) data acquisition—obtaining information from various data owners; (2) data standardization—validating data by comparing a new data set with previous data sets to identify inconsistencies; (3) data integration—translating data values into plain English and correcting them as needed; and (4) data loading—importing data into the agency’s own systems. This four-step process applies to 10 of the 13 databases we reviewed—8 maintained by FAA offices and 2 maintained by NTSB and USDA. (The process applies to 2 of the 4 voluntary reporting databases.) NTSB and USDA also have data quality assurance processes that apply to their databases. For example, NTSB conducts annual reviews of aircraft accident data, and according to USDA, airport managers and wildlife biologists are asked to check data from their respective airports and report errors. In addition, to help ensure the quality of its data, FAA applies various quality controls, such as validation and verification processes, to better ensure accuracy and completeness. We have identified some standard quality controls that an agency should employ to achieve these high-quality results. For example, agencies should have managers review data, have procedures in place to verify that data are complete and accurate, and correct erroneous data. We assessed 12 databases against these standard quality controls and found that the extent to which such controls were applied varied. (See fig. 3.)
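A minimal sketch of the four-step import process described above is shown below. The field names, code values, and plain-English translations are hypothetical and stand in for whatever schemas FAA's offices actually use; the intent is only to make the acquisition, standardization, integration, and loading steps concrete.

```python
# Hypothetical external records, e.g., as acquired from another agency's database.
raw_records = [
    {"event_id": "A-1", "phase_code": "APPR", "damage_code": "M"},
    {"event_id": "A-2", "phase_code": "TOFF", "damage_code": "N"},
    {"event_id": "A-2", "phase_code": "TOFF", "damage_code": "N"},  # duplicate record
]

# Hypothetical code-to-plain-English mappings used during integration.
PHASE_NAMES = {"APPR": "approach", "TOFF": "takeoff"}
DAMAGE_NAMES = {"N": "none", "M": "minor", "S": "substantial"}

def acquire():
    """Step 1: obtain records from the data owner (stubbed here)."""
    return list(raw_records)

def standardize(records):
    """Step 2: validate the new data set, e.g., drop records whose identifier
    has already been seen."""
    seen, clean = set(), []
    for record in records:
        if record["event_id"] not in seen:
            seen.add(record["event_id"])
            clean.append(record)
    return clean

def integrate(records):
    """Step 3: translate coded values into plain English."""
    return [
        {
            "event_id": record["event_id"],
            "flight_phase": PHASE_NAMES.get(record["phase_code"], "unknown"),
            "damage": DAMAGE_NAMES.get(record["damage_code"], "unknown"),
        }
        for record in records
    ]

def load(records, warehouse):
    """Step 4: import the cleaned, translated records into the local system."""
    warehouse.extend(records)

warehouse = []
load(integrate(standardize(acquire())), warehouse)
print(warehouse)
```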
While NTSB’s aviation accident and incident database (NTSB), VDRP, and USDA’s wildlife strike database applied, at least to some extent, all five of the quality control activities we considered for this analysis, the remaining 9 databases lacked one or more quality control activities. In addition, we found that all of the databases we reviewed had procedures in place, fully or to some extent, to validate and edit data to help ensure that accurate data are entered into electronic systems and to help ensure that erroneous data are identified, reported, and corrected. To the extent that the databases lack various controls, FAA lacks assurance that the information it uses for oversight is accurate and complete. While the databases we reviewed varied in the extent to which they had standard quality controls, FAA has other data quality controls in place for some databases that we consider good practices for handling data, as shown in table 3. Furthermore, other data quality controls apply to the voluntary reporting systems we reviewed. For example, as previously discussed, an event review committee at each participating carrier is tasked with reviewing and analyzing reports submitted under ASAP. This committee determines whether such reports qualify for inclusion in the program, identifies and proposes solutions for actual or potential problems with information contained in the reports, and annually reviews the ASAP database to determine whether corrective actions have been effective in preventing or reducing the recurrence of targeted safety-related events. For ASRS, NASA officials told us that each individual ASRS report is reviewed by two expert analysts within 3 days of receipt. Each report captures data for seven criteria and data fields, which are screened to ensure accuracy. The analysts also evaluate the database to ensure that publicly released ASRS data do not include information that might identify the reporter. In addition, for FOQA, vendors have quality assurance procedures in place. For instance, one vendor’s procedures include automated checks and tests of each flight data file to detect parameter problems (for example, anomalies in the flight data attributable to faulty sensors), reports of anomalies created for the affected airline, and manual reviews of any data indicating warning-level risk events. These controls are designed to monitor the reliability and validity of FOQA data and to identify technical problems that affect data quality and need to be corrected. However, according to the vendor, some data will always be missing because of data-recording equipment failures or lost flight data cards, but such missing data do not affect the statistical validity of the large FOQA data set. FAA is taking steps to address data weaknesses identified by its analysts, the DOT IG, and us. For example, FAA officials told us that to mitigate problems from external data sources, they combine data from various sources to validate analysis results. According to FAA analysts, they typically combine AIDS data with wildlife strike, ASRS, or PDS data using a manual process to verify study findings. As another example, we previously reported the importance of aggregating data from multiple sources to understand icing-related incidents. We reported that the AIDS database included 200 icing-related incidents involving large commercial airplanes that occurred from 1998 through 2007.
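The vendor's automated checks are described only at a high level above. As an illustration of what parameter plausibility checks might look like, the sketch below flags out-of-range or missing values in recorded flight data; the parameter names, plausible ranges, and sample values are hypothetical.

```python
# Hypothetical plausibility ranges for a few recorded flight parameters.
PARAMETER_RANGES = {
    "pressure_altitude_ft": (-1_000, 45_000),
    "indicated_airspeed_kt": (0, 450),
    "pitch_deg": (-30, 30),
}

def check_flight_file(samples):
    """Return anomaly descriptions for samples that fall outside plausible ranges,
    for example values produced by a faulty sensor."""
    anomalies = []
    for i, sample in enumerate(samples):
        for name, (low, high) in PARAMETER_RANGES.items():
            value = sample.get(name)
            if value is None:
                anomalies.append(f"sample {i}: {name} missing")
            elif not low <= value <= high:
                anomalies.append(f"sample {i}: {name}={value} outside [{low}, {high}]")
    return anomalies

# Two hypothetical one-second samples from a flight data file.
flight_samples = [
    {"pressure_altitude_ft": 31_000, "indicated_airspeed_kt": 280, "pitch_deg": 2.0},
    {"pressure_altitude_ft": 95_000, "indicated_airspeed_kt": 281, "pitch_deg": 2.1},  # bad sensor
]

for finding in check_flight_file(flight_samples):
    print(finding)
```

In practice, findings like these would be compiled into an anomaly report for the affected airline, with warning-level events routed for manual review, in the spirit of the vendor procedures described above.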
During this same time period, ASRS received over 600 icing- and winter weather-related incident reports involving large commercial airplanes. These incidents revealed a variety of safety issues such as runways contaminated by snow or ice, ground deicing problems, and in-flight icing encounters. This suggested that risks from icing and other winter weather operating conditions may be greater than indicated by the AIDS database. FAA analysts also communicate with other data providers and experts about quality concerns or sometimes make independent corrections to data. For example, one FAA official told us that analysts communicate with NTSB to report incorrect data in a field and then rely on NTSB to correct the data in its database. FAA analysts said that they retrieve data from public Web sites and then collaborate with subject matter experts to identify and correct any errors in those data. They make such corrections based on their knowledge of the data’s reliability and their own expertise in working with such data. Analysts also said they use information from the narrative sections of a report to correct data fields. In addition, the IG has identified weaknesses in the quality of specific data, which FAA is working to address. For instance, according to the IG, ATOS data are inconsistent and incomplete because the database has undergone multiple revisions since it was introduced in 1998 and some data fields have changed from one year to another. During these revisions, some data have been lost. Though designed to improve ATOS’s value and usability as an inspection tool, the revisions limit opportunities for analysis of long-term trends to the extent that data fields have changed over time. The revisions do not, however, affect FAA’s ability to analyze the data at a particular point in time. In addition, the process for reporting inspection findings is time-consuming and creates an incentive for inspectors to underreport their inspection results. To report, inspectors must complete a Yes/No checklist and, for every No check, provide a narrative explanation. According to the 2008 independent review team, inspectors have an incentive to check Yes so they can complete their reports in a timely manner. Consequently, the system may underreport problems that inspectors have identified but not taken the time to report. The IG has also found, and FAA agreed, that OEDS has some missing and incorrect data on operational errors and pilot deviations because personnel have intentionally or unintentionally misclassified these events. Such misclassification is problematic because it can lead to errors in FAA’s assessment and reporting of how well the agency is meeting its annual performance targets for operational errors and pilot deviations. In 2007, the IG investigated operational errors at the Dallas-Fort Worth terminal radar approach control facility and found that FAA air traffic managers had intentionally misclassified operational errors as either pilot deviations or nonoccurrences. On the basis of this finding, FAA agreed with the IG’s recommendation that the agency establish a follow-up mechanism to ensure compliance with guidance for investigating pilot deviations. Finally, over the years, we have identified weaknesses in the quality of aviation safety data that hinder FAA’s ability to oversee the industry. In response, the agency has taken steps to address many of the problems that we have identified.
For example, in our 2007 review of runway safety, we found that FAA’s categorization of the severity of runway incursions involves a level of subjectivity, raising questions about the accuracy of the data. We reported that an internal FAA audit of 2006 runway incursion data found that the subjectivity of the severity classifications has the potential to affect the accuracy of the classifications. We also found that FAA did not systematically collect data on the number of runway overruns that do not result in damage or injury that could be used for analytical purposes to study trends in and causes of these incidents. In July 2009, FAA indicated that it was working to establish procedures that will ensure that all runway overruns and other excursions are reported. Aviation safety data are critical to FAA’s safety oversight and its planned implementation of SMS. To its credit, FAA has taken steps to help ensure the quality of the data it uses, such as implementing quality controls to help ensure that errors are identified, reported, and corrected, but these procedures are not applied consistently across all databases. Although FAA is developing a plan that will address how data fit into its new oversight approach, that plan lacks a description of the data that will be required to conduct proactive data analyses, an inventory of the skills personnel will need to perform such analyses and help ensure data quality, and a description of the steps needed to address continuing data quality problems. Unless the plan links FAA’s data requirements and staffing needs to the analyses that will drive its proactive safety management system and addresses the agency’s data quality problems, available data may not be as reliable and useful as they could be to support SMS. While NextGen technologies and procedures are intended to increase the safety, efficiency, and capacity of the national airspace system, their introduction could have unintended effects on system safety if not done in a comprehensive manner. As FAA improves its ability to integrate and analyze data from multiple sources, it plans to increase its capacity to model the impact of NextGen changes and to identify and manage risks. Because some NextGen changes are already taking place, it is urgent that FAA move with all deliberate speed to advance its analytical capability. The data that FAA obtains through voluntary reporting programs afford insights into safety events that are not available from other sources and are critical to improving aviation safety, but participation in these programs has been limited by concerns about the impact of disclosure and, especially in the case of smaller carriers, by cost considerations. Efforts such as the training FAA provides to smaller carriers on how to develop programs that require less capital investment have the potential to increase participation and improve safety. However, without carrier-specific information on why air carriers are not participating in these programs, FAA cannot determine if its efforts to increase participation are sufficient.
To help improve and expand FAA’s capability to use data for aviation safety oversight, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following four actions: develop and implement a comprehensive plan that addresses how data fit into FAA’s implementation of a proactive approach to safety oversight and ensure that this plan fully describes the relevant data challenges (such as ensuring data quality and continued access to voluntarily reported safety data), analytical approaches, and staffing requirements and integrates efforts to address them; given the importance of high-quality data, extend standard quality controls, as appropriate, to the databases that support aviation safety oversight to ensure that the data are as reliable and valid as possible; proceed with all deliberate speed to develop the capability to model the impact of NextGen changes on the national airspace system and manage any risks emerging from these changes; and systematically identify the reasons that carriers are not participating in voluntary reporting programs, such as through a survey, and identify and implement further steps to encourage greater program participation, especially by smaller carriers. We provided copies of a draft of this report to DOT, NASA, USDA, and NTSB for their review and comment. DOT agreed to consider our recommendations. DOT and NASA provided technical corrections and clarifications, which we incorporated as appropriate. USDA had no comments. NTSB generally agreed with our findings and recommendations to FAA and provided several comments. First, NTSB noted that our use of the terms “reactive” and “proactive” implied a new approach to aviation safety data analysis that is different from past analyses of accidents and incidents to improve safety. The agency noted that a more efficient, effective approach to safety analysis should continue to include FAA’s previous reactive approach as well as new, more predictive capabilities. We agree with NTSB’s comment and note that our report indicates that FAA plans to continue to use data to analyze past safety events as it also works to use data more proactively. NTSB further noted that the success of SMS will depend on the maturation of FAA’s data analysis capabilities. Second, NTSB agreed with our finding that the lack of a final plan for ASIAS and for SMS implementation, which are key elements of FAA’s planned proactive safety analysis capability, was a cause for concern. The agency noted that it had made several recommendations to FAA to require SMS programs for part 121, part 135, and part 91 carriers and that FAA had not yet taken action to require these programs. Third, regarding FAA’s access to voluntarily reported data, NTSB agreed with our finding that the redaction of flight details from ASAP and FOQA analyses is a serious constraint on the thoroughness and potential utility of ASIAS and other assessments of safety. If FAA does not address these data limitations, NTSB observed, such constraints are likely to pose serious and continuing threats to the broader use of voluntary reporting programs to support safety analysis. In NTSB’s view, our recommendations to FAA do not go far enough to recommend mechanisms besides redaction, such as statutory exemptions from disclosure, to protect these data from enforcement and disciplinary uses or public release. 
We did not revise our recommendations to FAA to include these issues because, while we found that participation was temporarily affected, in part, by employees’ fears of disciplinary action by their employers, we did not find evidence that participation was inhibited by the fear of enforcement action by FAA or public disclosure. In addition, our work indicated that the current mechanisms to protect the data appeared to be working. Fourth, regarding FAA’s access to data on various safety events, NTSB noted the importance of FAA collecting the necessary data to support its new approaches to data analysis rather than simply combining existing data sources into an analysis program. NTSB also agreed with our finding that independent survey efforts like NAOMS could provide a useful complement to other data sources, including FOQA, in providing improved data quality and analysis capabilities. Finally, NTSB agreed with our finding that the availability of operations data for sectors other than large commercial carriers (i.e., part 121 operators) is severely limited. The agency noted that accurate flight activity data are not available for most of these operations and must be estimated from FAA’s annual survey of a sample of active general aviation aircraft. NTSB also pointed out that FAA does not require reporting for the majority of equipment reliability or maintenance-related events. To address these shortcomings, NTSB noted its recent recommendation to FAA to take steps to increase general aviation reporting to FAA’s service difficulty reporting system. To correct these and other data deficiencies, NTSB believes that FAA should explore the development of new aviation safety data collection techniques or methods to supplement current areas of data deficiency. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to relevant congressional committees, the Secretaries of Transportation and Agriculture, the Administrator of the Federal Aviation Administration, the Chairman of the National Transportation Safety Board, and the Administrator of the National Aeronautics and Space Administration. We will also make copies available to others on request. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Gerald L. Dillingham, Ph.D., Director, Physical Infrastructure Issues. In this report, we assessed the Federal Aviation Administration’s (FAA) capacity to use available data to oversee aviation safety. To do so, we addressed the following questions: (1) How does FAA use data to oversee aviation safety, and what changes, if any, has it planned? (2) To what extent does FAA have access to data for monitoring aviation safety and the safety of various aviation industry sectors? (3) What does FAA do to help ensure the quality of the data it uses to oversee aviation safety?
To perform our review, we selected 10 safety events that were among those previously identified as key by the National Aeronautics and Space Administration’s (NASA) National Aviation Operations Monitoring Service (NAOMS) or by the FAA-industry Commercial Aviation Safety Team (CAST). This selection allowed us to focus our review on a manageable subset of FAA oversight activities and data sources. (See table 4.) We then identified 13 databases available to FAA that contained data on these safety events and reviewed these databases. The databases are described in the background section of this report. To determine how FAA uses data to oversee aviation safety, we reviewed reports by FAA, the International Civil Aviation Organization, and industry and other published documents. In addition, we interviewed officials from FAA, industry associations, and other selected industry groups. To determine the extent to which FAA has access to data for monitoring aviation safety and the safety performance of various aviation industry sectors, we interviewed FAA data analysts, contractors, and other officials responsible for data management. We also reviewed previous GAO reports on FAA’s access to data on certain aviation sectors, such as air ambulances and air cargo operations. To determine how FAA ensures the quality of its data, we interviewed FAA and industry officials (see table 5). In addition, we reviewed assessments of selected FAA data by the Department of Transportation Inspector General and an independent review team appointed by the Secretary of Transportation in 2008. We also derived a number of data quality principles from our previous work on internal controls, and we assessed the quality of 12 of the 13 selected aviation safety databases by comparing our data quality principles with FAA’s practices for ensuring data quality. These principles include ensuring that the data are complete and accurate, measure intended safety concerns, and are useful for their intended oversight purposes. To measure the extent to which FAA’s practices were consistent with these principles, we evaluated information and other materials regarding the databases using a three-point scale. To validate the results, multiple reviewers independently scored each principle. When the initial scores differed, the reviewers collectively agreed on a final score for each principle. Further, we used the results of GAO studies that considered the availability, quality, and use of data in aviation safety oversight. In addition, to address all three research questions, we individually interviewed 10 aviation safety experts and asked them to identify challenges to using data for overseeing aviation safety, the reasonableness of FAA’s current and planned efforts to use aviation safety data, and ways that FAA could enhance its data collection and analysis processes to improve its oversight capabilities. We selected experts who represent a cross section of aviation stakeholders, including persons with general knowledge of aviation safety, aircraft operations, human factors, aircraft maintenance, and air traffic control. The experts have operational, academic, or other professional expertise in these areas. Those experts are Mr. Basil Barimo, Vice President, Safety and Operations, Air Transport Association; Mr. James Burin, Director of Technical Plans and Programs, Flight Safety Foundation; Kim Cardosi, Ph.D., U.S. Department of Transportation, Volpe Center; Todd Curtis, Ph.D., Director, The Airsafe.com Foundation; Mr. John Goglia, Senior Vice President for Aviation Operations and Safety Programs, JDA Aviation Technology Solutions, former board member of the National Transportation Safety Board; Mr. Keith Hagy, Director, Engineering and Air Safety, Air Line Pilots Association; Brigadier General Leon Johnson (Air Force, retired), former Flight Operations Manager, United Parcel Service; Mr. Bruce Landsberg, Executive Director, Aircraft Owners and Pilots Association; Thomas Weitzel, Ed.D., Associate Professor, Embry-Riddle Aeronautical University; and Mr. Dale Wright, Director, Safety and Technology, National Air Traffic Controllers Association. In addition to the person named above, Teresa Spisak, Assistant Director; Elizabeth Eisenstadt; N’Kenge Gibson; H. Brandon Haller; Erica Miles; and Richard Scott made key contributions to this report.
To improve aviation safety, the Federal Aviation Administration (FAA) plans to have in place the initial capabilities of a risk-based approach to safety oversight, known as a safety management system (SMS), by the end of fiscal year 2010. FAA is also implementing new procedures and technologies to enhance the safety, capacity, and efficiency of the national airspace system. Data are central to SMS and FAA's ability to test the impact of these changes on safety. This congressionally requested report addresses FAA's (1) current and planned use of data to oversee aviation safety, (2) access to data for monitoring aviation safety and the safety performance of various industry sectors, and (3) efforts to help ensure data quality. To perform this work, GAO reviewed 13 databases that contain data on key aviation safety events, assessed data quality controls for the databases, and interviewed agency and industry officials, as well as 10 experts in aviation safety and data. FAA analyzes data on past safety events, such as engine failures, to prevent their recurrence and plans to use data to support a more proactive approach to managing risk. For example, weather and air traffic control data helped identify factors associated with injuries from turbulence. As part of SMS, FAA plans to analyze data proactively to support a risk-based approach to safety oversight. For example, FAA plans to use data to model the impact of proposed changes in procedures and technologies on the safety of the national airspace system. Experts said that identifying risks is necessary to maintain the current level of safety and possibly achieve a higher level of safety in the future. Because SMS relies on data to identify emerging risks, FAA has an effort under way to enhance its access to industry data and improve its capability for automated analysis of multiple databases. According to FAA, this effort will allow for more efficient safety analyses. FAA is also developing a plan for managing data under SMS, but the plan does not fully address data, analysis, or staffing requirements. Without such requirements, the plan will not provide timely guidance for implementing SMS. FAA has access to some voluntarily reported data, which are important for SMS, but not all carriers and aviation personnel participate in FAA's voluntary reporting programs. Such data are gathered electronically by equipment on aircraft or reported by aviation personnel or carriers following noncriminal, unintentional violations or safety events. Industry personnel have some incentives to participate in voluntary programs, such as promised immunity from disciplinary action, but concerns about sanctions and the cost of equipment have deterred full participation, especially by smaller carriers. While FAA has some information on reasons for nonparticipation and has taken some steps to promote greater participation, it lacks carrier-specific information on why air carriers are not participating. FAA also lacks data to assess the safety performance of certain industry sectors, such as air cargo and air ambulance operators. GAO has previously made recommendations to address this lack of data. FAA concurred with GAO's prior recommendations and is taking actions to address them. To help ensure data quality--that is, data that are reliable (complete and accurate) and valid (measure what is intended)--FAA has implemented a number of data quality controls that are consistent with GAO's standards for data quality, but some weaknesses exist.
For example, all the databases GAO reviewed had at least some controls in place to ensure that erroneous data are identified, reported, and corrected. However, about half the databases lack an important control--managers do not review the data prior to entry into the data system. FAA is taking steps to address its data weaknesses, but vulnerabilities remain, potentially limiting the usefulness of FAA's data for the safety analyses planned to support SMS.
SMS provides a top-down approach to managing safety risk, which FAA expects will improve aviation safety. SMS is not an additional safety program that is distinct from existing activities that accomplish an entity’s safety mission, but rather, a process for safety management that incorporates systematic procedures, practices, and policies. According to FAA, the overarching goal of SMS is to improve safety by helping ensure that the outcomes of any management or system activity incorporate informed, risk-based decision making. We reported in 2010 that FAA officials believe that successfully implementing SMS is critical to meeting the challenges of a rapidly changing and expanding aviation system. To achieve a higher level of safety in an already very safe system, FAA requires a more forward-thinking approach, which SMS provides, by addressing cultural and organizational problems that lead to safety hazards, identifying system-wide trends in aviation safety, and managing emerging hazards before they result in incidents or accidents. SMS implementation should bring about a fundamental shift in aviation safety oversight. For decades, the aviation industry and federal regulators, including FAA, have used data reactively to identify the causes of aviation accidents and incidents and take actions to prevent their recurrence. While FAA plans to continue to use data to analyze past safety events, it is also working to use data proactively to search for risks. FAA’s shift to the proactive approach of SMS is important because, as accidents have become increasingly rare, less information is available for reactive analyses of their causes. As a result, information that can be used to help identify accident and incident precursors has become more critical for accident prevention. Thus, the open sharing of safety information among aviation stakeholders and how FAA’s policies and procedures govern the reporting of safety information are essential to the success of SMS. SMS consists of four key components: (1) safety policy, (2) safety risk management, (3) safety assurance, and (4) safety promotion (see fig. 1). Together, these four components are intended to provide a systematic approach to achieving acceptable levels of risk. FAA provides its personnel with detailed guidance on the principles underpinning these components and the application of these components to aviation oversight in its official orders and other internal FAA guidance. To the industry, FAA provides this SMS guidance via advisory circulars and a dedicated page for the SMS program office on the FAA website. FAA is undertaking the transition to SMS in coordination with the international aviation community, working with ICAO to adopt applicable global standards for safety management. ICAO requires SMS for the management of safety risk in air operations, maintenance organizations, air traffic services, and airports as well as certain flight-training operations and for organizations that design or manufacture aircraft. Further, ICAO has published safety management requirements for its member countries that mandate that civil aviation authorities—such as FAA—establish SMS. ICAO first mandated SMS worldwide for air traffic service providers, such as air carriers and certified aerodromes, in 2001. ICAO later specified that member states should mandate SMS implementation for airports, air carriers, and others by 2009.
FAA began SMS implementation in 2005, but FAA officials informed ICAO that the agency and industry would not be able to meet the 2009 deadline. ICAO is allowing FAA to take additional time in its efforts to implement SMS, with the understanding that implementation is under way and that FAA is in the midst of a rulemaking to require SMS for commercial air carriers. ICAO officials stated that the United States is one of the leading implementers of SMS worldwide and acknowledged that SMS implementation in the U.S. aviation system may be more complicated than in other countries because of the size and complexity of the U.S. aviation industry. ICAO has not specified a date by which FAA is expected to comply with the requirements to implement SMS in the aviation system. There have also been actions within the United States to encourage implementation of SMS. For instance, in 2007, NTSB recommended that FAA require all commercial air carriers to establish an SMS and, in 2011, added SMS for all modes of transportation to the NTSB’s Most Wanted List, identifying SMS as one of the most critical changes needed to reduce the number of accidents and save lives. Partially in response to the ICAO requirement, FAA added goals related to SMS implementation to its 2009-2013 Flight Plan, the agency’s strategic plan. These goals included a requirement to implement SMS in three of FAA’s business lines—the Air Traffic Organization (ATO), the Aviation Safety Organization (AVS), and the Office of Airports (ARP)—and a goal to implement SMS policy in all appropriate FAA organizations, which include the Office of Commercial Space Transportation (AST) and the Office of NextGen (ANG). FAA is in the process of implementing SMS within these business lines and offices as well as in industry through rulemakings to require airports and commercial air carriers to implement SMS. FAA designated AVS as the lead for SMS implementation in September 2008. Within AVS, the Office of Accident Investigation and Prevention’s (AVP) Safety Management and Research Planning Division coordinates and manages SMS implementation and operation across the agency, and so AVP serves as the official SMS lead for the agency. FAA’s agency-wide SMS governance structure also involves Administrators, their deputies, and other high-level FAA officials from each business line or office (see fig. 2). Within some of the business lines, there are offices devoted to specific aviation oversight functions that are responsible for overseeing detailed implementation of SMS for those functions. For example, the Flight Standards Service (AFS), a division of AVS that provides safety oversight of commercial air carriers and others, is taking steps to require SMS implementation by commercial air carriers and is also working to integrate SMS into its internal activities. In addition, the Aircraft Certification Service (AIR), a division of AVS that provides safety oversight to aviation design and manufacturing firms, is leading agency efforts to encourage SMS implementation for that industry sector, while ARP is leading agency efforts to require SMS implementation for certificated airports. SMS implementation will require changes to many of FAA’s operations. As the agency and industry implement SMS, shifts will be necessary in both the skills of FAA and industry staff and the tools that the agency uses to monitor safety. FAA’s integration of SMS into its business practices will also affect how the agency provides air navigation services and oversees the aviation industry.
Historically, FAA oversight of airlines, airports, and other regulated entities has involved oversight of such things as operations and maintenance. FAA will continue this oversight, but will also apply SMS principles to its processes for oversight. The agency will provide oversight of the safety management systems of service providers such as air carriers and airports to help ensure that they are managing safety within their operations through SMS. For example, AFS currently provides oversight of the operations, maintenance, and safety data of commercial air carriers and others. Once SMS is fully implemented, AFS will continue to provide this oversight and will also conduct oversight of the safety management systems that commercial air carriers and others put in place. ATO completed its implementation of SMS, but FAA and several of its other business lines and offices are in the early stages of implementation. Most FAA business lines and offices have guidance and plans for SMS implementation in place and have begun to integrate SMS-related practices into their operations, but many tasks remain and aviation officials and experts with whom we spoke project that full SMS implementation will take many years. FAA finalized its agency-wide plan for SMS implementation in April 2012. The plan provides a road map for SMS implementation across the agency and describes the activities that FAA business lines and offices will need to complete by the end of 2015 to integrate SMS into their operations. These activities will lead to outcomes that include revising and standardizing safety policies and safety risk management methodologies across FAA to ensure SMS principles are consistently addressed; improving organizational processes so that FAA business lines and offices can share safety data and information more easily; and coordinating communications to ensure a common understanding of SMS across the agency. FAA began its agency-wide SMS implementation efforts in 2008, and in September of that year issued a policy for implementation of a common SMS within FAA. Among other things, the policy sets forth management principles to guide all of FAA in safety management and safety oversight activities and requires AVS, ARP, and ATO to develop and execute business line-specific plans for SMS implementation. In late 2008, FAA formed the agency-wide FAA SMS Committee to coordinate implementation efforts across FAA business lines and offices. Overall, the agency has taken a bottom-up approach to implementation, with some individual business lines and offices beginning implementation prior to agency-wide efforts. FAA has also taken steps to ensure that its plans for SMS implementation and policies align with international and government-wide requirements and technical guidance on SMS implementation, including ICAO’s Standards and Recommended Practices, the ICAO Safety Management Manual, and the JPDO SMS Standards. For instance, officials stated that they consulted international and government-wide guidance on SMS implementation when drafting agency implementation plans. (See fig. 3 for more information on alignment of FAA requirements with international and government-wide requirements and guidance on SMS.) Although FAA has made progress, completion of SMS implementation across FAA is likely to take many years. FAA’s agency-wide SMS implementation plan includes tasks with estimated completion dates through 2015, and some implementation tasks may take even longer to complete.
For instance, a project plan that AVS officials developed to track the status of AVS SMS implementation tasks contained in its implementation plan includes task completion dates through 2016. According to FAA, the overall SMS implementation effort is an evolutionary process that will not have a specific completion date. The current implementation time frame is consistent with experts’ estimates of how long it may take to implement SMS and with other large-scale organizational transformations. For example, representatives from The MITRE Corporation, which manages a federally funded research center for FAA and assisted FAA in selected SMS implementation efforts, stated that organizational transformations like SMS can take from 6 to 10 years. ATO is the only entity among FAA and its business lines to have completed SMS implementation. ATO issued its internal SMS guidance in March 2007 and finalized both its SMS implementation plan and its updated SMS Manual in 2008. According to ATO officials, ATO completed SMS implementation in March 2010, and the FAA Air Traffic Safety Oversight Service validated that ATO’s implementation of SMS was complete. Officials stated that implementation within ATO was simpler, in part, because it is the only branch of FAA that is considered an aviation service provider and therefore did not have to conduct a rulemaking for external entities as part of its SMS implementation. With the implementation phase complete, ATO is currently in the continuous improvement phase of SMS. This means that ATO will continuously use the SMS-based processes now in place to identify hazards, enact strategies to mitigate the risks associated with those hazards, and assess the extent to which the mitigations are working effectively. In addition, FAA officials stated that ATO is working to improve its SMS operations, will update guidance on SMS, and plans to perform audits of its SMS functions on a regular basis. ATO officials added that they are working to share lessons learned from their implementation efforts with other FAA business lines and to develop SMS tools and processes that can be commonly implemented across all FAA business lines. With the exception of ATO, most FAA business lines and offices are in the early stages of implementation, either in terms of integrating SMS into their internal processes or in terms of their efforts to prepare to provide oversight for proposed requirements for industry implementation of SMS. To date, much of the work of the FAA business lines has focused on efforts to draft implementation policies and guidance, train employees, and create tools for applying safety analyses and risk-based decision-making to safety oversight. (See fig. 4 for more information on the status of key SMS implementation efforts across FAA.) AVS began its SMS implementation efforts in August 2006 and finalized its SMS implementation plan in January 2012, which was then incorporated into FAA’s overall plan for SMS implementation. Since 2006, AVS and its seven services and offices have issued orders and other guidance on SMS implementation; developed SMS training courses; conducted voluntary pilot projects and rulemaking efforts on SMS implementation for industry; and worked to begin integrating elements of SMS into their operations. For example, AIR officials, who provide oversight of aviation design and manufacturing firms, have developed a central database that provides standard criteria for analyzing service data in a risk-based manner.
This should allow AIR inspectors and engineers to rate the risk of potential safety issues and prioritize oversight of high-risk issues. Some services and offices within AVS are in the midst of efforts to require SMS for industry and are also operating voluntary pilot programs to promote SMS implementation within industry. A final rule to require SMS for commercial air carriers is expected to be issued in September 2012. In 2007, AFS launched a pilot program to encourage voluntary implementation of SMS by industry. According to FAA officials, as part of its rulemaking efforts for commercial air carriers, FAA and AVS are developing a new part in the Code of Federal Regulations (CFR)—Part 5—that will describe SMS implementation requirements for Part 121 certificate holders. In the future, FAA may conduct rulemakings to require additional sectors of the aviation industry to meet Part 5 requirements (see fig. 5). AVS officials stated that efforts to establish SMS requirements more broadly across the aviation industry will likely take many years. Though FAA has not yet required SMS for air carriers or other parts of industry, the agency has acted to encourage SMS implementation by industry through voluntary pilot projects, and some aviation stakeholders have chosen to implement SMS in advance of any federal requirement. Some sectors of the aviation industry are farther along in their implementation of SMS than others. For instance, FAA officials stated that a large majority of commercial air carriers are in the process of implementing SMS. As of June 2012, over 90 percent of commercial air carriers operating under Part 121 were participating in the AFS pilot program, which provides air carriers with direct implementation support from FAA officials under a more relaxed implementation time frame than is anticipated under an eventual implementation regulation. Of these air carriers, three have reached the final stage of SMS implementation. However, most small air carriers have not yet begun implementing SMS. In contrast to AFS, AIR is at an earlier stage in its efforts to require SMS for the approximately 3,000 design and manufacturing firms it oversees. AIR began a voluntary pilot project for SMS implementation by design and manufacturing firms in 2011 and has 11 pilot project participants. AIR officials stated that they are in the process of launching a second aviation rulemaking committee to continue to explore options to require SMS for design and manufacturing firms. Officials also noted that AFS and AIR are working together to share lessons learned and assist one another in their implementation efforts. ARP is in the early stages of working to integrate SMS principles into its oversight of airports, and recently took steps to reduce the scope of that oversight. ARP initially planned to apply SMS-based oversight to all certificated airports. Officials stated that ARP is currently limiting its SMS-based oversight to large hub airports because of budget constraints and will reassess its capacity to expand oversight to smaller airports in 2013. ARP began its SMS implementation in 2010 and issued an internal order to provide a basis for the integration of SMS into its operations later that year. The office finalized its SMS implementation plan in September 2011 and has begun to make changes to its oversight. For instance, in June 2011, ARP began to apply SMS-based oversight to construction projects at the 29 large hub airports in the United States.
Under this new oversight framework, ARP staff assess proposed airport construction projects using risk-based SMS principles, and airports need to incorporate strategies to mitigate identified risks into their construction plans prior to receiving ARP's approval for the project. Like AVS, ARP is also in the midst of a rulemaking to require SMS for all certificated airports and has completed three voluntary SMS pilot projects for airports from 2008 to 2011. Thirty-one airports participated in at least one of ARP's SMS pilot projects. ARP is using information gathered through the pilot projects to inform a planned advisory circular that will provide additional guidance to airports on SMS implementation. The pilot projects also allowed airports to share their SMS implementation practices with other airports. The final rule to require SMS for Part 139 certificated airports is expected to be issued in April 2013 and, if implemented as proposed, would require over 500 airports to implement SMS. Other FAA business lines are in varying stages of implementation. AST is not currently required to implement SMS; however, AST is taking initial steps toward integrating SMS into an existing set of safety management processes. ANG is farther along in its implementation of SMS because of its previous status as a part of ATO. (In 2011, FAA reorganized some of its offices and, as part of the reorganization, separated NextGen efforts from ATO.) According to ANG officials, ANG is basing its implementation of SMS on policies and processes established during ATO's implementation of SMS. The officials stated that since ANG will provide the systems and components that will be used by ATO to manage air traffic, it made sense for ANG to develop its SMS based on policies, processes, and systems established by ATO. Officials stated that ANG completed its implementation plan in June 2012 and estimated that ANG's SMS implementation is about 70 percent complete. There are a number of key practices and implementation steps that can help agencies successfully plan for and implement new projects, including large-scale transformative ones, such as FAA's implementation of SMS. As we have previously reported, addressing these key practices can help an agency improve its efficiency, effectiveness, and accountability. FAA currently has many of these key practices in place, such as established support from top leadership and a clear project mission; however, it has only partially addressed other key practices, such as providing needed expertise and technology, and has yet to establish SMS performance measures (see fig. 6). FAA has instituted many key practices that will help it prepare for and implement SMS across its business lines and offices. Top leadership: Top leaders from each FAA business line provide support for and actively participate in SMS implementation. As previously mentioned, FAA established the SMS Executive Council, a group of high-ranking FAA officials that provides executive-level guidance and conflict resolution for SMS-related issues across the agency. In accordance with our key practices, the SMS Executive Council has the authority to make resource allocation decisions, but also confers decision-making authority where appropriate to the FAA SMS Committee.
For instance, FAA officials told us that the SMS Executive Council retains the authority to make final decisions about changes to FAA's implementation plan that affect policies or procedures for multiple business lines; the FAA SMS Committee has the authority to make decisions that relate to daily concerns that fall within the purview of its members. For example, committee members settled a disagreement between ATO and airport officials over whether an airport should conduct certain components of a safety risk management panel. At the time, FAA had not yet issued its safety risk management policy clarifying terms and requirements, so the airport and ATO each had its own distinct safety risk management definitions and processes. Working with ARP and ATO officials, committee members identified a compromise in which ATO protocols were followed, but any disagreements on terms or procedures were documented. ARP officials told us that FAA's safety risk management policy, issued in April 2012, should help prevent this type of disagreement from occurring. Clear project mission: FAA's internal order requiring SMS implementation for ARP, ATO, and AVS clearly states that FAA's mission is to improve aviation safety and that implementing SMS and its components supports that mission. Each business line also has its own internal order requiring SMS implementation that mirrors this mission and these goals. Implementation team: AVP's safety management division and the FAA SMS Committee function jointly as FAA's dedicated SMS implementation team. The team's structure and actions align with our criteria for a strong and stable team because it is composed of senior-level program managers from each business line, all of whom, according to FAA officials, have received SMS training. Despite some recent departures, its membership has been largely stable. Leading practices: FAA shares information across business lines to identify lessons learned related to SMS implementation. For example, ATO assembled lessons learned from its SMS implementation into a presentation for the other business lines, and included tips such as encouraging others to implement a training program and monitor mitigations. According to FAA's implementation plan, the agency plans to systematize the sharing of lessons learned by creating a central repository to collect and communicate safety lessons learned among its business lines and offices by September 30, 2013. Troubleshooting: FAA has processes in place to manage SMS implementation across FAA, including troubleshooting unexpected problems. For example, the FAA SMS Committee meets monthly and manages agency-wide SMS implementation and any challenges that arise, and regularly briefs the SMS Executive Council, a briefing that includes a discussion of any issues or unexpected problems that could not be resolved at the committee level. For instance, when the Air Traffic Manager at an airport disagreed with airport officials regarding how to handle a potential safety issue with planes that were taking off on runways that were temporarily closed, the FAA SMS Committee elevated the issue to the SMS Executive Council, which resolved it. As we have previously reported, instituting practices like these can help an agency become more results-oriented, customer-focused, and collaborative. Although FAA is still in the process of finalizing new requirements for airports and air carriers to implement SMS, it has already taken some steps to institute key practices for those efforts.
For example, FAA officials stated that the agency has taken steps to identify leading practices during pilot projects by soliciting information from participating airports and air carriers, and they told us they plan to incorporate these lessons learned into rulemaking and guidance. ARP officials reported that they encouraged pilot project participants to share lessons learned directly with one another through studies and roundtable discussions, and incorporated some of the lessons learned into FAA advisory circulars. FAA has also made efforts to troubleshoot and manage unexpected problems with pilot participants through meetings, calls, and conferences with airport and air carrier officials to understand their experiences. For example, AFS officials reported that they helped officials from air carriers to understand when certain safety risk management documentation and processes are necessary, and how they could be adapted for a variety of changes made to carrier operations, including smaller day-to-day changes. However, despite this assistance, officials from some airports that participated in pilot projects reported that they could have benefited from additional assistance from ARP, such as clarification on the safety risk management component of SMS. In addition, an official at one airport told us that he would have liked FAA to facilitate conversations between airports of similar size to help them share lessons learned. Other steps FAA has taken in its SMS implementation efforts partially align with key practices for implementing a new program. Project plan: Currently, the agency-wide project plan for SMS implementation is a single page of high-level milestones, which AVP officials monitor and report on to the SMS Executive Council. Also, AVS has a detailed project plan for its own SMS implementation and elements of agency-wide implementation for which AVP, as the agency SMS lead, has responsibility. Officials stated that they have plans to develop a system to monitor and track the progress of activities needed to implement SMS, but FAA does not currently have a system for tracking agency-wide SMS implementation, a key practice particularly important during the initial planning phase of project implementation. Given the scope and complexity of SMS, a detailed, agency-wide project plan could help FAA track and monitor the interim steps of SMS implementation across the agency. Without such a plan, it may be more difficult for FAA to identify problems or deviations from planned activities, putting both the timeliness and effectiveness of SMS implementation at risk. Consulting with stakeholders: FAA has made efforts to consult with employees and stakeholders regarding its SMS implementation, but it has not yet developed a communications plan. Agencies should involve employees in planning and incorporate employee feedback into new policies and procedures. FAA involved its business line program managers and some of the managers' staff by assigning them responsibility for the day-to-day tasks related to implementing SMS across the agency. FAA has involved other employees by soliciting questions and comments on SMS in town hall meetings and the online DOT site called "IdeaHub," and by offering SMS training through each business line. ATO, ARP, and AVS all offer introductory SMS courses for their staff as well as additional related courses, such as an SMS course specifically for managers and ATO's safety risk management course.
FAA has been working to implement SMS for the last 4 years, but the agency does not have a communications plan or strategy for ensuring that the SMS messages communicated to staff are consistent across the agency. Instead, FAA relies on a more informal communications structure in which each program manager staffed to the implementation team communicates relevant information back to his or her respective business line. The implementation team does not communicate any information directly to employees, which could hinder the team's ability to ensure consistency in its message across FAA. ATO officials reported experiencing this challenge at the beginning of ATO's SMS implementation, when a lack of clear requirements for communicating SMS information resulted in variation in staff's understanding of guidance. We have previously reported that a communication plan or strategy can ensure consistency of message, provide information to meet the specific needs of employees, encourage two-way communication, and build trust. FAA plans to begin working on a communications plan in September 2012, and is scheduled to issue the plan at the end of February 2013. FAA officials also said they are in the process of developing an internal SMS website for employees to share information and ideas, which could enhance SMS communications. However, until the communications plan is developed and implemented, FAA's employees may not receive timely or consistent information on SMS or be as invested in its implementation as they might otherwise be. FAA's approach to overseeing industry SMS implementation allowed for additional two-way communication. For example, FAA solicited views on SMS implementation from airport and air carrier officials through the voluntary pilot projects described previously, and learned more about industry perspectives through the formal rulemaking process—whereby an agency issues a Notice of Proposed Rulemaking and is required to notify the public and give them an opportunity to submit comments. Providing technology and expertise: FAA has provided some SMS training and tools to its employees; however, it has not yet provided other tools important for SMS implementation. FAA officials reported that each business line has provided SMS training to staff. In addition, FAA recently developed a standardized Safety Risk Management (SRM) policy, which will assist employees across FAA by standardizing SRM terminology and resolving confusion about the conduct of SRM across the agency. FAA plans to create a simple version of an agency-wide hazard-tracking system in the next 3 to 6 months, but does not have plans to create a more complex system until August 2015, according to FAA's SMS implementation plan. The simple version will draw from hazard-tracking systems already in place in some business lines, and summarize information from them to highlight broader hazards such as those that would affect multiple business lines. For instance, FAA officials stated that if ATO wanted to make a change to its operations at a particular airport, then ATO would be responsible for identifying associated hazards, risks, and risk mitigations and would also assume responsibility for the risk. However, if ATO determined that the airport was better equipped to mitigate the identified risks, then the airport and ARP would become more involved in designing risk mitigations and overseeing their implementation.
FAA’s efforts to provide tools to help in SMS implementation are affected by differences in how data are collected and assessed across the agency. For example, these differences have held back agency efforts to model how changes to the national airspace system, such as increases to air travel, can affect safety. We have previously reported on and made recommendations related to FAA’s data challenges, and also discuss them later in this report. These data challenges mean that FAA is not always able to perform comparisons across databases, a challenge that that limits the usefulness of the data in identifying possibly dangerous hazards. Identifying, monitoring, and mitigating hazards is a key tenet of SMS, and without the proper technologies and tools, FAA may not be able to do this as effectively. FAA’s efforts do not align with two key practices for implementing a new program. Integrating SMS into employee performance plans: FAA does not consistently evaluate employees’ performance on SMS-related tasks. We have previously reported that effective performance management systems create a clear linkage between individual performance and organizational success, and include aligning individual performance expectations with organizational goals. FAA’s organizational mission and goal, and that of SMS, is to improve safety, yet FAA officials told us that the agency does not require employee performance plans to include SMS-related tasks. Although officials reported that some employees’ performance plans explicitly include SMS items, such as providing SMS training or developing SMS policy, it is left to the discretion of each business line whether SMS items are included. FAA officials told us that SMS principles and methodologies will be included in the performance plans of employees involved in writing SMS policy and revising SMS processes, and will be incorporated into the tasks of others once SMS implementation reaches those individuals. However, currently, none of the business lines require this. As such, FAA does not have a system for assessing the extent to which staff are effectively supporting SMS, and FAA may not be able to determine if staff are completing tasks and responsibilities necessary for the successful implementation of SMS. Measuring performance: FAA does not have performance measures in place to assess whether the SMS goals of improving safety are being achieved. FAA has broader safety-related performance measures, such as tracking rates of runway incursions and losses of separation, but SMS-related performance measures could address intermediate safety issues, such as precursors to incursions or incidents. Such measures could help FAA track progress toward its broader safety measures. FAA officials told us that AVS is a member of the Safety Management International Collaboration Group, a group formed in 2009 to address safety management-related topics, including performance measures. Most recently, FAA formed an agency-wide working group to study performance metrics for SMS implementation, and FAA’s implementation plan states that such metrics will be finalized in October 2014. However, FAA officials we spoke with acknowledged that they are at the very beginning phase of this process and, although already in the process of implementing SMS, have not yet identified metrics to measure safety results under an SMS system. We have previously reported that performance information is critical for achieving results and maximizing the return on federal funds. 
Performance measures should help FAA identify the extent to which SMS implementation will contribute to increased aviation safety—FAA’s stated overall goal for SMS—as well as help identify what changes could be made to improve SMS performance over time. As previously mentioned, FAA has taken steps to address many of the practices associated with planning and implementing a new program. However, we identified six challenges that could negatively affect FAA’s efforts to implement SMS in a timely and efficient manner: 1) the large scope and complexity of SMS implementation, 2) resource and capacity constraints, 3) standardization of policies and processes, 4) data sharing and protection, 5) data quality and usefulness, and 6) development of performance measures to evaluate SMS effectiveness. Implementing SMS is one of several major initiatives FAA has under way, and its sheer scope and complexity could affect, or be affected by, concurrent FAA efforts such as NextGen or Unmanned Aircraft Systems. SMS requires changes in many of FAA’s operations: from the way the agency tracks hazards to the way it oversees industry. SMS will also require a transformation of FAA’s and the aviation industry’s safety culture to one in which information and safety data are shared openly, and errors are addressed through whatever action is necessary to prevent them from happening in the future. FAA is making efforts to move toward this new approach to safety, for instance by using data-sharing systems that are protected from public disclosure to encourage voluntary reporting of safety issues and enable more robust analysis of safety data among FAA and air carriers. Moreover, as previously stated, each of FAA’s business lines has its own role in implementing SMS that must be coordinated across the agency. This is particularly challenging because the business lines are at different stages of implementation and, according to FAA officials, have historically operated independently. The scope and complexity of SMS implementation may also be a challenge for the aviation industry, and some stakeholders expressed concerns both in interviews and in official comments on FAA’s Notices of Proposed Rulemaking that eventual FAA requirements to implement SMS need to allow for variation in airport and air carrier operations. For example, officials from some smaller airlines and airports noted that SMS implementation could require additional resources, such as staff and software, which may not be readily available. In addition, officials from some airports and air carriers were concerned that FAA’s final requirements would be too prescriptive to allow entities to implement an SMS program that best fit their organizational type, management practices, and resources. Most stakeholders and experts we interviewed stated that FAA could design SMS requirements for airports and airlines that are scalable and flexible to accommodate this variation, which would address these concerns. For instance, airport officials from smaller airports told us that staff size limits their ability to assign a dedicated SMS employee or safety director, while some officials at larger airports said they were able to hire a SMS safety director or already had an established safety director in place. Also, FAA’s SMS implementation pilot project for airports found that 35 percent of participants planned to hire additional staff to support SMS and 15 percent were not sure. 
FAA officials have noted that they understand these scalability concerns, and are taking them into consideration as they develop final SMS rules for industry. SMS implementation across FAA will require some skills that agency employees currently do not have, yet FAA has not formally assessed the skills of its workforce to identify any gaps in the expertise required to implement SMS or determined how to fill those gaps. In addition, FAA officials stated that existing staff may not be able to be trained to fill SMS implementation needs in all cases. For instance, FAA officials noted that SMS implementation will require some engineers and other technical employees to understand certain terminologies and have certain knowledge, skills, and abilities, such as an enhanced ability to perform complex modeling and analysis of aviation safety data to identify potential safety hazards. AVS officials stated that to implement SMS, additional employees with skills in analyzing data for hazards and associated risks would be needed, along with additional training for existing staff. ARP officials stated that the office might need program analysts with specific data analysis skills to implement SMS. ARP officials also stated that they do not expect to receive significantly more resources and, as previously mentioned, have already had to reduce the scope of the office's SMS-based oversight because of insufficient staff. Stakeholders and experts also questioned whether FAA currently has the resources and capacity needed to fully implement SMS. For example, experts noted that FAA may not have the requisite engineers and other staff to participate in safety risk management efforts, or FAA inspectors to oversee individual airport and air carrier SMS programs. Despite these concerns, FAA has not yet conducted a strategic workforce assessment to accurately determine the skills and staffing levels it needs to manage SMS. Although FAA's SMS implementation plan recommends that business lines create such staffing analyses, none have done so. Nor has FAA conducted an agency-wide workforce assessment for SMS. Our internal control standards state that agencies should ensure that skill needs are continually assessed to ensure workforces have the skills necessary to help the agency meet its goals. We have reported that strategic workforce planning is an integral part of human capital management and helps an agency, among other things, determine the critical skills and competencies that will be needed to achieve current and future programmatic results, and then develop strategies tailored to address any gaps identified. A workforce analysis could help FAA determine how to best address its most critical needs in ways that account for budget limitations, such as through retraining or shifting staff, rather than hiring additional employees. Without conducting an agency-wide SMS workforce analysis, FAA cannot be sure that it has sufficient staff, skills, or competencies to implement SMS, thus putting its SMS implementation efforts at risk. SMS standardization across FAA business lines and offices is central to implementation success, yet developing common systems for distinct FAA business lines and offices has proved challenging.
For example, FAA realizes that the agency needs a common hazard-tracking system in order to maximize SMS effectiveness, yet FAA officials and stakeholders stated that it is difficult to develop such a system because each of FAA’s business lines uses different hazard-related terms and definitions, and often different data systems. These differences, in turn, prevent the agency from performing simple comparisons across databases and have delayed advances in using data analysis to proactively identify potential safety hazards. FAA officials stated that the agency has recently taken steps to make its databases interoperable, and also recently issued a standardized policy for the safety risk management component of SMS. Both of these steps may enhance FAA’s hazard-tracking and analysis capabilities. The agency is also working with ICAO to address issues related to standardization, such as adopting a collaborative approach to increase the sharing of safety information internationally. Industry officials are also concerned that FAA inspectors and certificate management offices may have different interpretations of SMS and other regulations. We and others have previously reported that variation in FAA’s interpretation of standards for certification and approval decisions is a long-standing issue. Industry stakeholders we interviewed expressed concerns that a similar result could occur once final rules are issued requiring airports and air carriers to implement SMS, and could lead to airports or air carriers of similar size being held to different standards of SMS implementation. FAA officials acknowledged that this is a challenge for the agency and noted that the agency plans to provide additional training to inspectors related to oversight of SMS. Additionally, based on our 2010 recommendation, recent legislation directs FAA to establish an advisory body of government and industry representatives to address the issue of inconsistent interpretation of regulations. FAA’s organizational structure for SMS implementation may pose challenges to standardization as well. For example, as previously mentioned, AVP’s safety management division is the lead for SMS, and AVP and the FAA SMS Committee share responsibility for implementing SMS across the agency. Despite AVP’s role as lead for SMS implementation, it does not have any additional authority compared to the other business lines’ committee representatives, something that AVP officials noted can make SMS implementation difficult. This could slow decision-making, particularly around issues that require business lines to come to a single decision, such as how to standardize policies. Nevertheless, FAA officials acknowledged that having to collaborate to implement an agency-wide SMS has improved communication among the business lines. FAA will likely continue to face challenges standardizing its policies and processes as standardization of this scale is not something the agency has previously undertaken, and the need to negotiate solutions across FAA business lines could take time. Airport officials’ concerns about sharing and protecting their safety data may reduce SMS effectiveness by limiting the ability of airports and FAA to analyze safety data and identify trends. 
Although FAA has some data protections in place, such as those established by the FAA Modernization and Reform Act of 2012, which protects data that airports and air carriers submit to FAA for SMS from federal Freedom of Information Act (FOIA) requests, any data airports collect and any data air carriers share with airports could be subject to state-specific FOIA laws. Most certificated U.S. airports are either owned by a state, a subdivision of a state, or a local government body, and thus are subject to state laws, including state FOIA laws. This means that data airports collect and submit to FAA for SMS—such as information on hazards or other safety data—is protected from federal FOIA public disclosure requests, but, according to officials and experts, may be subject to public disclosure under state FOIA laws. Air carriers are not directly subject to state FOIA laws because they are privately owned. Nevertheless, officials and experts stated that these laws could affect air carriers because any data they choose to share with airports could then be subject to state FOIA laws. As a result, air carrier officials told us they may be less likely to share safety information with airports. Airport and airline officials’ primary concern is that the public disclosure of such information could result in negative publicity or expose them to legal liability in the event of an incident or accident. FAA officials said that data protection and legal liability are two of the major concerns throughout the aviation industry that could hinder the implementation of SMS. FAA officials told us that they intend to continue to promote and expand safety information sharing efforts, but that airports could find ways to structure their SMS implementation so that they realized safety benefits while limiting the public release of air carrier safety information. In FAA’s official response to comments on two Notices of Proposed Rulemaking, FAA stated that airport officials are best situated to understand how to comply with state laws. Nonetheless, we found consensus among NTSB and many aviation stakeholders that FAA should seek congressional action regarding the protection of airport data from state FOIA laws. Data sharing can also be challenging within FAA. In 2011, we recommended that FAA improve information sharing among its programs because not doing so could limit the ability of FAA and others to analyze safety data and understand safety trends. The Department of Transportation agreed that it must continue to promote and expand safety information sharing efforts and safety practices in order to maximize the effectiveness of safety data mining to analyze trends and prioritize safety efforts to address hazards before they lead to incidents or accidents. However, our recommendation remains open. According to officials, ICAO has also formed the Safety Information Exchange Study Group to help enhance data protection and identify potential international solutions. Long-standing issues with data quality and usefulness could negatively affect FAA’s understanding of aspects of the safety of the aviation industry and, consequently, affect SMS’s effectiveness. Obtaining relevant data and understanding how to analyze those data to identify potential hazards are major challenges that FAA will need to overcome. In recent GAO reports, we commented on FAA’s lack of data to effectively assess aviation trends for certain types of events and the safety performance of certain industry sectors. 
For instance, in April 2012, we reported that for such events as runway excursions (when an aircraft veers off or overruns a runway) and ramp accidents (incidents or injuries that occur off the runway), a shortage of FAA data exists for analysis. The Department of Transportation concurred with this finding and our recommendations, and stated that the agency has taken steps to improve its data quality and usefulness. For example, the FAA SMS Committee directed a working group to determine what safety data the agency is going to collect and track and to recommend what kind of system will be needed. However, FAA has not yet fully implemented several of our recommendations aimed at improving its capability to use data for aviation safety oversight, or several data-related NTSB recommendations from recent years. For example, we recommended that FAA extend standard quality controls, as appropriate, to the databases that support aviation safety oversight to ensure that the data are as reliable and valid as possible. Until FAA fully addresses these challenges and recommendations, its ability to comprehensively and accurately assess and manage hazards and risk will be compromised, reducing the ability of SMS to prevent incidents and accidents. The aviation community has widely acknowledged that developing SMS performance measures is difficult, but without them, FAA will not be able to gauge the direct impact of SMS on aviation safety. Some stakeholders told us about ways in which SMS improved their organization's operations, and these examples could provide insight into possible SMS performance measures. For instance, some airports and air carriers that participated in FAA's SMS pilot projects reported that SMS implementation improved communication across their organizations, helped them identify organizational gaps—such as those in internal auditing and training—and decreased employee injuries, aircraft damage, and insurance costs. Officials from the Flight Safety Foundation suggested that the extent to which SMS informs management decision making, such as by redirecting resources or shifting priorities, may be one way to measure SMS effectiveness. An FAA official suggested that performance measures could be directed to specific components of SMS, for instance tracking the number of risks mitigated as a measure of safety risk management efficacy. We have previously reported that agencies need to set quantifiable outcome-based performance measures for significant agency activities, such as SMS, to demonstrate how they intend to achieve their program goals and measure the extent to which they have done so. Performance measures allow an agency to track its progress in achieving intended results, which can be particularly important in the implementation stage of a new program such as SMS. In our prior work we recommended that agencies develop methods to accurately evaluate and measure the progress of implementation, and develop contingency plans if the agency does not meet its milestones to complete tasks. FAA has established a working group to study the issue and participates on two international performance measures work groups: the Safety Management International Collaboration Group and the aforementioned Safety Information Exchange Study Group. FAA is making progress implementing SMS, both within the agency and for the aviation industry.
However, SMS implementation represents a significant cultural and procedural shift in how the agency will conduct business internally and provide oversight to aviation stakeholders such as air carriers and airports, and by all estimates, this transformation will take many years to complete. Going forward, if FAA is to attain the full benefits of SMS, it will be important for the agency to remain committed to fully implementing SMS across its business lines. FAA has taken a number of steps that align with practices we identified as important to successful project planning and implementation, but has not addressed or has only partially addressed other key practices. These practices are important for large-scale transformative projects such as SMS, which require a dramatic shift in FAA's approach to safety oversight and management. In the absence of these key practices, it may be difficult for FAA to prioritize projects or monitor SMS implementation and progress toward improving safety. Aviation safety is a shared responsibility among FAA, air carriers, airports, and others in the aviation industry, and efforts to improve safety will require the agency to overcome several challenges. The magnitude of SMS's potential impact on aviation oversight and the complexity of implementation are both a benefit and a drawback for FAA, as SMS implementation could help ensure the continued safety of the U.S. aviation system, but could also affect implementation time frames for other large initiatives as the agency works in a resource-limited environment. FAA officials believe that SMS implementation will require some skills that employees do not currently have; however, FAA has not conducted an agency-wide workforce assessment. With agency resources and capacity in great demand, it will be important for the agency to maximize the efficiency of SMS implementation, both through efficient use of its workforce and through the creation of policies and systems that standardize and streamline implementation. In addition, data protection concerns from airport officials and others could prevent aviation stakeholders from fully embracing SMS implementation, thus hindering its effectiveness. Without assurance of protection from state FOIA laws, some aviation stakeholders may choose to collect only the bare minimum of safety-related data or may choose to limit the extent to which collected information is shared among aviation stakeholders. The agency also lacks sufficient data to effectively assess aviation trends for some events as well as the safety performance of certain industry sectors. The ability of FAA to identify safety risks, develop mitigation strategies, and measure outcomes is hindered by limited access to complete and meaningful data. To enhance the effectiveness of efforts to implement SMS and maximize the positive impact of SMS implementation on aviation safety, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following five actions: 1. To better evaluate the effectiveness of the agency's efforts to implement SMS, develop a system to assess whether SMS meets its goals and objectives by identifying and collecting related data on performance measures. 2. To align strategic goals with employee efforts, develop a system to evaluate employees' performance as it relates to SMS. 3. To better manage implementation, develop a system to track and report on SMS implementation across business lines. 4.
To better leverage existing resources and facilitate SMS implementation, conduct a workforce analysis to inventory existing employee skills and abilities and develop strategies for addressing any SMS-related gaps identified. 5. To maximize the positive impact of SMS implementation on aviation safety, consider strategies to address airports’ concerns that may negatively affect data collection and data sharing, including asking Congress to provide additional protections for SMS data collected by public entities. We provided the Department of Transportation and NTSB with a draft of this report for review and comment. DOT and NTSB officials provided technical comments, which we incorporated as appropriate and DOT agreed to consider the recommendations. In addition, DOT officials stated there is a need for FAA to have a common hazard-tracking system. FAA has taken initial steps towards standardization by publishing FAA Order 8040.4A, Safety Risk Management Policy, which identifies terms and definitions used for safety risk management. DOT also reinforced its dedication to the success of SMS and noted its continued efforts to improve its implementation plans with a measured, structured approach to implementation. We are sending copies of this report to the appropriate congressional committees, DOT, NTSB, and interested parties, and others. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me on (202) 512-2834 or at dillinghamg@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Our objective was to assess the Federal Aviation Administration’s (FAA) implementation of Safety Management Systems (SMS) and provide information on potential implementation challenges. To do so, we addressed the following questions: (1) What is the status of FAA’s implementation of SMS? (2) To what extent have FAA’s SMS efforts been consistent with key practices for successful planning and implementation of a new program? (3) What challenges does FAA face in implementing SMS? To perform our review, we focused primarily on FAA’s implementation of SMS for its business lines as well as its preliminary efforts to require and oversee SMS implementation by industry. We conducted background research to identify literature related to SMS in aviation, and any challenges that agencies might face when implementing SMS. We also attended parts of a safety risk management panel on runway status lights conducted by FAA’s Air Traffic Organization (ATO) at Seattle-Tacoma International Airport in March 2012 as a means of learning more about SMS and related processes. During the data collection and drafting phases of this report, FAA was in the midst of rulemaking efforts to require SMS of Part 121 air carriers and Part 139 airports, so we did not comment on any draft or proposed regulatory guidance. To determine the status of FAA’s implementation of SMS, we reviewed FAA’s SMS orders and pilot project guidance, implementation plans, and Notices of Proposed Rulemaking for Part 121 air carriers and Part 139 airports. 
We also reviewed international and FAA guidance on SMS issued by the International Civil Aviation Organization (ICAO) and the Joint Planning and Development Office (JPDO), respectively, and National Transportation Safety Board (NTSB) recommendations to FAA related to SMS. Finally, we interviewed FAA SMS program managers across FAA business lines and offices; industry experts we identified based on their knowledge and experience in industry, recommendations from aviation industry officials, and a search of SMS literature; and ICAO and NTSB officials. To assess the extent to which FAA’s efforts have been consistent with key practices, we reviewed our reports and other literature on successful project planning and implementation, particularly for large-scale transformative projects, and condensed the resulting list to eliminate duplication and overlap. To do this, we reviewed previous GAO reports that highlighted practices associated with successful planning and implementation of a new program. We removed or consolidated any duplicate items across the reports to create a single list of 10 criteria. We then identified FAA’s actions related to these practices by reviewing FAA guidance and agency documentation such as its SMS implementation plans, conducting interviews with FAA officials across its business lines, and using that information to assess the extent to which FAA had addressed each practice. We determined whether each key practice was addressed, partially addressed, or not addressed by using criteria developed for prior GAO reports. As such, we considered a practice “addressed” if FAA had instituted the practice; “partially addressed” if FAA had shown some progress toward instituting, or started but not completed the practice; and “not addressed” if FAA had made minimal or no progress toward instituting the practice. The team made these coding decisions together, with two analysts making initial judgments and team management reviewing and confirming them. To identify challenges FAA faces in implementing SMS, we reviewed our prior work on long-standing FAA challenges, such as those related to training and data, and interviewed aviation industry experts and FAA officials mentioned above. We reviewed prior GAO work on performance measurement and workforce analysis, Department of Transportation Inspector General reports and NTSB recommendations related to SMS. To obtain industry views on challenges, we interviewed officials from selected airports and air carriers, industry associations representing airports, air carriers, and pilots, and individuals with SMS experience described above. We also reviewed and analyzed documents, including language in the FAA Modernization and Reform Act of 2012 related to data protection, and associated scholarly work. To supplement comments received from the individuals we interviewed, we also reviewed comments made by aviation stakeholders on the two Notices of Proposed Rulemaking related to SMS. To obtain industry views on both SMS implementation practices and associated challenges, we interviewed officials from selected airports and air carriers, which we selected for diversity in size, location, participation in FAA SMS pilot projects, and submission of comments on FAA’s two Notices of Proposed Rulemaking related to SMS. (See table 1 for a list of selected airports.) We also interviewed officials from six air carriers: Delta, GoJet, United, Pinnacle, Southwest, and US Airways. 
Finally, we interviewed officials with SMS knowledge and expertise, including experts from the Flight Safety Foundation, Embry-Riddle Aeronautical University, John A. Volpe National Transportation Systems Center, and MITRE Corporation. In addition to the contact named above, Heather MacLeod (Assistant Director); Elizabeth Curda; Leia Dickerson; Sarah Farkas; David Hooper; Delwen Jones; Brooke Leary; Josh Ormond; Larry Thomas; and Elizabeth Wood made key contributions to this report.
The nation's aviation system is one of the safest in the world, but with air travel projected to increase over the next 20 years, efforts to ensure the continued safety of aviation are increasingly important. The FAA is seeking to further enhance safety by shifting to a data-driven, risk-based safety oversight approach--referred to as SMS. SMS implementation is required for FAA and several of its business lines and the agency is taking steps to require industry implementation. As requested, this report addresses (1) the status of FAA's implementation of SMS, (2) the extent to which FAA's SMS efforts have been consistent with key practices for successful planning and implementation of a new program, and (3) challenges FAA faces in implementing SMS. To address these issues, GAO reviewed FAA SMS documents, compared FAA efforts to key practices, and interviewed agency and industry officials. The Federal Aviation Administration (FAA) and its business lines and offices are in different stages of their implementation of Safety Management Systems (SMS). FAA finalized its agency-wide implementation plan in April 2012, and the Air Traffic Organization (ATO) has completed its SMS implementation, but other FAA SMS efforts are in the early stages. FAA business lines, such as the Aviation Safety Organization (AVS) and the Office of Airports (ARP), have SMS guidance and plans largely in place and have begun to integrate related practices into their operations, but many implementation tasks remain incomplete, and officials and experts project that full SMS implementation could take many years. There are a number of key practices that can help agencies plan for and efficiently implement new projects, including large scale transformations such as FAA's SMS implementation, and FAA has many in place. For example, FAA has support from top leadership and a clear project mission. However, FAA has only partially addressed other key practices such as developing a project plan to track SMS implementation, and FAA has not addressed performance-related practices such as establishing SMS performance measures or links between employees' performance standards and SMS. Several challenges remain that may affect FAA's ability to effectively implement SMS. FAA is taking steps to address some challenges and stakeholder concerns, but challenges related to data sharing and data quality; capacity to conduct SMS-based analyses and oversight; and standardization of policies and procedures could negatively affect FAA's efforts to implement SMS in a timely and efficient manner. Further, FAA officials stated that SMS implementation will require some skills that agency employees do not have, but FAA has not yet assessed the skills of its workforce to identify specific gaps in employee expertise. In addition, while existing federal law protects any data collected for SMS, any data airports collect could be subject to state-specific Freedom of Information Act laws, a gap that could create a disincentive for airports to fully participate in SMS implementation. GAO recommends that FAA develop systems to: track SMS implementation, evaluate employee performance as it relates to SMS, and assess whether SMS meets its goals and objectives; conduct a workforce analysis for SMS; and consider strategies to address airports' data concerns. The Department of Transportation agreed to consider the recommendations and provided clarifying information about SMS, which GAO incorporated.
The Unfunded Mandates Reform Act of 1995 was enacted to address concerns expressed about federal statutes and regulations that require nonfederal parties to expend resources to achieve legislative goals without being provided funding to cover the costs. Although UMRA was intended to curb the practice of imposing unfunded federal mandates, the act does not prevent Congress or federal agencies from doing so. Instead, it generates information about the potential impacts of mandates proposed in legislation and regulations. In particular, title I of UMRA requires Congressional committees and the Congressional Budget Office (CBO) to identify and provide information on potential federal mandates in certain legislation. Title I also provides opportunities for Members of Congress to raise a point of order when covered mandates are proposed for consideration in the House or Senate. Title II of UMRA requires federal agencies to prepare a written statement identifying the costs and benefits of federal mandates contained in certain regulations and consult with affected parties. It also requires action of the Office of Management and Budget (OMB), including establishing a program to identify and test new ways to reduce reporting and compliance burdens for small governments and annual reporting to Congress on agencies’ compliance with UMRA. Title III of UMRA required the Advisory Commission on Intergovernmental Relations to conduct a study reviewing federal mandates. Title IV establishes limited judicial review regarding agencies’ compliance with certain provisions of title II of the act. UMRA generally defines a federal mandate as any provision in legislation, statute, or regulation that would impose an enforceable duty on state, local, or tribal governments (intergovernmental mandates) or the private sector (private sector mandates) or that would reduce or eliminate the funding authorized to cover the costs of existing mandates. However, some other definitions, exclusions, and thresholds in the act apply and may vary according to whether the mandate is in legislation or a rule and whether a provision imposes an intergovernmental or private sector mandate. For example, UMRA includes definitional exceptions for enforceable duties that are conditions of federal financial assistance or that arise from participation in a voluntary federal program. UMRA also excludes certain types of provisions, such as any provision that enforces Constitutional rights of individuals, from its application. When, in aggregate, the provisions in proposed legislation or regulations equal or exceed UMRA’s thresholds, other provisions and analytical requirements in UMRA apply. For legislation, the thresholds are direct costs (in the first 5 fiscal years that the relevant mandates would be effective) of $50 million or more for intergovernmental mandates and $100 million or more for private sector mandates, while the threshold for regulations is expenditures of $100 million or more in any year. GAO has issued two previous reports addressing UMRA and federal mandates. In our May 2004 report we provided information and analysis regarding the identification of federal mandates under titles I and II of UMRA. In that report, we described the complex procedures, definitions, and exclusions under UMRA for identifying federal mandates in statutes and rules. 
For calendar years 2001 and 2002, we also identified those statutes and rules that contained federal mandates under UMRA and provided examples of statutes and rules that were not identified as federal mandates but that affected parties might perceive as "unfunded mandates" and the reasons these statutes and rules were not federal mandates under UMRA. In February 1998, we reported on the implementation of title II. In that report, we found that UMRA appeared to have had little effect on agencies' rulemaking and that most significant rules promulgated were not subject to title II requirements. Both of these reports had relatively consistent findings—that only a limited number of statutes and rules have been identified as federal mandates under UMRA. UMRA's coverage, which includes its numerous definitions, exclusions, and exceptions, was the issue most frequently commented on by parties from all five sectors (see table 1). Most parties from the state and local government, federal, business, and academic/think tank sectors viewed UMRA's narrow coverage as a major weakness that leaves out many federal actions with potentially significant financial impacts on nonfederal parties. Conversely, a few parties from the public interest advocacy and academic/think tank sectors considered some of the existing exclusions important or identified UMRA's narrow scope as one of the act's strengths. While there was no clear consensus across sectors on how to address coverage, some suggestions designed to expand UMRA's coverage had support from parties across and within certain sectors. UMRA does not apply to legislative provisions that cover constitutional rights, discrimination, emergency aid, accounting and auditing procedures for grants, national security, treaty ratification, and certain parts of Social Security. CBO estimates that about 2 percent of the bills that it reviewed from 1996 to 2004 contained provisions that fit within UMRA's exclusions. All sectors other than the public interest advocacy sector said they viewed UMRA's narrow coverage as a significant weakness because it precludes an official accounting of the costs to nonfederal parties associated with many federal actions. This issue was described by one party who noted that any of the exclusions, as well as the exemptions, in UMRA may be justified in isolation, but suggested that it is their cumulative impact that raises concerns. Some parties from the business, academic/think tank, public interest advocacy, and state and local government sectors made general comments on the clarity of certain UMRA definitions and exemptions and whether this results in different interpretations across agencies. Parties who said UMRA's coverage was narrow often cited UMRA's definitional exceptions for duties that are conditions of federal financial assistance (such as grant programs) or that arise from participation in voluntary federal programs, saying some laws enacted under these exceptions imposed significant mandates. A prominent example of a grant condition excluded from UMRA cited by parties in the state and local government sector is the No Child Left Behind Act of 2001, which places various requirements on states and localities, including that their schools measure the progress of students through annual tests based on challenging academic standards and that teachers are highly qualified as defined in the act.
Other parties commented about various other definitional issues involving the exclusion of certain types of costs (indirect costs) and UMRA's cost thresholds for legislative and regulatory mandates, which result in excluding many federal actions that may significantly impact nonfederal entities. Still others cited the general exclusions for appropriations and other legislation not covered by the act and for rules issued by independent regulatory agencies, which are also not covered by UMRA. CBO estimates that 5 of the 8 laws containing federal mandates (as defined by UMRA) that it did not review before enactment were appropriations acts. A few parties from the academic/think tank and state and local government sectors commented about UMRA's lack of coverage for certain tax legislation that may reduce state or local revenues. Even though federal tax changes may have direct implications for state tax revenue for the majority of states whose income tax is directly linked to the federal tax base, these impacts are not considered mandates under UMRA because states have the option of decoupling their tax systems from federal law. Finally, parties from the state and local government sector also identified concerns about gaps in UMRA's coverage of federal preemption of state and local authority. Although some preemptions are covered by UMRA, such as those that preempt state or local revenue-raising authority, they are covered only for legislative actions and not for federal regulations. According to CBO's 2005 report on unfunded mandates, "Over half of the intergovernmental mandates for which CBO provided estimates were preemptions of state and local authority."
Despite the widespread view in several sectors that UMRA's narrow coverage leaves out federal actions with potentially significant impacts on nonfederal entities, there was less agreement among parties about how to address this issue. The options ranged from general to specific, but those most frequently suggested were the following: Generally revisit, amend, or modify the definitions, exceptions, and exclusions under UMRA and expand its coverage. Clarify UMRA's definitions and ensure their consistent implementation across agencies to ensure that all covered provisions are being included. Change the cost thresholds and/or definitions that trigger UMRA, for example, by lowering the threshold for legislative or executive reviews and expanding cost definitions beyond direct costs to cover indirect costs as well. Eliminate or amend the definitional exceptions for enforceable duties that are conditions of federal financial assistance or that arise from participation in voluntary federal programs. Expand UMRA coverage to all preemptions of state and local laws and regulations, including nonfiscal preemptions of state and local authority. The level of agreement for each suggested option varied across sectors. The first option came from parties in every sector except public interest advocacy. Although parties representing businesses did not comment on preemption during our data collection, the business sector has generally been in favor of federal preemptions for reasons such as standardizing regulation across state and local jurisdictions. (See appendix V for a more complete list of suggested options by theme.) The results of our January symposium confirmed support for generally revisiting and expanding UMRA coverage. See appendix VI for a list of the symposium results. The symposium participants also raised a cautionary note about potential consequences of some of the suggested options.
For example, if UMRA coverage were expanded by changing exclusions and limitations, lowering or eliminating UMRA thresholds, or including regulations issued by independent agencies, the workloads of CBO and the regulatory agencies would increase substantially.
Another issue raised by a few parties that evoked some reaction at the symposium was whether private sector mandates should be included in UMRA. Parties from the federal agency, academic/think tank, and public interest advocacy sectors questioned whether they should be. According to one party, the inclusion of the private sector seems contrary to the intent of the act, whose focus they viewed as intergovernmental mandates. Parties from the state and local government and academic/think tank sectors indicated during our symposium that they would not support dropping private sector mandates from UMRA. They pointed out, for example, that intergovernmental and private sector mandates can be interrelated, in particular that businesses, which can be affected by private sector mandates, are a key revenue source for state and local governments.
Contrary to the view that UMRA's coverage was too narrow, some parties from the academic/think tank and public interest advocacy sectors viewed UMRA's narrow scope as one of its primary strengths. Rather than expanding UMRA's coverage, these parties said that it should be kept narrow. One party expressed concern that eliminating any of UMRA's exceptions and exclusions might make the identification of mandates less meaningful, saying, "The more red flags run up, the less important the red flag becomes." CBO reports that, between 1996 and 2004, of the 5,269 intergovernmental statements it prepared, 617 identified mandates; of the 5,151 private sector statements, 732 identified mandates. Of the mandates identified by CBO, 9 percent of the intergovernmental mandates and 24 percent of the private sector mandates had costs that would exceed the thresholds. Specifically, these parties argued in favor of maintaining UMRA's exclusions or expanding them to include federal actions regarding public health, safety, environmental protection, workers' rights, and the disabled. Unlike the parties that viewed UMRA's exclusions as too expansive, some parties from the public interest advocacy sector and the academic/think tank sector focused on the importance of the existing exclusions, particularly those dealing with constitutional and statutory rights, such as those barring discrimination against various groups. During our January symposium, parties from multiple sectors took issue with any suggestion that the constitutional and statutory rights exclusions in UMRA be repealed. One party stated that the concept of unfunded mandates should not apply to laws intended to protect such fundamental rights. Another party suggested that the narrow scope of UMRA was generally useful, noting that, "One of the strengths of UMRA has been that it doesn't try to be more ambitious than it needs to be." Conversely, parties from most sectors opposed further limiting UMRA's coverage.
Enforcement of UMRA's provisions was the second most frequently cited issue, although far fewer parties from each sector commented on it. Parties across and within sectors had differing views on both the mechanisms provided in the law itself and the level of effort exercised by those responsible for implementing the provisions.
With regard to Congressional procedures, some parties observed that the opportunity provided for lawmakers to raise a point of order had a deterrent effect, while others described it as ineffective or underutilized. With regard to federal regulations, some questioned the agencies' compliance with the provisions of the act. Finally, parties had mixed views about the judicial review provision under title IV, which provides limited remedies against agencies that fail to prepare UMRA statements, among other things. Parties from various sectors also suggested options to address the issues raised about UMRA enforcement, but none was suggested by parties from a majority of sectors.
One of the primary tools used to enforce UMRA requirements in title I is the point of order, a parliamentary objection raised by a member of Congress in committee or on the floor of either chamber when a rule of procedure has been or will be violated. Once raised, an UMRA point of order prevents legislative action on a covered mandate unless overcome by a majority vote. The point of order, which provides members of Congress the opportunity to raise challenges to hinder the passage of legislative provisions containing an unfunded intergovernmental mandate, was the most frequently cited enforcement issue, with varying views about its effectiveness. Those representing the state and local government and federal agency sectors said that the point of order should be retained because it has been successful in reducing the number of unfunded mandates by acting as a deterrent to their enactment, without greatly impeding the process. One party commented that the threat of a point of order against a legislative proposal has caused members and staff to rethink and revise many proposals that would have likely imposed unfunded federal mandates on the states in excess of the threshold set in the law. This is consistent with the information presented in our May 2004 report on UMRA, which quoted the Chairman of the House Rules Committee as saying that UMRA "has changed the way that prospective legislation is drafted…" We also reported that "although points of order are rarely used, they may be perceived as an unattractive consequence of including a mandate above cost thresholds in proposed legislation."
Conversely, parties primarily from the academic/think tank, business, and federal sectors did not believe the point of order had been effective in preventing or deterring the enactment of mandates. Moreover, others commented about its infrequent use. In the last 10 years, at least 13 points of order under UMRA were raised in the House of Representatives and none in the Senate. Only 1 of the 13, regarding a proposed minimum wage increase as part of the Contract with America Advancement Act in 1996, resulted in the House voting to reject consideration of a proposed provision. Some parties said the point of order needs to be strengthened by making it more difficult to defeat. One suggested revision was to require a three-fifths vote in Congress, rather than a simple majority, to overturn a point of order. Proponents believed this change would strengthen the "institutional salience of UMRA" and ensure that no mandate under UMRA could be enacted if it was supported only by a simple majority. On March 17, 2005, the Senate approved the fiscal year 2006 budget resolution, which included a provision that would increase to 60 the number of votes needed to overturn an UMRA point of order in the Senate.
As of March 28, 2005, the fiscal year 2006 budget resolution was in conference negotiations with the House of Representatives.
Commenting parties from the state and local government, business, and federal agency sectors questioned some federal agencies' compliance with UMRA requirements and the effectiveness of enforcement mechanisms to address this perceived noncompliance. They mentioned the failure of some agencies to consult with state, local, and tribal governments when developing regulations that may have a significant impact on nonfederal entities, which is discussed later in the report. Likewise, at least one party from each of the business, federal, and state and local government sectors expressed concerns about the lack of accurate and complete information provided by federal agencies, which are responsible for determining whether a rule includes a mandate and whether it exceeds UMRA's thresholds. The perceived lack of compliance with certain UMRA requirements generated several suggested changes to UMRA to address this problem. The only suggestion that had support across parties from multiple sectors, however, was to create a new office within OMB to calculate the cost estimates for federal mandates in regulations. These parties suggested that this office have responsibilities similar to those of the State and Local Government Cost Estimates Unit at CBO. However, the parties did not specify whether the office should exist as an office within OMB's Office of Information and Regulatory Affairs or exist separately.
A few parties from the federal and academic/think tank sectors commented that UMRA's judicial review provision, because of its limited focus, does not provide meaningful relief or remedies if federal agencies have not complied with the requirements of UMRA. In general, title IV subjects agency compliance or noncompliance with certain provisions of the act to judicial review. Specifically, the judicial review is limited to requirements that pertain to preparing UMRA statements and developing federal plans for mandates that may significantly impact small governments. However, if a court finds that an agency has not prepared a written statement or developed a plan for one of its rules, the court can order the agency to do the analysis and include it in the regulatory docket for that rule, but the court may not block or invalidate the rule. The few parties commenting about judicial review suggested expanding it to provide more opportunities for judicial challenges and more effective remedies when noncompliance with the act's requirements occurs. However, one party from the public interest advocacy sector said that a benefit of the existing judicial review is that the remedy for noncompliance is to provide the required statement rather than to impede the regulatory process. Similarly, when this issue was discussed at the symposium, a few parties primarily from the academic/think tank and public interest advocacy sectors said that efforts to limit or stop implementation of mandates through legal action might be unwarranted because, as noted earlier, UMRA was not intended to preclude the enactment of federal mandates. They were concerned about legal actions being used to slow down the regulatory process through litigation.
Parties from all sectors also raised a number of issues about the use and usefulness of UMRA information (e.g., has it helped decrease the number of mandates?), UMRA's analytic framework, and federal agency consultations with state, local, and tribal governments, but there was no consensus in their views about how these issues should be addressed. The parties provided mixed but generally positive views about the use and usefulness of UMRA information; the only option that attracted multiple supporters was a suggestion for a more centralized approach for generating information within the executive branch. Parties also provided a number of comments about the UMRA provisions that establish the analytic framework for cost estimates, which generated a few suggested options. UMRA's consultation provision generated the fewest comments, which focused primarily on a general concern about a perceived lack of consistency across agencies when consulting with state and local governments.
Parties from all sectors commented about the use and usefulness of information generated by UMRA. While most of the comments about information generated under title I were positive, some parties raised concerns about the quality and usefulness of some of the information and suggested improvements. While many of the comments were about UMRA information in general, most of the positive comments from a majority of the sectors were specific to the usefulness of information generated under title I by CBO in particular. For example, one party, who characterized UMRA as a success, credited the act with bringing unfunded mandates to the forefront of Congressional debates and slowing down the enactment of new unfunded mandates. Parties from several sectors praised the value and quality of CBO's analyses of mandates and the attention that CBO's cost estimates under UMRA bring to the fiscal effects of federal legislation. However, some parties from the academic/think tank, public interest advocacy, and state and local governments sectors had more mixed views about the usefulness of information generated under UMRA. One party characterized the information as "marginally effective" in reducing costly and cumbersome rules, and a few parties shared similar views about legislative mandates. Specifically, some of these parties commented that while the information may increase awareness of unfunded or underfunded mandates, UMRA has been less successful in actually changing legislation to reduce the number of mandates. The parties from various sectors suggested several options to improve the use and usefulness of information under UMRA, but there was no agreement across or within sectors on any particular option. The only option suggested by more than two parties was to provide for a centralized review of regulatory mandates. (As discussed previously, this was also suggested as a way to improve UMRA enforcement.)
Parties from all sectors agreed that UMRA's provisions work to constrain the analysis of mandate costs, which may affect the quality of the estimates. For example, parties from the academic/think tank, federal, and state and local governments sectors commented that the act excludes the consideration of the indirect costs of mandates, which can be significant for regulated entities. Moreover, others commented that certain definitions under UMRA are not clearly understood or easily interpreted, which can affect estimates.
For example, some parties said that terms such as "federal mandates" and "enforceable duty" are not clearly defined and thus open to interpretation by the agencies. Others noted that there can be differences in the cost analyses for legislative and regulatory mandates in areas such as making determinations about whether a mandate exceeds UMRA cost thresholds when ranges are used. For example, CBO has developed its own criteria for applying the act and has extended its general practice of providing point estimates for mandates rather than ranges when possible, as it does for its federal budget estimates. The federal agencies are left to their own discretion in deciding whether to use estimate ranges for costs and how to apply them to the threshold. In one case, which we observed in a prior report, the U.S. Department of Agriculture (USDA) appeared to have developed a range of costs associated with implementing its rule on retained water in raw meat and poultry products. However, USDA provided only a lower bound estimate of $110 million and did not quantify median or upper bound cost estimates. Because the lower bound was so close to the inflation-adjusted threshold of $113 million, it is reasonable to assume that a median or upper bound estimate would have exceeded the threshold and that the rule would have been identified as containing a mandate at or above UMRA's threshold.
Some parties expressed frustration with the inherent uncertainties of estimating mandate costs. In particular, some parties commented that cost estimates are sometimes difficult or not feasible to calculate because they rely on future actions. That is, CBO sometimes finds that cost estimates for legislative mandates are difficult or not feasible to prepare, which can happen because CBO's analysis is generally done before bills are approved and before the regulations needed to implement them have been developed. For example, in 2004, CBO reported that the costs of 2 of the 66 intergovernmental mandates and 10 of the 71 private sector mandates could not be estimated. In many of these cases, CBO reported that the costs could not be determined because it had no basis for predicting what regulations would be issued to implement them. The parties offered a variety of suggested options to address their concerns about estimation, but only a few had support across or within the sectors. There was, however, some overlap between options suggested to address UMRA coverage and enforcement issues and options to address estimation issues. For example, some parties suggested revising UMRA's cost or expenditure definitions and thresholds, including revisiting the exclusion of indirect costs from UMRA estimates, which may affect both the actual estimation process and whether legislation or a regulation will be identified as containing a federal mandate at or above UMRA's thresholds. Parties from several sectors suggested examining or monitoring the implementation of UMRA's estimation process for federal agencies' regulations through an independent agency.
A few parties had comments regarding UMRA's requirement that federal agencies consult with elected officers of state, local, and tribal governments (or their designees) on the development of proposals containing significant intergovernmental mandates. Parties from all five sectors commented on the consultation provisions, and these comments generally focused on the quality of consultations across agencies, which was viewed as inconsistent.
A few parties commented that UMRA had improved consultation and collaboration between federal agencies and nonfederal levels of government. A few commenters also raised concerns that UMRA's consultation provisions focus on state, local, and tribal governments but exclude other constituencies that might be affected by proposed federal mandates. While several parties primarily from the state and local government sector suggested options for improving consultation, the only one mentioned by more than two parties was a suggestion for agencies to replicate CBO's consultation approach for legislative mandates, which some parties characterized as collaborative.
Parties from all sectors also raised a number of broader issues about federal mandates, namely the design, funding, and evaluation of federal mandates, and suggested a variety of options. Specific comments about the design and funding of federal mandates varied across sectors. Most often, the comments focused on a perceived mismatch between the costs of federal mandates and the amount of federal funding provided to help carry them out. Some parties from several sectors suggested that the problem they are concerned about is not so much unfunded federal mandates as underfunded mandates. When this issue was addressed at the symposium, a few parties pointed out that it is broader than UMRA, involving such questions as how to address the imbalance between mandate costs and available resources, how to generate the resources to meet these needs, and how to address the incentives for the federal government to "over leverage" federal funds by attaching (and often revising) additional conditions for receiving the funding. Some parties also raised concerns about the varying cost of some mandates across various affected nonfederal entities, mismatches between the funding needs of parties and federal funding formulas, and the effects of the timing of federal actions and program changes on nonfederal parties.
Parties, primarily from the academic/think tank sector, suggested a wide variety of options to address their concerns, but there was no broad support for any option. Parties across four sectors suggested providing waivers or offsets to reduce the costs of the mandates on affected parties or "off ramps" to release them from some responsibilities to fulfill the mandates in a given year if the federal government does not provide sufficient funding. However, when this was discussed at the symposium, parties said that compliance with federal mandates should not be made contingent on full federal funding. They said, for example, that it is an appropriate role for the federal government to require compliance with certain mandates even if they are not fully funded. These parties also said that state and local governments do not always comply with mandates under existing laws. Some of the symposium participants also pointed out potential pitfalls of "off ramps," noting that they could actually provide an incentive to underfund mandates and that it might be difficult to determine who would decide whether federal funding covers the costs of a mandate in a given year and how that determination would be made. During the symposium, participants also discussed the option of building into the design of federal mandates "look back" or sunset provisions that would require retrospective analyses of the mandates' effectiveness and results.
About half the parties, representing all sectors except federal agencies, commented on the evaluation of federal mandates and offered suggestions for improving the evaluation of mandates, whether covered by the act or not. This issue received the most focus from parties in the academic/think tank sector, who felt that the evaluation of federal mandates was especially important because there is a lack of information about the effects of federal mandates on affected parties. Four issues emerged from the comments provided by the various sectors concerning evaluations. First, parties from four of the five sectors commented about the lack of evaluation of the effectiveness (results) of mandates and the implications of mandates, including benefits, nonfiscal effects, and costs. According to some parties, if mandate-related evaluations were conducted more often, policy decisions regarding mandates, both specifically and collectively, could meaningfully consider mandate costs, benefits, and other relevant factors. Second, parties primarily in the public interest advocacy and business sectors expressed concerns about the accuracy and completeness of mandate cost estimates. While they agreed that estimating costs was difficult, they felt examining the quality of the estimates was necessary. Third, parties primarily from the academic/think tank and state and local governments sectors raised issues about the impacts and costs of federal mandates. They noted that while much attention has been focused on the actual costs of mandates, it is important to consider the broader implications of federal mandates for affected nonfederal entities beyond direct costs, including a wide range of issues such as opportunity costs, forgone revenues, shifting priorities, and fiscal trade-offs. Finally, a few parties were concerned about whether some agencies have compromised the effectiveness of certain regulations by designing them to ensure that their costs do not meet or exceed UMRA's cost threshold.
Parties across the sectors suggested that various forms of retrospective analysis are needed for evaluating federal mandates after they are implemented. First, parties in all sectors except the federal sector suggested retrospective analyses of the costs and effectiveness of mandates, including comparing them to the estimates and expected outcomes. Second, parties in the state and local sector suggested conducting retrospective studies on the cumulative costs and effects of mandates, that is, the impact of various related federal actions, which, when viewed collectively, may have a substantial impact even though no single action exceeds UMRA's thresholds. Third, parties in the academic/think tank sector suggested examining local and regional impacts of mandates. According to one party, mandate costs could have a significant effect on a particular state or region without exceeding UMRA's overall cost threshold. Finally, parties in the academic/think tank sector suggested analyzing the benefits of federal mandates, when appropriate, not just costs.
As Congress begins to reevaluate UMRA on its 10th anniversary, some of the issues raised by the various sectors we contacted may provide a constructive starting point. While the sectors provided a wide variety of comments, their views were often mixed across and within certain sectors. Given the wide range of opinions, it will be challenging to find workable solutions that will be broadly supported across sectors that often have differing interests and perspectives.
Although parties from various sectors generally focused on the areas of UMRA and federal mandates that they would like to see fixed, they also recognized positive aspects and benefits of UMRA. In particular, they commented about the attention UMRA brings to potential consequences of federal mandates and how it serves to keep the debate in the spotlight. We also found it notable that no one suggested repealing UMRA. One challenge for Congress and other federal policymakers is to determine which issues and concerns about federal mandates can be best addressed in the context of UMRA and which ones are best considered as part of more expansive policy debates.
When considering changes to UMRA itself, one issue stood out: UMRA's narrow coverage. Based on the comments, this was clearly an issue for certain parties within all sectors. The various definitions, exceptions, and exclusions were a source of frustration for many who responded to our review, especially those most affected by federal mandates. Although the parties in most sectors generally agreed that UMRA's coverage should be expanded given its narrow focus, parties in the public interest advocacy sector disagreed. Even among those that believed UMRA's coverage was too narrow, identifying suggested options that had broad-based support was challenging. Most parties simply suggested revisiting, amending, or modifying UMRA to expand coverage. Others provided more specific suggestions, including expanding UMRA to cover conditions of financial assistance, such as grants, and all preemptions of state and local authority. However, certain proposed changes, such as dropping the exclusions for civil rights-related provisions, were strongly opposed by certain parties in the public interest advocacy and academic/think tank sectors. Likewise, parties from the business and state and local governments sectors opposed any further narrowing of UMRA.
On broader policy issues concerning federal mandates, most parties supported the need for more evaluation and research on federal mandates. More retrospective analysis to ensure that mandates are achieving their desired goals could enable policymakers to better gauge the mandates' benefits and costs, determine whether the mandates are providing the desired and expected results at an acceptable cost, and assess any unanticipated effects from the implementation of mandate programs. Such analysis could be done not only for individual mandates but also for the cumulative, aggregate costs and other impacts that major mandates may be having on the budgetary priorities of regulated entities, such as state or local governments. Such information could help provide additional accountability for federal mandates and could lead to better decisions regarding the design and funding of mandate programs. Some suggested that the design of mandates could incorporate "look back" or sunset provisions that would require retrospective analyses of mandate results periodically.
As we move forward, the unfunded mandates issue raises broader questions about the assignment of fiscal responsibilities within our federal system. The federal government, as well as the states, faces serious fiscal challenges in both the short term and the longer term. In February 2005, we issued our report on 21st century challenges.
Given the long-term fiscal challenges facing the federal budget as well as numerous other geopolitical changes challenging the continued relevance of existing programs and priorities, we called for a national debate to review what the government does, how it does business, and how it finances its priorities. Such a reexamination should usefully consider how responsibilities should be allocated and shared across the many nonfederal entities in our system as well. As we rethink the federal role, many in the state and local or business sector would view unfunded mandates as among the areas warranting serious reconsideration. Unfunded mandates potentially can weaken accountability and remove constraints on decisions by separating the enactment of benefit programs from the responsibility for paying for these programs. Similar objections, however, could also be raised over 100 percent federal financing of intergovernmental programs, since this could vitiate the kind of fiscal incentives necessary to ensure proper stewardship at the state and local level for shared programs.
Reconsideration of responsibilities begins with the observation that, for most major domestic programs, costs and administrative responsibilities are shared and widely distributed throughout our system. The fiscal burdens of public policies in areas ranging from primary education to homeland security are the joint responsibility of all levels of government and, in some cases, the private sector as well. As we reexamine the federal role in our system, there is a need to sort out how responsibilities for these kinds of programs should be financed in the future. Sorting out fiscal responsibilities involves a variety of considerations. Issues to be considered include the extent to which the benefits of particular programs or services are broadly distributed throughout the nation and the fiscal capacity of the various levels of government and other entities to finance their share of responsibilities from their own resources, both now and over the longer term. The following kinds of questions can be raised as part of this reexamination of fiscal responsibilities: What governmental activities should fall entirely within the purview of the federal or state/local governments, and what activities should be shared responsibilities? If the federal government "mandates" activities to be undertaken by state/local governments, under what circumstances is it appropriate for the federal government to finance them, and what share of the costs should be borne by federal and nonfederal sources? Are the potential revenue sources available to the various levels of government adequate to finance their responsibilities?
Because issues involving UMRA and unfunded mandates are part of a broader public policy debate to be had by Congress, we are making no recommendations in this report. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from the date of this letter. We will then send copies of this report to the Ranking Member, Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia, Committee on Homeland Security and Governmental Affairs, U.S.
Senate; the Chair and Ranking Member of the Government Reform Committee, House of Representatives; the Directors of OMB and CBO; and others on request. It will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me or Tim Bober at (202) 512-6806 or williamso@gao.gov or bobert@gao.gov. Key contributors to this report were Tom Beall, Kate Gonzalez, Boris Kachura, Paul Posner, and Michael Rose.
For this report, you asked us to provide more information and analysis regarding the Unfunded Mandates Reform Act of 1995 (UMRA) and federal mandates in general. Specifically, you asked us to consult with a diverse group of knowledgeable parties familiar with the act and to report their views on (1) the significant strengths and weaknesses of UMRA as the framework for addressing federal mandates issues, including why the parties believed the issues they identified were significant, and (2) potential options suggested for reinforcing the strengths or addressing the weaknesses. For both of those central objectives, you also asked that we report, to the extent possible, on the level of agreement among the various individuals and organizations, which we refer to as "parties" throughout the report.
To address our objectives, we primarily used a structured data collection approach to obtain feedback from a diverse set of organizations and individuals knowledgeable about the implementation of UMRA and/or federal mandate programs. To identify prospective parties, we first built upon our knowledge of parties identified through our past work on unfunded mandates and conducted extensive literature reviews on federal mandates issues. Second, as we contacted the individuals, we asked each of them to recommend other knowledgeable parties for us to contact. In total, 52 individuals and organizations participated in the review. (See app. II for the list of organizations and individuals who provided information responding to our research questions.) The parties provided us their input through a variety of means, including group meetings, individual interviews, and written responses. We sought and obtained viewpoints from organizations and individuals across a broad spectrum of interested communities that we classified into five sectors for purposes of structuring our analyses. These sectors were: academic centers and think tanks; businesses; federal agencies (including executive and legislative branch agencies); public interest advocacy groups; and state and local governments. (For a comprehensive list of their comments and suggested options, see appendix IV, which is available as an electronic supplement to this report.)
We reviewed all the information provided by those various parties and organized it on the basis of the topics they addressed. To facilitate analysis and discussion of the considerable amount of information provided by the sources, we first itemized the input, to the extent possible, into a set of discrete, separable points. In some instances, if a party's comments were part of a more lengthy discussion addressing a larger issue, we kept the material together to avoid losing the context of the input. Next, we identified seven broad topical areas or themes, which we used to classify the specific comments, observations, issues, and options that were provided: 1. uses and usefulness of information UMRA generates, 2. UMRA coverage of federal actions, 3. UMRA enforcement, 4. UMRA's analytic framework, 5.
UMRA consultation requirements, 6. design and funding of federal mandates, and 7. evaluation and research needs regarding federal mandates. These themes were further characterized as falling into one of two sets. The first five themes captured input specifically on UMRA and its provisions, and the remaining two themes captured input that was focused on issues about federal mandates in general. We then analyzed and independently coded the resulting master table of the parties' input using the themes listed above. Any differences in the coding were discussed, and a team consensus code was determined. If a party's input touched on more than one theme (for example, options might have been suggested regarding both enforcement of UMRA and how to improve estimates), we assigned multiple codes. Therefore, items with multiple codes are repeated under each relevant theme subsection in this document. This coding into themes was not intended to be precise or to limit suggested options to only certain topics. The coding was simply intended to help group together items that included input relevant to a given topic. To ensure that our organization and characterization of the information that the parties provided accurately reflected their views, we provided each contributor an opportunity to review our summary of their input. They generally concurred with the accuracy of our characterization of their views and, in a few instances, supplemented or clarified their original comments by providing additional information, which we incorporated into our master list of parties' responses. (Again, see app. IV, an electronic supplement, for a complete list of the information provided by all of the contributing parties.)
We supplemented the information obtained through this broad data gathering effort with a half-day symposium held at GAO on January 26, 2005, involving 26 experts from across all five sectors. (See app. III for a list of the symposium participants.) The overall objectives of the symposium were to provide an opportunity for the participants from different sectors and viewpoints to engage each other, to discuss in more depth the issues and options previously identified, to identify additional options for augmenting strengths or addressing weaknesses, and to elaborate on the relative priorities of the options suggested. To meet these objectives in the limited time available, the discussions at the symposium were structured to focus mainly on the three themes that appeared to attract the greatest number and/or variety of comments during our initial data collection, as well as to address themes from both the UMRA-specific and general mandate sets: UMRA coverage, UMRA enforcement, and the design and funding of federal mandate programs. To encourage open and candid input from the various parties, we are not attributing any input from either our general data collection effort or the symposium to specific organizations or individuals.
While our initial data collection effort and the symposium collectively yielded information of considerable breadth and depth on UMRA and UMRA-related issues and options, the information we gathered only represents the views of those organizations and individuals who chose to participate in this review. For this and related reasons, this information provides only a rough gauge of the prevalence of opinion about given issues or options or the extent to which there is agreement among and within particular sectors about those issues and options.
Despite our efforts to solicit a comparable level of input from the different sectors, fewer identified parties from some sectors than from others chose to participate in our review. When parties who chose not to participate recommended other contacts that they considered knowledgeable about UMRA and mandates issues, we sought the participation of the recommended contacts, which allowed us to partially mitigate the extent of nonparticipation. Also, given the variety of methods and sources used to collect the views, we structured our analyses of prevalence and agreement to avoid double counting the same response provided by different representatives of an organization at different points in time. We did this by categorizing the input on an identified issue or option that we received from a specific entity, whether it came from multiple sources or a single source, as the view of a party. To illustrate this categorization process, a reference to "one party" may represent the views of many representatives of a given organization obtained through a number of meetings or interviews, while another such "one party" reference may represent the views of one person through a single written response. Similarly, in examining the comments classified under each theme, if the same issue was identified as a strength by one party and a weakness by another party, we counted the comments as applying to the same issue. While these steps help address some of the difficulties in examining the prevalence of views and agreement between parties, the resulting assessment is still very imprecise. We conducted our review from August 2004 through February 2005 in Washington, D.C., in accordance with generally accepted government auditing standards.
1. American Association of People with Disabilities (AAPD)
2. American Federation of State, County, and Municipal Employees (AFSCME)
3. American Public Power Association (APPA)
4. The Arc of the United States
5. Association of Metropolitan Sewerage Agencies (AMSA)
6. Center on Budget and Policy Priorities (CBPP)
7. Congressional Budget Office (CBO)
8. Congressional Research Service (CRS)
9. Council of State Governments (CSG)
10. Federal Funds Information for States (FFIS)
11. International City/County Management Association (ICMA)
13. National Association of Counties (NACO)
14. National Association of Protection and Advocacy Systems (NAPAS)
15. National Association of State Budget Officers (NASBO)
16. National Conference of State Legislatures (NCSL)
17. National Governors Association (NGA)
18. National League of Cities (NLC)
19. Natural Resources Defense Council (NRDC)
20. Office of Advocacy, Small Business Administration
21. Office of Management and Budget (OMB)
23. Regulatory Brown Bag (regulatory staff from the Departments of Justice, Labor, Transportation, and Veterans Affairs, the Environmental Protection Agency, and the Federal Communications Commission)
24. U.S. Chamber of Commerce
25. U.S. Conference of Mayors (USCM)
1. Keith Bea, Congressional Research Service
2. Richard Belzer, Regulatory Checkbook
3. Neil Bergsman, State of Maryland
4. Richard Beth, Congressional Research Service
This e-supplement is available on our Web site at http://www.gao.gov/cgi-bin/getrpt?GAO-05-497SP. Once the strengths, weaknesses, and options were identified and reviewed, GAO developed a thematic framework for classifying and organizing this information. Below is a summary list of the options provided by participating parties organized by theme.
The list of options presented under each theme is intended to be a complete accounting of the suggested options associated with that theme. The lists are not in any particular order and do not reflect the relative frequency with which participating parties identified the same or similar option. Options appear on these lists if mentioned by even one participating party. See appendix I for further information about the procedures followed in the organization of this information and associated qualifications concerning its use. See the appendix IV e-supplement for a detailed listing of options as suggested by participants as part of their response to perceived strengths and weaknesses. Provide for more centralized review of regulatory mandates. Analyze benefits, as well as costs, of mandates. Apply the Data Quality Act criteria to information generated under UMRA. Congress should track "unfunded mandates," defined broadly. Congress and OMB should develop more expertise on regulations and how to govern them. The most important point is to clarify in advance what consequences federal actions will have. Although additional program evaluation of federal mandates would help, this was not the initial intent of UMRA. Research into the scope and scale of unfunded mandates will not be informative unless and until the law has adequate incentives for compliance and accounting. It would be useful for GAO to provide an annual report documenting the total budgetary shortfall of unfunded mandates. Make the potentially affected nonfederal parties aware when there is a finding that proposed legislation contains a mandate. Enhance the work of CBO's State and Local Government Cost Estimates Unit by providing the unit more timely access to bills and joint resolutions that may impose unfunded federal mandates. Generally amend, modify, or revisit the definitions, exceptions, and exclusions under UMRA and "close loopholes." Eliminate/amend exceptions for conditions of federal financial assistance and participation in voluntary programs. Expand UMRA to cover appropriations bills and other legislation currently not covered. Expand UMRA to cover changes in conditions of existing programs. Cover rules issued by independent agencies. Amend UMRA to include federal tax actions that reduce state revenues. Amend UMRA to include federal preemptions. Amend/eliminate the national security exclusions. Amend/eliminate the civil rights exclusions. Change cost thresholds and definitions for purposes of identifying mandates that trigger UMRA's threshold. Expand the definition of an unfunded mandate to include all open-ended entitlements, such as Medicaid, child support, and Title IV-E (foster care and adoption assistance), and proposals that would put a cap on or enforce a ceiling on the cost of federal participation in any entitlement or mandatory spending program. Expand the definition of mandates to include those that fail to exceed the statutory threshold only because they do not affect all states. Broaden the definitions in UMRA to apply to federal processes that do not result in published rules but have the effect of a mandate. A wider definition of UMRA's applicability is needed to address such processes. UMRA hasn't been as successful in dealing with previous mandates as in discouraging new mandates, but I am not sure how UMRA could be changed to address that. UMRA should authorize CBO to identify and estimate the costs of potential mandates in final agency rules. This would be a purely informational function.
UMRA should authorize CBO to identify and estimate the costs of potential mandates in U.S. Supreme Court rulings. The information provided by CBO analyses of judicial intergovernmental mandates would allow the Congress to provide compensatory funding to state and local governments and/or to amend statutes that produce unintended judicial mandates. Under title II, amend the limitation that UMRA does not apply to rules issued without a notice of proposed rulemaking. The Joint Committee on Taxation, responsible for performing cost estimates of tax legislation, should provide additional information on the costs of mandates outside of UMRA's strict definition, as CBO endeavors to do. Establish an institutional entity whose responsibilities include analysis of federal policies and actions that affect state and local governments and substantive reporting on legislative, government-sought judicial, and regulatory preemptions regardless of cost thresholds. Don't expand UMRA's coverage; keep it narrow. Retain the current rights exclusions. Add new exclusions. Drop or differentiate coverage of private sector mandates. Clarify definitions under UMRA and ensure consistency of implementation. Maintain the current point of order mechanism. Strengthen the point of order mechanism. Reconsider the usefulness of the point of order mechanism. Require roll call votes for legislation imposing an unfunded federal mandate. Put some backbone into the UMRA requirements that committees provide information, e.g., set up a hurdle for consideration of legislation if committees leave out required information. Open the CBO methodology for comment, perhaps through the Federal Register or by requiring an independent examination of the process used by CBO. There may be a need to "toughen up" UMRA. Making the "roar" of UMRA a little bigger might at least increase attention to these issues. However, it is not certain one could get Congress to pay more attention legislatively, nor can legislation keep Congress from imposing mandates. In short, it is not certain that there are any procedural fixes that could address the problem of unfunded mandates. It is not certain that fixing or simplifying UMRA's procedures would address the underlying purposes of the act. Generally strengthen enforcement of agency compliance with title II. Reassign oversight responsibilities for agencies' compliance with title II. Apply the Federal Data Quality Act to agencies' UMRA analyses. Create more accountable means of estimating mandate costs. Improve title II, including enhanced requirements for federal agencies to consult with state and local governments and the creation of an office within the Office of Management and Budget that is analogous to the State and Local Government Cost Estimates Unit at the Congressional Budget Office. Revisit the provisions of title II. The Office of Information and Regulatory Affairs should return a rule that is not in compliance with UMRA to the agency from which it came. If an agency is unsure whether a rule contains a significant mandate, it should err on the side of caution and prepare a mandates impact statement prior to issuing the regulation. Implement some form of third-party, independent review of the UMRA estimates, data, and processes. Revisit the exclusion of indirect costs from UMRA estimates. Expand the title II definition to include more than just expenditures for purposes of triggering the UMRA threshold. Consider new approaches to address uncertainties in the estimation of potential effects of mandates.
Analyze the benefits, as well as the costs, of federal mandates in UMRA estimates. Examine/monitor the implementation of the UMRA estimation process and mandate determinations by different agencies. Amend UMRA so that federal regulatory agencies would not be allowed to avoid congressional mandates by mischaracterizing the cost of a rulemaking. Congress should amend UMRA to lower the fiscal impact threshold for federal agency intergovernmental mandates from $100 million to $50 million. UMRA estimates should be done on a regional/local level basis also, not just at an aggregate national level. Federal agencies should look into the cost-benefit ratio of their mandates. Other agencies should consider emulating CBO's approach of more centralized reviews of statutes and direct contacts with state and local governments when preparing estimates. Enhance the work of CBO's State and Local Government Cost Estimates Unit by providing more timely access to bills and joint resolutions that may impose unfunded federal mandates. Require UMRA-like estimates when major changes in grant conditions and/or formulas occur. Clarify what constitutes a mandate and whether a bill's effect on the costs of existing mandates should be counted as a new mandate cost when the bill itself contains no new enforceable duty. Replicate on the regulatory side the approaches CBO uses for reviews of statutory mandates. Bring more uniformity and consistency to the consultation process. Do more to involve state and local governments early in the rulemaking process. Provide more training and education to agencies' regulatory staffs and their contractors who prepare many of the rulemaking studies and materials, such as regulatory impact analyses. State and local governmental authority to reject mandates or litigate based on noncompliance with clear statutory criteria would dramatically improve states' ability to ensure that federal agencies take seriously their duty to consult. More parties may need to be covered by the consultation provision (e.g., not just focused on state, local, and tribal governments). Intergovernmental communications should be documented and made part of the rulemaking proceeding while deliberation about the proposal is still going on. If not, the decision-making process is opaque. To avoid elevating the position of one particular voice in the debate, amend the consultation provisions of UMRA so the act does not require federal agencies to consult with state, local, and tribal governments before a regulation is proposed. Ensure sufficient federal funding for mandated services. Provide state and local governments waivers, offsets, etc. Compliance with federal mandates should not be made contingent on full federal funding. Cap the costs of mandates on state and local governments. Provide more flexibility in the design of mandate programs. Design federal mandate programs with sunset provisions. Restrict the preemption of state laws. Something bigger than just amending UMRA is needed to address this policy issue. Question whether an entitlement approach and model for federal funding (as with the Medicaid program) makes sense as public policy for providing federal assistance. An eligibility-based system becomes an entitlement program under which costs are hard to control. In contrast, a block grant model lets states experiment with flexible approaches and cap some costs. However, it is questionable whether there would ever be a way to modify the federal model for these programs so they weren't entitlements.
This dilemma can’t be solved by just another federal statute or amendment to UMRA. Discipline is the only real solution to curbing the practice of Congress adding, and often changing, lots of conditions that come with federal programs and funding. Most states have created a budget that is dependent on the federal funding, and measures need to be taken to wean the state system off the federal revenue. The federal government should consider using a “zero-based budgeting” approach to funding for federal mandates. Such an approach would flip the usual arrangement so that states would get no federal funds (e.g., federal highway funds) until they do what is required under federal statutes. There hasn’t been sufficient consideration of user fees. For example, if there is a permitting program that is delegated to the states, the applicants should bear the cost of the permitting process, not the states. Incongruous to require cost-benefit analysis for regulations but only require cost estimate for legislation. Address the incongruity of requiring cost-benefit analysis for regulations but only requiring cost estimates for legislation. Cost-effectiveness of UMRA has not been explored. Explore the cost- effectiveness of UMRA. Do retrospective analyses of the costs and/or effects of mandates. Do a study/provide data on the cumulative impact of federal mandates. Do studies/provide data on the local/regional impacts of mandates. Analyze benefits, as well as costs, of federal mandates. Federal agencies should look into the cost-benefit ratio of their mandates. It might help to provide more training and education to agencies’ regulatory staffs and their contractors who prepare many of the rulemaking studies and materials, such as regulatory impact analyses. A first step in getting states to do what laws mandate is simply to report, in a straightforward way, what states are or are not doing (e.g., have a “national scorecard” or central point of contact where one could go to get such information). GAO’s report on UMRA should try to bring a little more clarity to the mandates issue. It would be valuable to discuss conceptually what an unfunded mandate is and identify the associated federalism issues. Do research on whether the statute has changed agencies’ regulations. Help Congress and the general public to recognize that these numbers are soft. GAO conducted two information collection efforts to arrive at our findings regarding UMRA and federal mandates’ strengths, weaknesses and options. The first was an effort focusing on 52 organizations and individuals that are knowledgeable about UMRA and federal mandates. We solicited information from these parties regarding the strengths, weaknesses and options. On the basis of our analysis of the information provided by these parties, we identified seven major themes. The second information collection effort was a symposium held on January 26, 2005. All the parties we contacted during our initial data collection phase were invited to attend. In addition, we sent each of them a discussion draft presenting all of the issues (strengths and weaknesses) and options suggested to address those issues. The symposium was divided into four sessions with three of the four sessions focused on the themes most frequently cited. Sessions 1 and 2 focused on UMRA-specific themes (coverage and enforcement, respectively), Session 3 dealt with broader federal mandates issues (design and funding), and Session 4 was an open session for other issues that participants wanted to raise. 
Each session was opened with a brief overview provided by GAO and was followed by an open discussion among the participants. To obtain a general sense of which suggested options had the greatest or least amount of support among the symposium participants, we used a balloting process at the end of each session. We provided the participants a ballot that was to be completed at the end of each session. Each ballot listed the options suggested for that theme collected during our initial information collection effort. Participants were also asked to review the ballot and identify any additional options raised during the course of the discussion that they wanted added to the ballot and considered in the balloting process. At the conclusion of a session, we asked each participant to identify (a) the three options having their greatest support and (b) the three options they could not support. The results of that balloting for the symposium sessions are presented below. As mentioned previously, all the suggested options on the ballot were provided by the parties we contacted during the initial data collection phase or added by participants during the symposium. In accord with the voting instructions, we present for each session the top three options getting the most votes. These results reflect the views of symposium participants only and are provided to convey a general sense of their preferences. Due to variation in vote tallies for each of these options, these results should not be construed as showing options achieving a consensus among symposium participants.
Session 1 (UMRA coverage).
Options that participants indicated had their greatest support:
- Generally amend, modify, or revisit the definitions, exceptions, and exclusions under UMRA and "close loopholes."
- Amend UMRA to include federal preemptions.
- Move to a definition of whether it will cost state and local governments money to comply, so as to include federal tax changes that affect state revenue systems, requirements that are a condition of federal fiscal assistance, and similar issues.
Options that participants indicated they could not support:
- Don't expand UMRA's coverage; keep it narrow.
- Amend or eliminate the civil rights exclusions in UMRA.
- Add new exclusions for mandates regarding public health, safety, environmental protection, workers' rights, and disability.
Session 2 (UMRA enforcement).
Options that participants indicated had their greatest support:
- Create an office within OMB that is analogous to the State and Local Government Cost Estimates Unit at CBO.
- Require program legislation to contain mandate cost authorizations; provide that a mandate (including mandates pursuant to regulations) not funded at the authorized level for a fiscal year is held in abeyance unless the funding or obligations are altered to remove the inconsistency.
- Add processes for accounting for cumulative effects of regulatory activities in similar fields (e.g., environmental regulations), including a requirement to collect data on actual costs.
Options that participants indicated they could not support:
- Maintain the current point of order mechanism (i.e., keep the status quo).
- Empower the states to either reject mandates on their own authority or litigate congressional and/or agency noncompliance with clear statutory criteria.
- Cap the magnitude of actual state and local outlays at a level equal to the Congress's or an agency's prior estimate of those burdens to eliminate incentives to underestimate the impacts and provide a level of discipline to determinations of whether proposals contain significant unfunded mandates.
Session 3 (design and funding of federal mandates).
Options that participants indicated had their greatest support:
- Restrict the preemption of state laws.
- Consider the effects of the timing of federal actions and program changes on state governments.
- Recognize that states (and the populations served by federal-state programs) are very diverse.
- Create a mechanism, similar to section 610 of the Regulatory Flexibility Act, where agencies would evaluate the effectiveness of a mandate after a certain period of time (e.g., 5 or 10 years).
Options that participants indicated they could not support:
- As an option for addressing the funding of mandates, consider waivers or swaps.
- Amend UMRA so that, if a mandate is legislated, then state and local governments gain certain waiver rights or a regulatory "off ramp" when faced with costly mandates.
- Remind states that participation in some of the federal mandate programs is voluntary and, therefore, states can opt out of the programs if participation is considered too costly.
- The federal government should consider using a "zero-based budgeting" approach to funding for federal mandates. Such an approach would flip the usual arrangement so that states would get no federal funds (e.g., federal highway funds) until they do what is required under federal statutes.
The Unfunded Mandates Reform Act of 1995 (UMRA) was enacted to address concerns about federal statutes and regulations that require nonfederal parties to expend resources to achieve legislative goals without being provided federal funding to cover the costs. UMRA generates information about the nature and size of potential federal mandates on nonfederal entities to assist Congress and agency decision makers in their consideration of proposed legislation and regulations. However, it does not preclude the implementation of such mandates. At various times in its 10-year history, Congress has considered legislation to amend various aspects of the act to address ongoing questions about its effectiveness. Most recently, GAO was asked to consult with a diverse group of parties familiar with the act and to report their views on (1) the significant strengths and weaknesses of UMRA as the framework for addressing mandate issues and (2) potential options for reinforcing the strengths or addressing the weaknesses. To address these objectives, we obtained information from 52 organizations and individuals reflecting a diverse range of viewpoints. GAO analyzed the information acquired and organized it into broad themes for analytical and reporting purposes. The parties GAO contacted provided a significant number of comments about UMRA, specifically, and federal mandates, generally. Their views often varied across and within the five sectors we identified (academic/think tank, public interest advocacy, business, federal agencies, and state and local governments). Overall, the numerous strengths, weaknesses and options for improvement identified during the review fell into several broad themes, including UMRA-specific issues such as coverage and enforcement, among others, and more general issues about the design, funding, and evaluation of federal mandates. First, UMRA coverage was, by far, the most frequently cited issue by parties from the various sectors. Parties across most sectors that provided comments said UMRA's numerous definitions, exclusions, and exceptions leave out many federal actions that may significantly impact nonfederal entities and should be revisited. Among the most commonly suggested options were to expand UMRA's coverage to include a broader set of actions by limiting the various exclusions and exceptions and lowering the cost thresholds, which would make more federal actions mandates under UMRA. However, a few parties, primarily from the public interest advocacy sector, viewed UMRA's narrow coverage as a strength that should be maintained. Second, parties from various sectors also raised a number of issues about federal mandates in general. In particular, they had strong views about the need for better evaluation and research of federal mandates and more complete estimates of both the direct and indirect costs of mandates on nonfederal entities. The most frequently suggested option to address these issues was more post-implementation evaluation of existing mandates or "look backs." Such evaluations of the actual performance of mandates could enable policymakers to better understand mandates' benefits, impacts and costs among other issues. In turn, developing such evaluation information could lead to the adjustment of existing mandate programs in terms of design and/or funding, perhaps resulting in more effective or efficient programs. Going forward, the issue of unfunded mandates raises broader questions about assigning fiscal responsibilities within our federal system. 
Federal and state governments face serious fiscal challenges both in the short and longer term. As GAO reported in its February 2005 report entitled 21st Century Challenges: Reexamining the Base of the Federal Government (GAO-05-325SP), the long-term fiscal challenges facing the federal budget and numerous other geopolitical changes challenging the continued relevance of existing programs and priorities warrant a national debate to review what the government does, how it does business and how it finances its priorities. Such a reexamination includes considering how responsibilities for financing public services are allocated and shared across the many nonfederal entities in the U.S. system as well.
Passenger and freight rail services help move people and goods through the transportation system, which contributes to the economic well-being of the United States. Passenger rail services can take many forms. Some mass transit agencies, which can be public or private entities, provide rail services, such as commuter rail and heavy rail (e.g., subway), in cities across the United States. Through these rail services, mass transit agencies serve a large part of the commuting population. For example, in the third quarter of 2003, commuter rail systems provided an average of 1.2 million passenger trips each weekday. The National Railroad Passenger Corporation (Amtrak) provides intercity passenger rail services in the United States. Amtrak operates a 22,000-mile network, primarily over freight railroad tracks, providing service to 46 states and the District of Columbia. In fiscal year 2002, Amtrak served 23.4 million passengers, or about 64,000 passengers per day. In 2001, the nation's freight rail network carried 42 percent of domestic intercity freight (measured by ton miles)—everything from lumber to vegetables, coal to orange juice, grain to automobiles, and chemicals to scrap iron. Prior to September 11, 2001, the Department of Transportation (DOT)—namely, the Federal Railroad Administration (FRA), Federal Transit Administration (FTA), and Research and Special Programs Administration (RSPA)—was the primary federal entity involved in passenger and freight rail security matters. However, in response to the attacks on September 11, Congress passed the Aviation and Transportation Security Act (ATSA), which created the Transportation Security Administration (TSA) within DOT and defined its primary responsibility as ensuring security in all modes of transportation. The act also gives TSA regulatory authority over all transportation modes. With the passage of the Homeland Security Act, TSA, along with over 20 other agencies, was transferred to the new Department of Homeland Security (DHS). Throughout the world, rail systems have been the target of terrorist attacks. For example, the first large-scale terrorist use of a chemical weapon occurred in 1995 on the Tokyo subway system. In this attack, a terrorist group released sarin gas on a subway train, killing 11 people and injuring about 5,500. In addition, according to the Mineta Institute, surface transportation systems were the target of more than 195 terrorist attacks from 1997 through 2000. (See fig. 1.) Passenger and freight rail providers face significant challenges in improving security. Some security challenges are common to passenger and freight rail systems; others are unique to the type of rail system. Common challenges include the funding of security improvements, the interconnectivity of the rail system, and the number of stakeholders involved in rail security. The unique challenges include the openness of mass transit systems and the transport of hazardous materials by freight railroads. A challenge that is common to both passenger and freight rail systems is the funding of security enhancements. Although some security improvements are inexpensive, such as removing trash cans from subway platforms, most require substantial funding. For example, as we reported in December 2002, one transit agency estimated that an intrusion alarm and closed circuit television system for only one of its portals would cost approximately $250,000—an amount equal to at least a quarter of the capital budgets of a majority of the transit agencies we surveyed.
The current economic environment makes this a difficult time for private industry or state and local governments to make additional security investments. As we noted in June 2003, the sluggish economy has further weakened the transportation industry's financial condition by decreasing ridership and revenues. Given the tight budget environment, state and local governments and transportation operators, such as transit agencies, must make difficult trade-offs between security investments and other needs, such as service expansion and equipment upgrades. Further exacerbating the problem of funding security improvements are the additional costs that passenger and freight rail providers incur when the federal government elevates the national threat condition. For example, Amtrak estimates that it spends an additional $500,000 per month for police overtime when the national threat condition is increased. Another common challenge for both passenger and freight rail systems is the interconnectivity within the rail system and between the transportation sector and nearly every other sector of the economy. The passenger and freight rail systems are part of an intermodal transportation system—that is, passengers and freight can use multiple modes of transportation to reach a destination. For example, from its point of origin to its destination, a piece of freight, such as a shipping container, can move from ship to train to truck. The interconnected nature of the transportation system creates several security challenges. First, the effects of events directed at one mode of transportation can ripple throughout the entire system. For example, when the port workers in California, Oregon, and Washington went on strike in 2002, the railroads saw their intermodal traffic decline by almost 30 percent during the first week of the strike, compared with the year before. Second, the interconnecting modes can contaminate each other—that is, if a particular mode experiences a security breach, the breach could affect other modes. An example of this would be if a shipping container that held a weapon of mass destruction arrived at a U.S. port, where it was placed on a train. In this case, although the original security breach occurred in the port, the rail or trucking industry would be affected as well. Thus, even if operators within one mode established high levels of security, they could be affected by the security efforts, or lack thereof, in the other modes. Third, intermodal facilities where passenger and freight rail systems connect and interact with other transportation modes—such as ports—are potential targets for attack because of the presence of passengers, freight, employees, and equipment at these facilities. An additional common challenge for both passenger and freight rail systems is the number of stakeholders involved. Government agencies at the federal, state, and local levels and private companies share responsibility for rail security. For example, there were over 550 freight railroads operating in the United States in 2002. In addition, many passenger rail services, such as Amtrak and commuter rail, operate over tracks owned by freight railroads. For instance, over 95 percent of Amtrak's 22,000-mile network operates on freight railroad tracks. The number of stakeholders involved in transportation security can lead to communication challenges, duplication, and conflicting guidance.
As we have noted in past reports, coordination and consensus-building are critical to successful implementation of security efforts. Transportation stakeholders can have inconsistent goals or interests, which can make consensus-building challenging. For example, from a safety perspective, trains that carry hazardous materials should be required to have placards that identify the contents of a train so that emergency personnel know how best to respond to an incident. However, from a security perspective, identifying placards on vehicles that carry hazardous materials makes them a potential target for attack. In addition to the common security challenges that face both passenger and freight rail systems, there are some challenges that are unique to the type of rail system. In our past reports, we have discussed several of these unique challenges, including the openness of mass transit systems, the size of the freight rail network, and the diversity of freight hauled. According to mass transit officials and transit security experts, certain characteristics of mass transit systems make them inherently vulnerable to terrorist attacks and difficult to secure. By design, mass transit systems are open (i.e., have multiple access points and, in some cases, no barriers) so that they can move large numbers of people quickly. In contrast, the aviation system is housed in closed and controlled locations with few entry points. The openness of mass transit systems can leave them vulnerable because transit officials cannot monitor or control who enters or leaves the systems. In addition, other characteristics of some transit systems—high ridership, expensive infrastructure, economic importance, and location (e.g., large metropolitan areas or tourist destinations)—also make them attractive targets because of the potential for mass casualties and economic damage. Moreover, some of these same characteristics make mass transit systems difficult to secure. For example, the number of riders that pass through a mass transit system—especially during peak hours—makes some security measures, such as metal detectors, impractical. In addition, the multiple access points along extended routes make the costs of securing each location prohibitive. Further complicating transit security is the need for transit agencies to balance security concerns with accessibility, convenience, and affordability. Because transit riders often could choose another means of transportation, such as a personal automobile, transit agencies must compete for riders. To remain competitive, transit agencies must offer convenient, inexpensive, and quality service. Therefore, security measures that limit accessibility, cause delays, increase fares, or otherwise cause inconvenience could push people away from mass transit and back into their cars. The size and diversity of the freight rail system make it difficult to adequately secure. The freight rail system's extensive infrastructure crisscrosses the nation and extends beyond our borders to move millions of tons of freight each day (see fig. 2). There are over 100,000 miles of rail in the United States. The extensiveness of the infrastructure creates a virtually unlimited number of potential targets for terrorists. Protecting freight rail assets from attack is made more difficult because of the tremendous variety of freight hauled by railroads. For example, railroads carry freight as diverse as dry bulk (grain) and hazardous materials.
The transport of hazardous materials is of particular concern because serious incidents involving these materials have the potential to cause widespread disruption or injury. In 2001, over 83 million tons of hazardous materials were shipped by rail in the United States across the rail network, which extends through every major city as well as thousands of small communities. (Figure 3 is a photograph of a rail tanker car containing one of the many types of hazardous materials commonly transported by rail.) For our April 2003 report on rail security, we visited a number of local communities and interviewed federal and private sector hazardous materials transportation experts. A number of issues emerged from our work:
- the need for measures to better safeguard hazardous materials temporarily stored in rail cars while awaiting delivery to their ultimate destination—a practice commonly called "storage-in-transit";
- the advisability of requiring companies to notify local communities of the type and quantities of materials stored in transit; and
- the appropriate amount of information rail companies should be required to provide local officials regarding hazardous material shipments that pass through their communities.
We recommended in April 2003 that DOT and DHS develop a plan that specifically addresses the security of the nation's freight rail infrastructure. This plan should build upon the rail industries' experience with rail infrastructure and the transportation of hazardous materials and establish time frames for implementing specific security actions necessary to protect hazardous material rail shipments. DHS has informed us that this plan is in progress. Since September 11, passenger and freight rail providers have been working to strengthen security. Although security was a priority before September 11, the terrorist attacks elevated the importance and urgency of transportation security for passenger and freight rail providers. According to representatives from the Association of American Railroads, Amtrak, and transit agencies, passenger and freight rail providers have implemented new security measures or increased the frequency or intensity of existing activities, including the following:
- Conducted vulnerability or risk assessments: Many passenger and freight rail providers conducted assessments of their systems to identify potential vulnerabilities, critical infrastructure or assets, and corrective actions or needed security improvements. For example, the railroad industry conducted a risk assessment that identified over 1,300 critical assets and served as a foundation for the industry's security plan.
- Increased emergency drills: Many passenger rail providers have increased the frequency of emergency drills. For example, as of June 2003, Amtrak had conducted two full-scale emergency drills in New York City. The purpose of emergency drilling is to test emergency plans, identify problems, and develop corrective actions. Figure 4 is a photograph from an annual emergency drill conducted by the Washington Metropolitan Area Transit Authority.
- Developed or revised security plans: Passenger and freight rail providers developed security plans or reviewed existing plans to determine what changes, if any, needed to be made. For example, the Association of American Railroads worked jointly with several chemical industry associations and consultants from a security firm to develop the rail industry's security management plan. The plan establishes four alert levels and describes a graduated series of actions to prevent terrorist threats to railroad personnel and facilities that correspond to each alert level.
- Provided additional training: Many transit agencies have either participated in or conducted additional training on security or antiterrorism. For example, many transit agencies attended seminars conducted by FTA or by the American Public Transportation Association.
The federal government has also acted to enhance rail security. Prior to September 11, DOT modal administrations had primary responsibility for the security of the transportation system. In the wake of September 11, Congress created TSA and gave it responsibility for the security of all modes of transportation. In its first year of existence, TSA worked to establish its infrastructure and focused primarily on meeting the aviation security deadlines contained in ATSA. As TSA worked to establish itself and improve the security of the aviation system, DOT modal administrations, namely FRA, FTA, and RSPA, acted to enhance passenger and freight rail security (see table 1). For example, FTA launched a multipart initiative for mass transit agencies that provided grants for emergency drills, offered free security training, conducted security assessments at 36 transit agencies, provided technical assistance, and invested in research and development. With the immediate crisis of meeting many aviation security deadlines behind it, TSA has been able to focus more on the security of all modes of transportation, including rail security. We reported in June 2003 that TSA was moving forward with efforts to secure the entire transportation system, such as developing standardized criticality, threat, and vulnerability assessment tools and establishing security standards for all modes of transportation. Although steps have been taken to enhance passenger and freight rail security since September 11, the recent terrorist attack on a rail system in Spain naturally focuses our attention on what more could be done to secure the nation's rail systems. In our previous work on transportation security, we identified future actions that the federal government could take to enhance the security of individual transportation modes as well as the entire transportation system. For example, in our December 2002 report on mass transit security, we recommended that the Secretary of Transportation seek a legislative change to give mass transit agencies more flexibility in using federal funds for security-related operating expenses, among other things. Two recurring themes cut across our previous work in transportation security—the need for the federal government to utilize a risk management approach and the need for the federal government to improve coordination of security efforts. Using risk management principles to guide decision-making is a good strategy, given the difficult trade-offs the federal government will likely have to make as it moves forward with its transportation security efforts. We have advocated using a risk management approach to guide federal programs and responses to better prepare against terrorism and other threats and to better direct finite national resources to areas of highest priority. As figure 5 illustrates, the highest priorities emerge where threats, vulnerabilities, and criticality overlap.
For example, rail infrastructure that is determined to be a critical asset, vulnerable to attack, and a likely target would be at most risk and therefore would be a higher priority for funding compared with infrastructure that was only vulnerable to attack. The federal government is likely to be viewed as a source of funding for at least some rail security enhancements. These enhancements will join the growing list of security initiatives competing for federal assistance. A risk management approach can help inform funding decisions for security improvements within the rail system and across modes. A risk management approach entails a continuous process of managing, through a series of mitigating actions, the likelihood of an adverse event happening with a negative impact. Risk management encompasses “inherent” risk (i.e., risk that would exist absent any mitigating action), as well as “residual” risk (i.e., the risk that remains even after mitigating actions have been taken). Figure 6 depicts the risk management framework. Risk management principles acknowledge that while risk cannot be eliminated, enhancing protection from known or potential threats can help reduce it. (Appendix I provides a description of the key elements of the risk management approach.) We reported in June 2003 that TSA planned to adopt a risk management approach for its efforts to enhance the security of the nation’s transportation system. According to TSA officials, risk management principles will drive all decisions—from standard-setting, to funding priorities, to staffing. Coordination is also a key action in meeting transportation security challenges. As we have noted in previous reports, coordination among all levels of the government and the private industry is critical to the success of security efforts. The lack of coordination can lead to such problems as duplication and/or conflicting efforts, gaps in preparedness, and confusion. Moreover, the lack of coordination can strain intergovernmental relationships, drain resources, and raise the potential for problems in responding to terrorism. The administration’s National Strategy for Homeland Security and the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets also emphasize the importance of and need for coordination in security efforts. In particular, the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets notes that protecting critical infrastructure, such as the transportation system, “requires a unifying organization, a clear purpose, a common understanding of roles and responsibilities, accountability, and a set of well-understood coordinating processes.” We reported in June 2003 that the roles and responsibilities of TSA and DOT in transportation security, including rail security, have yet to be clearly delineated, which creates the potential for duplicating or conflicting efforts as both entities work to enhance security. Legislation has not defined TSA’s role and responsibilities in securing all modes of transportation. ATSA does not specify TSA’s role and responsibilities in securing the maritime and land transportation modes in detail as it does for aviation security. Instead, the act simply states that TSA is responsible for ensuring security in all modes of transportation. The act also did not eliminate DOT modal administrations’ existing statutory responsibilities for securing the different transportation modes. 
Moreover, recent legislation indicates that DOT still has security responsibilities. In particular, the Homeland Security Act of 2002 states that the Secretary of Transportation is responsible for the security as well as the safety of rail and the transport of hazardous materials by all modes. To clarify the roles and responsibilities of TSA and DOT in transportation security matters, we recommended that the Secretary of Transportation and the Secretary of Homeland Security use a mechanism, such as a memorandum of agreement, to clearly delineate their roles and responsibilities. DHS and DOT disagreed with our recommendation, noting that DHS had the lead for the Administration in transportation security matters and that DHS and DOT were committed to broad and routine consultations. We continue to believe our recommendation is valid. A mechanism, such as a memorandum of agreement, would serve to clarify, delineate, and document the roles and responsibilities of each entity. This is especially important considering that DOT's responsibilities for transportation safety overlap with DHS' role in securing the transportation system. Moreover, recent pieces of legislation give DOT transportation security responsibilities for some activities, including rail security. Consequently, the lack of clearly delineated roles and responsibilities could lead to duplication, confusion, and gaps in preparedness. A mechanism would also serve to hold each entity accountable for its transportation security responsibilities. Finally, it could serve as a vehicle to communicate the roles and responsibilities of each entity to transportation security stakeholders. Securing the nation's passenger and freight rail systems is a tremendous task. Many challenges must be overcome. Passenger and freight rail stakeholders have acted to enhance security, but more work is needed. As passenger and freight rail stakeholders, including the federal government, work to enhance security, it is important that efforts be coordinated. The lack of coordination could lead to duplication and confusion. More importantly, it could hamper the rail sector's ability to prepare for and respond to attacks. In addition, to ensure that finite resources are directed to the areas of highest priority, risk management principles should guide decision-making. Given budget pressures at all levels of government and the sluggish economy, difficult trade-offs will undoubtedly need to be made among competing claims for assistance. A risk management approach can help inform these difficult decisions. This concludes our prepared statement. We would be pleased to respond to any questions you or other Members of the Committee may have. For information about this testimony, please contact Peter Guerrero, Director, Physical Infrastructure Issues, at (202) 512-2834; or Norman Rabkin, Managing Director, Homeland Security and Justice Issues, at (202) 512-8777. Individuals making key contributions to this testimony included Nikki Clowers, Susan Fleming, Maria Santos, and Robert White. The key elements of the risk management approach are described below.
Threat assessment. Threat is defined as potential intent to cause harm or damage to an asset (e.g., natural environment, people, man-made infrastructures, and activities and operations). A threat assessment identifies adverse events that can affect an entity and may be present at the global, national, or local level.
Criticality assessment. Criticality is defined as an asset's relative worth.
A criticality assessment identifies and evaluates an entity's assets based on a variety of factors, including the importance of a function and the significance of a system in terms of national security, economic activity, or public safety. Criticality assessments help to provide a basis for prioritizing protection relative to limited resources.
Vulnerability assessment. Vulnerability is defined as the inherent state or condition of an asset that can be exploited to cause harm. A vulnerability assessment identifies the extent to which these inherent states may be exploited, relative to countermeasures that have been or could be deployed.
Risk assessment. Risk assessment is a qualitative and/or quantitative determination of the likelihood of an adverse event occurring and the severity, or impact, of its consequences. It may include scenarios under which two or more risks interact, creating greater or lesser impacts, as well as the ranking of risky events.
Risk characterization. Risk characterization involves designating risk on a categorical scale (e.g., low, medium, and high). Risk characterization provides input for deciding which areas are most suited to mitigate risk.
Mitigation evaluation. Mitigation evaluation is the identification of mitigation alternatives and the assessment of their effectiveness. The alternatives should be evaluated for their likely effect on risk and their cost.
Mitigation selection. Mitigation selection involves a management decision on which mitigation alternatives should be implemented, taking into account risk, costs, and the effectiveness of the alternatives. Selection among mitigation alternatives should be based upon pre-considered criteria. There are as yet no clearly preferred selection criteria, although potential factors might include risk reduction, net benefits, equality of treatment, or other stated values. Mitigation selection does not necessarily involve prioritizing all resources to the highest risk area but rather attempts to balance overall risk and available resources.
Risk mitigation. Risk mitigation is the implementation of mitigating actions, depending upon an organization's chosen action posture (i.e., the decision on what to do about overall risk). Specifically, risk mitigation may involve risk acceptance (taking no action), risk avoidance (taking actions to avoid activities that involve risk), risk reduction (taking actions to reduce the likelihood and/or impact of risk), and risk sharing (taking actions to reduce risk by sharing risk with other entities). As shown in figure 6, risk mitigation is best framed within an integrated systems approach that encompasses action in all organizational areas, including personnel, processes, technology, infrastructure, and governance. An integrated systems approach helps to ensure that taking action in one or more areas would not create unintended consequences in another area.
Monitoring and evaluation. Monitoring and evaluation is a continuous, repetitive assessment process to keep risk management current and relevant. It should involve reassessing risk characterizations after mitigating efforts have been implemented. It also includes peer review, testing, and validation.
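To make these elements concrete, the sketch below shows one simple, hypothetical way the assessment and mitigation steps could fit together: each asset is rated on threat, vulnerability, and criticality; a composite score is characterized as low, medium, or high; and candidate mitigations are compared by estimated risk reduction per dollar. The rating scales, asset names, scoring thresholds, and cost figures (apart from the $250,000 intrusion alarm and camera estimate cited earlier) are illustrative assumptions, not values drawn from GAO's work or from TSA's methodology.

    # Illustrative only: rates assets on threat, vulnerability, and criticality
    # (1-5 each), characterizes the composite risk score, and ranks hypothetical
    # mitigation alternatives by estimated risk reduction per dollar.

    def risk_score(threat, vulnerability, criticality):
        # Risk is highest where all three factors overlap (maximum 125).
        return threat * vulnerability * criticality

    def characterize(score):
        # Designate risk on a categorical scale (low, medium, high).
        if score >= 64:
            return "high"
        if score >= 27:
            return "medium"
        return "low"

    # Hypothetical assets with (threat, vulnerability, criticality) ratings.
    assets = {
        "bridge on a major freight corridor": (4, 4, 5),
        "suburban station parking lot": (2, 3, 1),
    }
    for name, ratings in assets.items():
        score = risk_score(*ratings)
        print(f"{name}: score {score}, {characterize(score)} risk")

    # Mitigation evaluation and selection: compare alternatives by estimated
    # reduction in risk score per dollar of cost (reduction figures are invented).
    mitigations = [
        ("intrusion alarm and closed circuit television", 30, 250_000),
        ("more frequent patrols", 12, 60_000),
    ]
    mitigations.sort(key=lambda m: m[1] / m[2], reverse=True)
    for action, reduction, cost in mitigations:
        print(f"{action}: {reduction / cost:.6f} risk points per dollar")

In this toy example the cheaper patrol option delivers more risk reduction per dollar, which is the kind of trade-off a risk management approach is meant to surface; an actual assessment would also weigh residual risk, net benefits, equality of treatment, and the other selection factors noted above.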
Passenger and freight rail services are important links in the nation's transportation system. Terrorist attacks on passenger and/or freight rail services have the potential to cause widespread injury, loss of life, and economic disruption. The recent terrorist attack in Spain illustrates that rail systems, like all modes of transportation, are targets for attacks. GAO was asked to summarize the results of its recent reports on transportation security that examined (1) challenges in securing passenger and freight rail systems, (2) actions rail stakeholders have taken to enhance the security of passenger and freight rail systems, and (3) future actions that could further enhance rail security. Securing the passenger and freight rail systems is fraught with challenges. Some of these challenges are common to passenger and freight rail systems, such as the funding of security improvements, the interconnectivity of the rail system, and the number of stakeholders involved in rail security. Other challenges are unique to the type of rail system. For example, the open access and high ridership of mass transit systems make them both vulnerable to attack and difficult to secure. Similarly, freight railroads transport millions of tons of hazardous materials each year across the United States, raising concerns about the vulnerability of these shipments to terrorist attack. Passenger and freight rail stakeholders have taken a number of steps to improve the security of the nation's rail system since September 11, 2001. Although security received attention before September 11, the terrorist attacks elevated the importance and urgency of transportation security for passenger and freight rail providers. Consequently, passenger and freight rail providers have implemented new security measures or increased the frequency or intensity of existing activities, including performing risk assessments, conducting emergency drills, and developing security plans. The federal government has also acted to enhance rail security. For example, the Federal Transit Administration has provided grants for emergency drills and conducted security assessments at the largest transit agencies, among other things. Implementation of risk management principles and improved coordination could help enhance rail security. Using risk management principles can help guide federal programs and responses to better prepare against terrorism and other threats and to better direct finite national resources to areas of highest priority. In addition, improved coordination among federal entities could help enhance security efforts across all modes, including passenger and freight rail systems. We reported in June 2003 that the roles and responsibilities of the Transportation Security Administration (TSA) and the Department of Transportation (DOT) in transportation security, including rail security, have yet to be clearly delineated, which creates the potential for duplicating or conflicting efforts as both entities work to enhance security.
Patents and other mechanisms that enable inventors to capture some of the benefits of their innovations have long been recognized in the United States as tools to encourage innovation, dating back to Article I of the U.S. Constitution and the 1790 patent law. Ensuring the protection of IP rights encourages the introduction of innovative products and creative works to the public. Protection is granted by guaranteeing proprietors limited exclusive rights to whatever economic reward the market may provide for their creations and products. Today, eight federal agencies and entities within them undertake the primary U.S. government activities in support of IP rights. These agencies and entities include Commerce, HHS, DHS, Justice, ITC, State, USTR, the Copyright Office, and entities such as Customs and Border Protection (CBP), the U.S. Patent and Trademark Office, and the Federal Bureau of Investigation (FBI). In addition to domestic efforts for protecting IP, the U.S. government participated actively in negotiating the World Trade Organization's (WTO) Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), which came into force in 1995 and broadly governs the multilateral protection of IP rights. Under TRIPS, all WTO member countries are obligated to establish laws and regulations that meet a minimum standard for protecting various areas of IP rights. It also provides for enforcement measures for members. One of USTR's priorities in recent years has been negotiating free trade agreements. Since 2000, USTR has completed negotiations for free trade agreements that have entered into force with Australia, Bahrain, Central America, Chile, Jordan, Morocco, Oman, Peru, and Singapore. According to officials at USTR, these agreements offer protection beyond that required in TRIPS. Intellectual property is an important component of the U.S. economy, and the United States is an acknowledged global leader in the creation of intellectual property. According to the USTR, "Americans are the world's leading innovators, and our ideas and intellectual property are a key ingredient to our competitiveness and prosperity." The United States has generally been very active in advocating strong IP protection and encouraging other nations to improve their IP systems, for two key reasons. First, the United States has been the source of a large share of technological improvements for many years and, therefore, stands to lose if the associated IP rights are not respected in other nations. Second, a prominent economist noted that IP protection appears to be one of the factors that has helped to generate the enormous growth in the world economy and in the standard of living that has occurred in the last 150 years. This economist pointed out that the last two centuries have created an unprecedented surge in growth compared to prior periods. Among the factors attributed to creating the conditions for this explosion in economic growth are the rule of law, including property rights and the enforceability of contracts. While these conditions are clearly important for generating economic growth, determining the contributions of innovation to economic growth at the level of the overall economy has been a challenging task.
Economists have used a variety of techniques to better understand the role of innovation in growth, and historical evidence shows that growth rates have periodically been driven upward by major technological improvements, beginning with the industrial revolution and the role of electricity, and continuing with the current revolution in information technology. Generally, individual countries grant and enforce IP rights. IP is any innovation, commercial or artistic, or any unique name, symbol, logo, or design used commercially. IP rights protect the economic interests of the creators of these works by giving them property rights over their creations.
Copyright. A set of exclusive rights subsisting in original works of authorship fixed in any tangible medium of expression now known or later developed, for a fixed period of time. For example, works may be literary, musical, or artistic.
Trademark. Any sign or any combination of signs capable of distinguishing the source of goods or services is capable of constituting a trademark. Such signs—in particular, words (including personal names), letters, numerals, figurative elements, and combinations of colors, as well as any combination of such signs—are eligible for registration as trademarks.
Patent. Exclusive rights granted to inventions for a fixed period of time, whether products or processes, in all fields of technology, provided they are new, not obvious (involve an inventive step), and have utility (are capable of industrial application).
"Pirated copyright goods" refers to any goods that are copies made without the consent of the right holder or person duly authorized by the right holder. "Counterfeit goods" refers to any goods, including packaging, bearing without authorization a trademark that is identical to a trademark validly registered for those goods, or that cannot be distinguished in its essential aspects from such a trademark, and that thereby infringes the rights of the owner of the trademark in question. According to the U.S. Food and Drug Administration (FDA), "counterfeit drugs" are defined under U.S. law as those sold under a product name without proper authorization, where the identity of the source drug is knowingly and intentionally mislabeled in a way that suggests that it is the authentic and approved product.
CBP data show that between fiscal years 2004 and 2009, the domestic value and number of U.S. seizures of counterfeit goods imported from other countries have fluctuated. These seizures have been concentrated among certain types of products. For example, seizures of footwear, wearing apparel, and handbags accounted for about 57 percent of the aggregate domestic value of goods seized in those 6 years. Table 1 shows the percentage of total domestic value for different types of commodities seized as well as the domestic value of all goods seized and the total number of seizures. The value of wearing apparel and cigarette seizures generally declined, while the value of pharmaceutical seizures generally increased. Several factors influence trends in seizure values. For example, values of seized goods can vary from year to year due to counterfeiters' responses to changes in marketplace demand or enforcement actions. For instance, in fiscal year 2006, a federal enforcement investigation resulted in the seizure of 77 cargo containers of counterfeit Nike Air Jordan shoes and one container of counterfeit Abercrombie & Fitch clothing.
The estimated domestic value of these goods was about $19 million, representing about 12 percent of the total domestic seizure value that year. In addition, the level of federal border enforcement effort varies across ports, resulting in different seizure rates, which is discussed in a later section of this report. According to CBP data, seized counterfeit goods are dominated by products from China. During fiscal years 2004 through 2009, China accounted for about 77 percent of the aggregate value of goods seized in the United States. Hong Kong, India, and Taiwan followed China, accounting for 7, 2, and 1 percent of the seized value, respectively. CBP data indicate certain concentrations of counterfeit production among these countries: in 2009, about 58 percent of the seized goods from China were footwear and handbags; 69 percent of the seized goods from Hong Kong were consumer electronics and watch parts; 91 percent of the seized goods from India were pharmaceuticals and perfume; and 85 percent of seized goods from Taiwan were computers and consumer electronics. CBP data show that goods were also seized frequently from Russia, Korea, Pakistan, Vietnam, and certain Southeast Asian countries. Unlike for imported counterfeits, there is little information on the extent and sources of domestically produced counterfeits. According to the Congressional Research Service, the United States is especially concerned with foreign counterfeits of U.S. intellectual property. Compared with production in foreign countries, counterfeit production in the United States is estimated to be relatively low. Another significant aspect of IP infringement is the piracy of digital copyrighted products, which is not captured by CBP seizure data. The development of technologies that enable the unauthorized distribution of copyrighted works is widely recognized as leading to an increase in piracy. The rapid growth of Internet use, in particular, has significantly contributed to the increase. Digital products are not physical or tangible, can be reproduced at very low cost, and have the potential for immediate delivery through the Internet across virtually unlimited geographic markets. Sectors facing threats from digital piracy include the music, motion picture, television, publishing, and software industries. Piracy of these products over the Internet can occur through methods including peer-to-peer networks, streaming sites, and one-click hosting services. There is no government agency that systematically collects or tracks data on the extent of digital copyright piracy. These technological developments, along with an increase in the sophistication of packaging for counterfeit goods, have changed the nature of counterfeiting and piracy substantially in recent years. Industry associations we met with commented that technological changes and increased sophistication among counterfeiters have affected their businesses significantly. According to experts we spoke with and literature we reviewed, counterfeiting and piracy have produced a wide range of effects on consumers, industry, government, and the economy as a whole, depending on the type of infringements involved and other factors. Most of the information and views we obtained from our interviews and literature review focused on the significant direct negative effects of counterfeiting and piracy on stakeholders, including health and safety risks, lost revenues, and increased costs of protecting and enforcing IP rights.
However, some experts and literature point out that certain stakeholders may experience some positive effects from counterfeits and piracy, though there is little information available on potential positive effects. Table 2 summarizes the positive and negative effects by stakeholder, based on our discussions with experts and literature we reviewed. A commonly cited concern about counterfeit trade is that certain types of counterfeit goods can have harmful effects on consumers' health and safety, causing serious illness or death. Experts we spoke with and literature we reviewed identified certain counterfeit products, such as pharmaceuticals, automotive parts, electrical components, toys, and household goods, as having potentially damaging health and safety effects. According to experts we spoke with, a key characteristic of these types of counterfeit goods, which distinguishes their effects from other types of counterfeiting or piracy, is that U.S. consumers are likely to have been deceived about the origin of the product. In addition, some studies and an expert reported that counterfeiters have increasingly diversified beyond their traditional products, such as luxury goods, to more functional products such as baby shampoo and household cleaners, and will continue to expand their product portfolios since the profit incentives are large. Examples of the types of counterfeit products that may have negative health and safety effects on consumers are presented below.
- Counterfeit pharmaceuticals may contain toxic or nonactive ingredients or correct ingredients in incorrect quantities, or they may be otherwise mislabeled. These products can be ineffective in treating ailments or may lead to adverse reactions, drug resistance, or even death. The FDA has specifically highlighted and issued warnings to U.S. consumers on the dangers of buying prescription drugs over the Internet.
- Counterfeit automotive products may be substandard. A representative of a U.S. automotive parts supplier stated that it tested a supply of counterfeit timing belts that did not meet industry safety standards and could potentially impair the safety of vehicles.
- Counterfeit or pirated software may threaten consumers' computer security. The illegitimate software, for example, may contain malicious programming code that could interfere with computers' operations or violate users' privacy.
Counterfeit or pirated products that act as substitutes for genuine goods can have a wide range of negative effects on industries, according to experts we spoke with and literature we reviewed. These sources further noted that the economic effects vary widely among industries and among companies within an industry. The most commonly cited effect was lost sales, which leads to decreased revenues and/or market share. Many industries lose sales because of consumers' purchases of counterfeit and pirated goods, particularly if the consumer purchased a counterfeit when intending to purchase a genuine product. In such cases, the industry may lose sales in direct proportion to the number of counterfeit products that the deceived consumers purchased. Industries in which consumers knowingly purchase counterfeits as a substitute for the genuine good may also experience lost sales. For example, recording companies have lost sales on a wide scale as a result of pirated music distributed over the Internet, and producers of high-end fashion goods have lost sales from purchases of counterfeit goods made to look similar to genuine products.
Lost revenues can also occur when lower-priced counterfeit and pirated goods pressure producers or IP owners to reduce prices of genuine goods. In some industries, such as the audiovisual sector, marketing strategies must be adjusted to minimize the impact of counterfeiting on revenues. Movie studios that use time-related marketing strategies—introducing different formats of a movie after certain periods of time—have reduced the time periods or "windows" for each format as a countermeasure, reducing the overall revenue acquired in each window. Experts stated that companies may also experience losses due to the dilution of brand value or damage to reputation and public image, as counterfeiting and piracy may reduce consumers' confidence in the brand's quality. Consumers who are unaware that a product is counterfeit may blame the manufacturer of the legitimate good for negative effects of the fake. Some manufacturers learn of the existence of counterfeit versions of their products from returns of inferior counterfeit goods. Companies are affected in additional ways. For example, to avoid lost sales and liability issues, companies may increase spending on IP protection efforts. In addition, experts we spoke with stated that companies could experience a decline in innovation and production of new goods if counterfeiting leads to reductions in corporate investments in research and development. Another variation in the nature of the effects of counterfeiting and piracy is that some effects are experienced immediately, while others are more long-term in nature, according to the Organisation for Economic Co-operation and Development (OECD). The OECD's 2008 report cited loss of sales volume and lower prices as short-term effects, while the medium- and long-term effects include loss of brand value and reputation, lost investment, increased costs of countermeasures, potentially reduced scope of operations, and reduced innovation. Finally, one expert emphasized to us that the loss of the IP rights is much more important than the loss of revenue. He stated that the danger for the United States is in the accelerated "learning effects"—companies learn how to produce these goods and will improve upon them. They will no longer need to illegally copy a given brand—they will be in the aftermarket. He suggested that companies should work to ensure their competitive advantage in the future by inhibiting undesired knowledge transfer. Many of the experts we interviewed identified lost tax revenue as an effect of counterfeiting and piracy on governments. IP owners or producers of legitimate goods who lose revenue because of competition from counterfeiters pay less in taxes. The U.S. government also incurs costs due to IP protection and enforcement efforts. Researchers have found anecdotal evidence that organized criminal and terrorist organizations are involved in counterfeiting and piracy. A 2009 RAND Corporation study, for example, presented case studies showing the involvement of organized crime or terrorist groups in film piracy to generate funding for their activities. Because criminal networks are involved, government law enforcement priorities may be affected since more resources are devoted to combating these networks. Researchers have identified economic incentives that have contributed to the increase in counterfeiting and piracy in recent years. Economic incentives include low barriers to entering the counterfeiting and piracy business, potentially high profits, and limited legal sanctions if caught.
The federal government also incurs costs to store and destroy counterfeit and pirated goods. Seized goods have to be secured, as they have potential value but cannot be allowed to enter U.S. commerce. Storage may be prolonged by law enforcement actions, but the goods are generally destroyed or otherwise disposed of when they are determined to be illegal and are no longer needed. According to CBP officials, as seizures have increased, the agency's storage and destruction costs have grown and become increasingly burdensome. CBP reported that it spent about $41.9 million to destroy seized property between fiscal years 2007 and 2009. Counterfeits also pose a threat to the reliability of supply chains that have national security or civilian safety significance. According to a recent Commerce report, counterfeit electronic parts have infiltrated U.S. defense and industrial supply chains, and almost 40 percent of companies and organizations—including the Department of Defense—surveyed for the report had encountered counterfeit electronics. Commerce reported that the infiltration of counterfeit parts into the supply chain was exacerbated by weaknesses in inventory management, procurement procedures, and inspection protocols, among other factors. The Federal Aviation Administration (FAA) tracks and posts notifications of incidents of counterfeit or improperly maintained parts entering airline industry supply chains through its Suspected Unapproved Parts Program in an effort to improve flight safety. The FAA program has identified instances of counterfeit aviation parts, as well as fake data plates and history cards to make old parts look new. FAA's program highlights the risks that counterfeit parts pose to the safety of commercial aircraft. The U.S. economy as a whole may grow at a slower pace than it otherwise would because of counterfeiting and piracy's effect on U.S. industries, government, and consumers. According to officials we interviewed and OECD's 2008 study, to the extent that companies experience a loss of revenues or incentives to invest in research and development for new products, slower economic growth could occur. IP-related industries play an important role in the growth of the U.S. economy and contribute a significant percentage to the U.S. gross domestic product. IP-related industries also pay significantly higher wages than other industries and contribute to a higher standard of living in the United States. To the extent that counterfeiting and piracy reduce investments in research and development, these companies may hire fewer workers and may contribute less to U.S. economic growth, overall. The U.S. economy may also experience slower growth due to a decline in trade with countries where widespread counterfeiting hinders the activities of U.S. companies operating overseas. In addition to the industry effects, the U.S. economy, as a whole, also may experience effects of losses by consumers and government. An economy's gross domestic product can be measured either as the total expenditures by households (consumers) or as the total wages paid by the private sector (industry). Hence, the effect of counterfeiting and piracy on industry would affect consumers by reducing their wages, which could reduce consumption of goods and services and the gross domestic product. Finally, the government is also affected by the reduction of economic activity, since fewer taxes are collected.
Some experts we interviewed and literature we reviewed identified potential positive economic effects of counterfeiting and piracy. Some consumers may knowingly purchase a counterfeit or pirated product because it is less expensive than the genuine good or because the genuine good is unavailable, and they may experience positive effects from such purchases. For example, consumers in the United States and other countries purchase counterfeit copies of high-priced luxury-branded fashion goods at low prices, although the products' packaging and sales venues make it apparent they are not genuine. Consumers may purchase movies that have yet to be released in theaters and are unavailable in legitimate form. Lower-priced counterfeit goods may exert competitive pressure to lower prices for legitimate goods, which may benefit consumers. However, according to the OECD, the longer-term impact for consumers of falling prices for legitimate goods is unclear, as these changes may affect the speed of innovation. There are also certain instances when IP rights holders in some industries might experience potentially positive effects from the knowing consumption of pirated or counterfeit goods. For example, consumers may use pirated goods to "sample" music, movies, software, or electronic games before purchasing legitimate copies, which may lead to increased sales of legitimate goods. In addition, industries with products that are characterized by large "switching costs" may also benefit from piracy due to lock-in effects. For example, some experts we spoke with and literature we reviewed discussed how consumers, after being introduced to a pirated version, might become locked into the legitimate software because of large switching costs, such as a steep learning curve, reluctance to switch to new products, and the search costs incurred to identify a new product to use. Some authors have argued that companies that experience revenue losses in one line of business—such as movies—may also increase revenues in related or complementary businesses due to increased brand awareness. For instance, companies may experience increased revenues due to the sales of merchandise based on movie characters whose popularity is enhanced by sales of pirated movies. One expert also observed that some industries may experience an increase in demand for their products because of piracy in other industries. This expert identified Internet infrastructure manufacturers (e.g., companies that make routers) as possible beneficiaries of digital piracy, because of the bandwidth demands related to the transfer of pirated digital content. While competitive pressure to keep one step ahead of counterfeiters may spur innovation in some cases, some of this innovation may be oriented toward anticounterfeiting and antipiracy efforts, rather than enhancing the product for consumers. According to experts we spoke with and literature we reviewed, estimating the economic impact of IP infringements is extremely difficult, and assumptions must be used due to the absence of data. Assumptions, such as the rate at which consumers would substitute counterfeit goods for legitimate products, can have enormous impacts on the resulting estimates and heighten the importance of transparency. Because of the significant differences in types of counterfeit and pirated goods and industries involved, no single method can be used to develop estimates, and each method has limitations.
Nonetheless, research in specific industries suggests that the problem is sizeable. Most experts we spoke with and the literature we reviewed observed that despite significant efforts, it is difficult, if not impossible, to quantify the net effect of counterfeiting and piracy on the economy as a whole. Quantifying the economic impact of counterfeit and pirated goods on the U.S. economy is challenging primarily because of the lack of available data on the extent and value of counterfeit trade. Counterfeiting and piracy are illicit activities, which makes data on them inherently difficult to obtain. In discussing their own effort to develop a global estimate on the scale of counterfeit trade, OECD officials told us that obtaining reliable data is the most important and difficult part of any attempt to quantify the economic impact of counterfeiting and piracy. OECD's 2008 report, The Economic Impact of Counterfeiting and Piracy, further states that available information on the scope and magnitude of counterfeiting and piracy provides only a crude indication of how widespread they may be, and that neither governments nor industry were able to provide solid assessments of their respective situations. The report stated that one of the key problems is that data have not been systematically collected or evaluated and, in many cases, assessments "rely excessively on fragmentary and anecdotal information; where data are lacking, unsubstantiated opinions are often treated as facts." In cases in which data on counterfeits are collected by federal agencies, such as CBP or FAA, it is difficult to know how complete the data are. For example, it is difficult to determine whether CBP's annual seizure data in table 1 reflect the extent and types of counterfeits entering the United States in any given year, the counterfeit products that were detected, or the level of federal border enforcement effort expended. FAA's notifications on counterfeit parts through its Suspected Unapproved Parts Program rely, in part, on reported incidents or complaints from members of the aviation community. Commerce and FBI officials told us they rely on industry statistics on counterfeit and pirated goods and do not conduct any original data gathering to assess the economic impact of counterfeit and pirated goods on the U.S. economy or domestic industries. However, according to experts and government officials, industry associations do not always disclose their proprietary data sources and methods, making it difficult to verify their estimates. Industries collect this information to address counterfeiting problems associated with their products and may be reluctant to discuss instances of counterfeiting because consumers might lose confidence. OECD officials, for example, told us that one reason some industry representatives were hesitant to participate in their study was that they did not want information to be widely released about the scale of the counterfeiting problem in their sectors. Because of the lack of data on illicit trade, methods for calculating estimates of economic losses must involve certain assumptions, and the resulting economic loss estimates are highly sensitive to the assumptions used. Two experts told us that the selection and weighting of these assumptions and variables are critical to the results of counterfeit estimates, and the assumptions should, therefore, be identified and evaluated. Transparency in how these estimates are developed is essential for assessing the usefulness of an estimate.
Two key assumptions that typically are required in calculating a loss estimate from counterfeit goods include the substitution rate used by consumers and the value of counterfeit goods. Substitution rate. The assumed rate at which a consumer is willing to switch from purchasing a fake good to the genuine product is a key assumption that can have a critical impact on the results of an economic loss estimate. For example, if a consumer pays the full retail price for a fake movie thinking that it is the genuine good, an assumption can be made that a legitimate copy would have been bought in the absence of the fake product, representing a one-to-one substitution rate. However, this one-to-one substitution rate requires three important conditions: (1) the fake good is almost identical in quality to the genuine one; (2) the consumer is paying full retail price for the fake product; and (3) the consumer is not aware he is purchasing a counterfeit product. When some of these conditions are not met (e.g., the consumer paid a significantly lower price for the counterfeit), the likelihood that the consumer would have purchased the genuine product at full price is not clear. Substitution rates also vary by industry, since factors such as product quality, distribution channels, and information available about the product can differ significantly. Value of fake goods. Valuation of the fake goods constitutes another set of assumptions that has a significant impact. There are several measures of value that can be used, such as the production cost, the domestic value, or the manufacturer’s suggested retail price. For example, CBP announced in a January 2010 press release that it had seized 252,968 DVDs with counterfeit trademarks. The agency reported that the manufacturer’s suggested retail price of the shipment was estimated to be more than $7.1 million and the domestic value was estimated at $204,904. Officials from the International Trade Commission stated that counterfeits are very difficult to price and estimates of economic impact would benefit from including a range of prices, from the spot price of the fake on the street corner at the bottom to the manufacturer’s suggested retail price at the top. The level or extent of deception that consumers face is also an important factor to consider when developing assumptions for the substitution rate and value of the fake goods. If a consumer is completely deceived, it could be reasonable to assume a one-to-one substitution rate (i.e., the purchase of a legitimate good in lieu of the counterfeit one) and a full retail price (i.e., the manufacturer’s suggested retail sales price). Price, packaging, and location of the transaction are the most important signs to the consumer indicating the legitimacy of a good. Many of the experts we interviewed said that a one-to-one substitution rate is not likely to exist in most circumstances where counterfeit goods are significantly cheaper than the legitimate goods. Some experts also noted that the level of consumer deception varies across industries. For example, consumers who purchase counterfeit pharmaceuticals are more likely to be deceived, particularly when the counterfeit good is sold through the same distribution channel as the genuine product. Some experts observed that few, if any, consumers would willingly purchase a pharmaceutical product they knew might be counterfeit. 
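To illustrate how much these two assumptions drive the results, the following sketch computes a loss estimate for the seized DVD shipment described above under different combinations of substitution rate and price basis. The per-unit values are backed out from the shipment figures cited in the text; the substitution rates are purely hypothetical.

```python
# Minimal sketch of how substitution-rate and valuation assumptions interact.
# Unit prices are derived from the CBP DVD shipment cited above; the
# substitution rates are hypothetical and chosen only for illustration.

def loss_estimate(units, substitution_rate, unit_value):
    """Lost legitimate sales = seized units assumed to displace a genuine
    purchase, valued at the chosen price basis."""
    return units * substitution_rate * unit_value

UNITS = 252_968                             # DVDs in the seized shipment
MSRP_PER_UNIT = 7.1e6 / UNITS               # about $28, from the $7.1 million MSRP
DOMESTIC_VALUE_PER_UNIT = 204_904 / UNITS   # about $0.81 per unit

scenarios = [
    ("fully deceived consumers, MSRP basis",    1.0, MSRP_PER_UNIT),
    ("knowing consumers, MSRP basis",           0.1, MSRP_PER_UNIT),
    ("knowing consumers, domestic value basis", 0.1, DOMESTIC_VALUE_PER_UNIT),
]

for label, rate, value in scenarios:
    print(f"{label:42s} ${loss_estimate(UNITS, rate, value):>12,.0f}")
```

Under these assumptions the same seizure yields loss figures ranging from roughly $7 million to about $20,000, which is why experts stress that the substitution rate, the price basis, and the assumed level of consumer deception must be stated explicitly.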
However, the extent of deception among consumers of audiovisual products is likely lower because sales venues for counterfeit audiovisual goods tend to be separate from the legitimate ones. Unless the assumptions about substitution rates and valuations of counterfeit goods are transparently explained, experts observed that it is difficult, if not impossible, to assess the reasonableness of the resulting estimate. Three commonly cited estimates of U.S. industry losses due to counterfeiting have been sourced to U.S. agencies, but cannot be substantiated or traced back to an underlying data source or methodology. First, a number of industry, media, and government publications have cited an FBI estimate that U.S. businesses lose $200-$250 billion to counterfeiting on an annual basis. This estimate was contained in a 2002 FBI press release, but FBI officials told us that the agency has no record of source data or methodology for generating the estimate and that it cannot be corroborated. Second, a 2002 CBP press release contained an estimate that U.S. businesses and industries lose $200 billion a year in revenue and 750,000 jobs due to counterfeit merchandise. However, a CBP official stated that these figures are of uncertain origin, have been discredited, and are no longer used by CBP. A March 2009 CBP internal memo was circulated to inform staff not to use the figures. However, another entity within DHS continues to use them. Third, the Motor and Equipment Manufacturers Association reported an estimate that the U.S. automotive parts industry has lost $3 billion in sales due to counterfeit goods and attributed the figure to the Federal Trade Commission (FTC). The OECD has also referenced this estimate in its report on counterfeiting and piracy, citing the association report that is sourced to the FTC. However, when we contacted FTC officials to substantiate the estimate, they were unable to locate any record or source of this estimate within the agency's reports or archives, and officials could not recall the agency ever developing or using this estimate. These estimates attributed to FBI, CBP, and FTC continue to be referenced by various industry and government sources as evidence of the significance of the counterfeiting and piracy problem to the U.S. economy. There is no single methodology to collect and analyze data that can be applied across industries to estimate the effects of counterfeiting and piracy on the U.S. economy or industry sectors. The nature of data collection, the substitution rate, the value of goods, and the level of deception are not the same across industries. Due to these challenges and the lack of data, researchers have developed different methodologies. In addition, some experts we interviewed noted the methodological and data challenges they face because the nature of the problem has changed substantially over time. Some commented that they have not updated earlier estimates or were required to change methodologies for these reasons. Nonetheless, the studies and experts we spoke with suggested that counterfeiting and piracy are a sizeable problem that affects consumer behavior and firms' incentives to innovate. The most commonly used methods to collect and analyze data, based on our literature review and interviews with experts, are presented below. Seizure data from CBP are one of the few types of hard data available and are often used to extrapolate the level of counterfeit and pirated trade.
This approach provides hard evidence of the minimum quantity of counterfeit goods, but a major limitation is that levels of border enforcement efforts can vary. For example, in our study of seizures made by the CBP field offices, we calculated "seizure rates" for the top 25 U.S. ports, based on the dollar value of IP seizures at each port compared to the dollar value of IP-related imports there. These ports accounted for over 75 percent of the value of all IP-related imports into the United States in fiscal year 2005. We found that the top 3 ports seized over 100 times more IP counterfeits than the lowest 5 of these ports per dollar of IP-related imports. As a result, it appears that the importance placed on IP enforcement and the skill of the personnel at the ports have a significant impact on the level of seizures. This suggests that seizure data might be useful as a floor, but are not indicative of the actual level of U.S. imports of counterfeit goods. A study conducted by the Los Angeles County Economic Development Corporation, A False Bargain: The Los Angeles County Economic Consequences of Counterfeit Products, used extrapolation of seizure data as one of its three approaches to estimate the economic impact of counterfeits. The authors noted that the key variable in extrapolating seizure data from CBP was to determine CBP's success rate in interdicting illegal goods, which they acknowledged was "unknowable." One of the study's estimates that used CBP seizures to extrapolate the value of counterfeit and pirated goods in Los Angeles County calculated a range between $1 billion and $4.6 billion in 2005. This range was based on different assumptions used for seizure rates and other variables. Another challenge when extrapolating seizure data is determining the dollar value to assign to the seized good, which can have a significant impact on the magnitude of the estimates. For example, in 2009, CBP seized a shipment of counterfeit sunglasses from China and reported an estimated total domestic value at $12,146 and a manufacturer's suggested retail price at $7.9 million. Researchers have conducted surveys to gather data on the consumption or sales patterns of counterfeit or pirated goods. The main advantage of this method is that it can also show consumers' behavior in terms of their preferences. For example, a survey could collect information on the consumer's willingness to pay for a counterfeit good; the number of counterfeit units purchased in a given period of time; the minimum expected quality; the necessary price reduction of the legitimate good to avoid the consumer's purchase of the counterfeit good; the knowledge of sanctions if caught purchasing the counterfeit good; and the knowledge of potential "side effects" due to the purchase of fake goods. However, a survey can be a labor-intensive project and can cost in the millions of dollars. Moreover, one expert stated that the bias in surveys is hard to identify. For example, he commented that students, who are often the subjects in surveys of illegal file sharing, may either not admit that they are engaging in illegal activity, or may admit to such behavior because it may be popular for this demographic. The Business Software Alliance publishes piracy estimates based on a set of annual surveys it conducts in different countries. Based on its survey results, the industry association estimated the U.S. piracy rate at 20 percent for business software, with an associated loss of $9 billion in 2008.
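Two of the approaches just described, extrapolating from seizure data and deriving losses from survey-based piracy rates, can be sketched as follows. All inputs in the sketch are hypothetical assumptions rather than figures from the studies cited; the point is how sensitive each estimate is to them.

```python
# Sketch of two estimation approaches discussed above. All inputs are
# hypothetical assumptions chosen for illustration, not figures from the
# studies cited in this report.

def extrapolate_from_seizures(seized_value, assumed_interdiction_rate):
    """Seizure extrapolation: if customs is assumed to catch a given share of
    counterfeit imports, total counterfeit value = seizures / that share."""
    return seized_value / assumed_interdiction_rate

def survey_based_loss(installed_units, legitimate_units_sold, unit_price,
                      substitution_rate):
    """Survey-style calculation: the piracy rate is the share of installed
    units not accounted for by legitimate sales; losses value the pirated
    units at the legitimate price, scaled by an assumed substitution rate."""
    pirated_units = installed_units - legitimate_units_sold
    piracy_rate = pirated_units / installed_units
    loss = pirated_units * unit_price * substitution_rate
    return piracy_rate, loss

# Seizure extrapolation: a $100 million seizure total implies very different
# market sizes depending on the (unknowable) interdiction rate.
for rate in (0.10, 0.02):
    total = extrapolate_from_seizures(100e6, rate)
    print(f"assumed interdiction rate {rate:.0%}: implied market ${total/1e9:.1f} billion")

# Survey-based: 100 million installed copies, 80 million sold legitimately,
# $150 per copy, under substitution rates of 100 percent and 20 percent.
for sub in (1.0, 0.2):
    piracy_rate, loss = survey_based_loss(100e6, 80e6, 150.0, sub)
    print(f"piracy rate {piracy_rate:.0%}, substitution {sub:.0%}: loss ${loss/1e9:.1f} billion")
```

Because the seizure-based figure divides by the assumed interdiction rate and the survey-based figure multiplies by the assumed substitution rate, modest changes to either assumption move the results by billions of dollars in this sketch, which is why experts emphasize disclosing these inputs.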
The BSA study defined piracy as the difference between total installed software and legitimate software sold, and its scope involved only packaged physical software. While the study has an enviable data set on industries and consumers located around the world from its country surveys, it uses assumptions that have raised concerns among experts we interviewed, including the assumption of a one-to-one rate of substitution and questions about how the results from the surveyed countries are extrapolated to nonsurveyed countries. Another example of the use of surveys is the study by the Motion Picture Association, which relied on a consumer survey conducted in several countries. This study found that U.S. motion picture studios lost $6.1 billion to piracy in 2005. It is difficult, based on the information provided in the study, to determine how the authors handled key assumptions such as substitution rates and extrapolation from the survey sample to the broader population. In a smaller-scale example of a survey method, Rob and Waldfogel surveyed students in American universities during parts of 2003 and 2004, asking not only about the number of music albums they purchased and illegally downloaded, but also the titles and their valuation of the albums they purchased and illegally downloaded. Their main findings are: (1) downloading reduces legitimate purchases by individuals by 20 percent in the sample, that is, every five music downloads substitute for one legitimate purchase; (2) on average, respondents downloaded music that they valued one-third to one-half less than their legitimately purchased music, suggesting that some of the music that was downloaded would never have been purchased as an album; and (3) while downloading reduces per capita expenditures by $25, it raises per capita consumers' surplus by $70. The study indicated that downloading illegal music can have a positive effect on total consumer welfare. However, as explained by the authors, this experiment cannot be generalized; the data consist of a snapshot of undergraduate students' responses, which is not representative of the general population. As previously discussed, Commerce recently conducted a survey of 387 companies and organizations participating in U.S. defense and industrial supply chains and reported that almost 40 percent of them encountered counterfeit products between 2005 and 2008. The report focused on basic electronic parts and components, including microcircuits and circuit boards, throughout the entire electronics industrial base in the United States. The report noted that these parts are key elements of electronic systems that support national security missions and control essential commercial and industrial operations. Information provided by these companies and organizations also demonstrated an increase in the number of reported counterfeit incidents from 3,868 in 2005 to 9,356 in 2008. Some of these counterfeit incidents could include DOD-qualified parts and components. Economic multipliers show how changes in economic activity in one industry affect output and employment in associated industries. Commerce's Bureau of Economic Analysis makes regional multipliers available through its Regional Input-Output Modeling System (RIMS II). These multipliers estimate the extent to which a one-time or sustained change in economic activity will be attributed to specific industries in a region. Multipliers can provide an illustration of the possible "induced" effects from a one-time change in final demand.
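A minimal sketch of how such multipliers are applied in this setting is shown below; the multiplier values and the direct loss figure are hypothetical placeholders, not actual RIMS II coefficients or industry estimates.

```python
# Sketch of the multiplier approach described above: a direct loss (treated as
# a change in final demand) in one industry is scaled by output and employment
# multipliers to approximate economy-wide "induced" effects. The multiplier
# values and the loss figure are hypothetical, not actual RIMS II numbers.

def induced_effects(direct_loss, output_multiplier, jobs_per_million_output):
    total_output_loss = direct_loss * output_multiplier
    jobs_lost = (total_output_loss / 1e6) * jobs_per_million_output
    return total_output_loss, jobs_lost

DIRECT_LOSS = 6.1e9        # an industry-reported loss figure, used only as an input here
OUTPUT_MULTIPLIER = 2.0    # hypothetical: each $1 of direct loss removes $2 of output
JOBS_PER_MILLION = 8.0     # hypothetical: jobs supported per $1 million of output

output_loss, jobs = induced_effects(DIRECT_LOSS, OUTPUT_MULTIPLIER, JOBS_PER_MILLION)
print(f"direct loss ${DIRECT_LOSS/1e9:.1f}B -> total output loss ${output_loss/1e9:.1f}B, "
      f"about {jobs:,.0f} jobs")

# Caveat: this arithmetic treats the forgone spending as vanishing from the
# economy; if consumers spend the money saved on counterfeits elsewhere, the
# net effect is smaller than the multiplied figure suggests.
```

The multiplication itself is mechanical; the contentious part is the direct loss estimate fed into it, which must come from one of the other methods described in this section.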
For example, if a new facility is to be created with a determined investment amount, one can estimate how many new jobs can be created, as well as the benefit to the region in terms of output (e.g., extra construction, manufacturing, supplies, and other products needed). It must be noted that RIMS II multipliers assume no job immigration or substitution effect. That is, if new jobs are created as a result of investing more capital, those jobs would not be filled by the labor force from another industry. In the case of estimating the effect of counterfeiting and piracy, RIMS II economic multipliers are applied to U.S. industry loss figures, which have been derived from other studies, and used to calculate the harm to employment and output due to reduced investments. Using the RIMS II multipliers in this setting does not take into account the two-fold effect: (1) in the case that the counterfeit good has similar quality to the original, consumers have extra disposable income from purchasing a less expensive good, and (2) the extra disposable income goes back to the U.S. economy, as consumers can spend it on other goods and services. Most of the experts we interviewed were reluctant to use economic multipliers to calculate losses from counterfeiting because this methodology was developed to look at a one-time change in output and employment. Nonetheless, the use of this methodology corroborates that the effect of counterfeiting and piracy goes beyond the infringed industry. For example, when pirated movies are sold, it damages not only the motion picture industry, but all other industries linked to those sales. The Institute for Policy Innovation has commissioned three studies in the audiovisual industries using economic multipliers; the most expansive of the studies covers motion pictures, sound recordings, business and entertainment software, and video games for the year 2005. This study found that losses in the U.S. economy due to piracy accounted for $58 billion in output, over 370,000 jobs, and $2.6 billion in tax revenue. These figures were calculated by taking industry estimates of lost revenue and applying the RIMS II multipliers to them. Several additional studies that we reviewed provided alternative data collection and modeling techniques to quantify the effect of counterfeiting on a specific industry or, in the case of the OECD, on world trade. The OECD, for example, adopted an approach of combining different methodologies to develop a single estimate. The OECD triangulated a combination of data sets: extrapolating seizure data provided by national customs authorities, comparing the seizure data to international trade data, and using these data in an econometric model. The seizure data were used to develop a model that would measure the magnitude of global counterfeit trade. The OECD estimated that the magnitude of counterfeit and pirated goods in international trade could have accounted for up to $200 billion in 2005, and later updated this estimate to $250 billion based on 2005-2007 world trade data. As noted by the OECD, most of the international trade data were supplied by national governments and relevant industries, and the OECD did not independently assess the reliability of the figures. Its methodology is based on matching, to the best of its knowledge, the industry data with customs seizure data from the OECD members, acknowledging the limitations of working with customs seizure data.
OECD heavily qualified this estimate, however, reporting that "the overall degree to which products are being counterfeited and pirated is unknown and there do not appear to be any methodologies that could be employed to develop an acceptable overall estimate." A second phase of the OECD project covered digital piracy, but did not attempt to quantify the effects. The OECD estimate was limited to internationally traded hard goods and did not include digital piracy or counterfeit goods produced and consumed within the same country. In a more narrowly focused study on downloads of music, Oberholzer-Gee and Strumpf used modeling to determine that illegal downloads have no effect on record sales. They concluded that, in contrast with industry estimates, declining sales over the period of 2000-2002 were not primarily caused by illegal downloads. The results were found after compiling a data set of illegal downloads from a prominent server and testing the variation between illegal downloads and legal sales in the United States of specific albums on a weekly basis for 17 weeks in the second half of 2002. This was done by modeling album sales as a function of the quantity of album downloads and other album-specific characteristics. While this is an enviable data set of actual illegal downloads, the study has two main limitations: first, the study uses a static model, which does not reflect the effect of downloads apart from the week the download occurred. Second, the study only observed the supply side of music. Thus, it is not clear if consumers who are illegally downloading music would have purchased the genuine albums. Hui and Png's study provided another example that used modeling. This study estimated that piracy in the music industry caused revenue losses of 6.6 percent in 1998. The authors stated that their estimate is significantly less than the industry loss estimate. In particular, for the year 1998 in the United States, legitimate sales of CDs were 3.73 CDs per capita, and the average loss in sales per capita due to piracy was 0.044 CDs. The data set included CD prices, music CD demand, piracy levels, and country-specific characteristics for 28 countries, mostly provided by the International Federation of the Phonographic Industry. The main limitation of this study was that it only covered physical piracy. While digital piracy was not a major concern during the time period sampled, it has become so for at least the last decade due to the Internet. Another limitation is that the study used piracy rates that assumed a one-to-one substitution rate, including those used by the Business Software Alliance. Many experts we interviewed also agreed that general or partial equilibrium models would offer useful insights if the input data existed. These involve modeling the supply and demand of a good and simulating how counterfeiting affects the market for that good (in the case of a partial model) and the economy as a whole (for a general equilibrium model). The approach allows a systematic analysis of the problem, but depends on the quality of the data used to develop the models. The benefit of an equilibrium model is that assumptions can be tested based on the results obtained and modified if the results fall outside of established parameters.
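As a rough illustration of what a partial equilibrium approach involves, the sketch below uses linear demand and supply for a genuine good and treats counterfeiting as shrinking demand for the legitimate product by an assumed share; all parameters are hypothetical, and a real application would require estimated elasticities and substitution behavior.

```python
# Rough partial-equilibrium sketch: linear demand and supply for the genuine
# good, with counterfeiting modeled crudely as scaling down the demand
# intercept. All parameters are hypothetical placeholders.

def equilibrium(a, b, c, d):
    """Solve demand Q = a - b*P against supply Q = c + d*P."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

a, b = 1000.0, 10.0     # demand intercept and slope (hypothetical)
c, d = 100.0, 20.0      # supply intercept and slope (hypothetical)
diversion = 0.15        # assumed share of demand diverted to counterfeits

p0, q0 = equilibrium(a, b, c, d)                       # no counterfeiting
p1, q1 = equilibrium(a * (1 - diversion), b, c, d)     # demand intercept reduced by 15%

print(f"without counterfeits: price {p0:.1f}, quantity {q0:.1f}, revenue {p0*q0:,.0f}")
print(f"with counterfeits:    price {p1:.1f}, quantity {q1:.1f}, revenue {p1*q1:,.0f}")
print(f"estimated producer revenue loss: {p0*q0 - p1*q1:,.0f}")
```

The attraction of this framework is that the substitution behavior becomes an explicit, adjustable parameter rather than an implicit assumption, but the parameters themselves still have to be estimated from data.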
Experts agreed on the potential benefits of this approach, but recognized that data limitations make it currently close to impossible to implement. Officials from the International Trade Commission and other industry experts said that this would be their preferred approach for thinking about the problem in question, but they also acknowledged that data reliability is a major concern, as with the other methodologies. According to experts we interviewed and the literature we reviewed, there is no evidence to support a "rule of thumb" that measures counterfeit trade as a proportion of world trade to estimate the amount of counterfeit trade that occurs in a local economy. The appeal of a so-called "rule of thumb" for counterfeit trade is that it can be applied generally, without taking into consideration the different rates of counterfeiting and piracy across industry sectors. However, as noted earlier, piracy rates differ enormously across industries, so it is not possible to generalize findings. Moreover, not all goods in world trade can be counterfeited or pirated. The most commonly cited "rule of thumb" is that counterfeit trade accounts for 5 to 7 percent of world trade, which has been attributed to the International Chamber of Commerce. The Office of the Comptroller of the City of New York used this rule of thumb in its 2004 study to estimate the total dollar exchange of counterfeit goods in the United States and in New York State. This study first applied a 6 percent rule (the average of the 5 to 7 percent "rule of thumb") to the total value of world trade in 2003 ($7.6 trillion) to calculate the value of world trade that is made up of counterfeit goods, arriving at $456 billion. This rule of thumb was widely disseminated by a 1998 OECD report, although OECD and experts cautioned that this estimate was not verifiable and the source data were not independently calculated. In its 2008 report, The Economic Impact of Counterfeiting and Piracy, the OECD commented that the "metrics underlying the International Chamber of Commerce's estimates are not clear," nor is it clear what types of IP infringements are included in the estimate. In a 2009 update to the report, the OECD estimated the share of counterfeit and pirated goods in world trade as 1.95 percent in 2007, increasing from 1.85 percent in 2000. Many of the experts we interviewed also expressed skepticism over the estimate that counterfeit trade represents 5 to 7 percent of world trade. While experts and literature we reviewed provided different examples of effects on the U.S. economy, most observed that despite significant efforts, it is difficult, if not impossible, to quantify the net effect of counterfeiting and piracy on the economy as a whole. For example, as previously discussed, OECD attempted to develop an estimate of the economic impact of counterfeiting and concluded that an acceptable overall estimate of counterfeit goods could not be developed.
OECD further stated that information that can be obtained, such as data on enforcement and information developed through surveys, "has significant limitations, however, and falls far short of what is needed to develop a robust overall estimate." One expert characterized the attempt to quantify the overall economic impact of counterfeiting as "fruitless," while another stated that any estimate is highly suspect since this is covert trade and the numbers are all "guesstimates." To determine the net effect, any positive effects of counterfeiting and piracy on the economy should be considered, as well as the negative effects. Experts held different views on the nature of potentially offsetting effects. While one expert we interviewed stated that he did not believe there were any positive effects on the economy due to counterfeiting and piracy, other experts stated that there were positive effects and they should be assessed as well. Few studies have been conducted on positive effects, and little is known about their impact on the economy. Although some literature and experts suggest that negative effects may be overstated, in general, literature and experts indicate the negative effects of counterfeiting and piracy on the U.S. economy outweigh the positive effects. Since there is an absence of data concerning these potential effects, the net effect cannot be determined with any certainty. The experts we interviewed also differed regarding the extent to which net effects of counterfeiting and piracy could be measured in certain parts of the economy. For example, one expert we spoke with has conducted research that found that jobs may be lost to the U.S. economy when copyright industries lose business due to piracy. Other experts we interviewed stated that, in their view, employment effects are unclear, because employment may decline in certain industries or rise in other industries as workers are hired to produce counterfeits. Another expert told us that effects of piracy within the United States are mainly redistributions within the economy for other purposes and that they should not be considered as a loss to the overall economy. He stated that "the money does not just vanish; it is used for other purposes." Other experts we spoke with focused more on the difficulties of aggregating the wide variety of effects on industries into a single assessment. We are sending copies of this report to interested congressional committees; the Secretaries of Commerce, Health and Human Services, and Homeland Security; the Attorney General; the Chairman of the International Trade Commission; the U.S. Trade Representative; and the Intellectual Property Enforcement Coordinator. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-4347 or yagerl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The Prioritizing Resources and Organization for Intellectual Property Act of 2008 (PRO-IP Act) directed GAO to conduct a study on the quantification of the impacts of imported and domestic counterfeits on the U.S. manufacturing industry and the overall economy of the United States. After conducting initial research, we determined that the U.S. government did not systematically collect data and perform analysis on the impacts of counterfeiting and piracy on the U.S.
economy, and concluded that it was not feasible to generate our own data or attempt to quantify the economic impact of counterfeiting or piracy on the U.S. economy based on the review of existing literature and interviews with experts. In addition, we noted that many of the existing studies and literature on economic effects address both counterfeiting and piracy. Based on discussions with staff from the House and Senate Judiciary Committees, we agreed that we would (1) examine existing research on the effects of counterfeiting and piracy on consumers, industries, government, and the U.S. economy; and (2) identify insights gained from efforts to quantify the effects of counterfeiting and piracy on the U.S. economy. To address both of these objectives, we interviewed officials and representatives from industry associations, nongovernmental organizations, academic institutions, and U.S. government agencies and the multilateral Organization for Economic Cooperation and Development (OECD). We also reviewed documents and studies quantifying or discussing the impacts of counterfeiting and piracy on the U.S. economy, industry, government, and consumers. Specifically, we reviewed quantitative and qualitative studies published since 1999 of the economic impact of intellectual property (IP) infringements to examine the range of impacts of counterfeiting and piracy on various stakeholders (both positive and negative) and to identify other insights about the nature of counterfeit markets, approaches to developing estimates, and the role IP plays in the U.S. economy. We identified these reports and studies through a literature search and discussions with representatives from industry associations, nongovernmental organizations, academic institutions, U.S. government agencies, and the OECD to obtain their views on the most relevant studies to review. Our literature review also included the OECD studies that examined the economic impact of counterfeiting and piracy. Although the OECD studies are global in scope rather than focused on the U.S. economy, their unique nature and prominence as the most comprehensive attempt to quantify the impacts of counterfeiting and piracy warranted their inclusion within our scope. See the bibliography for a partial list of references we consulted. We did not assess or evaluate the accuracy of quantitative estimates or other data found in these studies. We reviewed the studies primarily to obtain information on the range of effects from counterfeiting and piracy, different methods and assumptions used in determining effects, and insights gained from these efforts. In selecting studies for review, we sought to include a range of industries and methodologies. In some cases, we interviewed the authors of these reports to obtain additional information. We conducted structured interviews with subject matter experts to obtain their views on efforts to quantify the economic impacts of counterfeiting and piracy and methodological approaches, the range of impacts of counterfeits and piracy, and insights on counterfeiting activities and markets. We identified experts through a literature review and discussions with relevant government officials, industry and consumer representatives, academics, and other stakeholders. These subject matter experts were selected from a population of individuals from government, academia, industry, and professional organizations.
More specifically, our criteria for selecting experts to interview included: type and depth of experience, for instance, whether the expert had authored a widely referenced study or article on the topic, and whether the expert was referred to us by at least one other interviewee as someone knowledgeable about the topic; relevance of published work to this engagement; representation of a range of perspectives; representation of relevant organizations and sectors including, where applicable, representatives from government, academia, industry, and professional organizations; and other subject matter experts' recommendations. We developed a common list of structured interview questions that we asked of each of the experts. We pretested our questions with two of our initial respondents and refined our questions based on their input. The structured interviews included questions on definitions of counterfeit and pirated goods; effects of counterfeiting and piracy; and their views on methodologies and studies that quantify the effects of counterfeiting and piracy, as well as assumptions used. Individuals or organizations that we met with for these structured interviews are listed below: the Business Software Alliance (BSA); Peggy Chaudhry, Villanova University; Joe Karaganis, Social Science Research Council; Keith Maskus, University of Colorado; Felix Oberholzer-Gee, Harvard University; and Stephen Siwek, Economists Inc. We provided a draft of this report to the Departments of Commerce, Justice, Homeland Security, and Health and Human Services; the Office of the U.S. Trade Representative; the International Trade Commission; and the Office of the U.S. Intellectual Property Enforcement Coordinator to obtain technical comments. We received comments from the Departments of Homeland Security and Justice, and the Office of the U.S. Intellectual Property Enforcement Coordinator and made changes as appropriate. The PRO-IP Act also directed us to report on the nature and scope of IP statutory and case laws and the extent of related investigation and enforcement activity, which we addressed in our 2008 report, Intellectual Property: Federal Enforcement Has Generally Increased, but Assessing Performance Could Strengthen Law Enforcement Efforts (GAO-08-157). In addition to the contact named above, Christine Broderick, Assistant Director; Jeremy Latimer; Catherine Gelb; Pedro Almoguera; Shirley Brothwell; Karen Deans; Matthew Jones; and Diahanna Post made key contributions to this report. In addition, Virginia Chanley and Ernie Jackson provided technical assistance. Business Software Alliance (BSA), Sixth Annual BSA-IDC Global Software 08 Piracy Study, Washington, D.C.: BSA, May 2009. Customs and Border Protection, Press Release, May 29, 2002, Washington, D.C.: May 2002, http://www.cbp.gov/xp/cgov/newsroom/news_releases/archives/legacy/2002/52002/05292002.xml (accessed April 4, 2009). Federal Bureau of Investigation, Press Release, July 17, 2002, Washington, D.C.: July 2002, http://www.fbi.gov/pressrel/pressrel02/outreach071702.htm (accessed March 30, 2010). Freeman, Gregory, Nancy D. Sidhu, and Michael Montoya, A False Bargain: The Los Angeles County Economic Consequences of Counterfeit Products. Los Angeles, Calif.: Los Angeles County Economic Development Corporation, February 2007. Hui, Kai-Lung and Ivan Png, "Piracy and the Legitimate Demand for Recorded Music," Contributions to Economic Analysis & Policy, vol. 2, issue 1, article 11 (2003). International Chamber of Commerce, Counterfeiting Intelligence Bureau, London: 2010, http://www.icc-ccs.org/index.php?option=com_content&view=article&id=29&Itemid=39 (accessed March 30, 2010). L.E.K.
Consulting, The Cost of Movie Piracy, sponsored by the Motion Picture Association, 2006. Motor & Equipment Manufacturers Association (MEMA), Stop Counterfeiting of Automotive and Truck Parts. MEMA, 2005. Oberholzer-Gee, Felix and Koleman Strumpf, "The Effect of File Sharing on Record Sales: An Empirical Analysis," Journal of Political Economy, vol. 115, no. 1, 2007. Organisation for Economic Cooperation and Development (OECD), The Economic Impact of Counterfeiting and Piracy. Paris: OECD, 2008. OECD, Magnitude of Counterfeiting and Piracy of Tangible Products: An Update. Paris: OECD, November 2009. OECD, Piracy of Digital Content. Paris: OECD, 2009. Rob, Rafael and Joel Waldfogel, "Piracy on the High C's: Music Downloading, Sales Displacement, and Social Welfare in a Sample of College Students," Journal of Law and Economics (April 2006). Siwek, Stephen E., The True Cost of Copyright Industry Piracy to the U.S. Economy, Institute for Policy Innovation (IPI), IPI Center for Technology Freedom, Policy Report 189, October 2007. Thompson, William C. Jr., Bootleg Billions: The Impact of the Counterfeit Goods Trade on New York City, City of New York, Office of the Comptroller, November 2004. Baumol, William J., The Free Market Innovation Machine: Analyzing the Growth Miracle of Capitalism. Princeton and Oxford: Princeton University Press, 2002. Chaudhry, Peggy and Alan Zimmerman, The Economics of Counterfeit Trade: Governments, Consumers, Pirates, and Intellectual Property Rights. Berlin: Springer, 2009. Fink, Carsten and Carlos M. Correa, The Global Debate on the Enforcement of Intellectual Property Rights and Developing Countries. International Centre for Trade and Sustainable Development, Issue Paper No. 22, February 2009. Forzley, Michele, Counterfeit Goods and the Public's Health and Safety. International Intellectual Property Institute, 2003. Horan, Amanda, Christopher Johnson, and Heather Sykes, Foreign Infringement of Intellectual Property Rights: Implications for Selected U.S. Industries. No. ID-14, U.S. International Trade Commission, Office of Industries Working Paper, October 2005. Mansfield, Edwin, Industrial Research and Technological Innovation. New York, N.Y.: W.W. Norton, 1968. RAND Corporation, Film Piracy, Organized Crime, and Terrorism. RAND Safety and Justice Program and the Global Risk and Security Center, Santa Monica, Calif.: 2009. Rosenberg, Nathan, Exploring the Black Box: Technology, Economics, and History. Cambridge, United Kingdom: Cambridge University Press, 1994. Schumpeter, J.A., Business Cycles: A Theoretical, Historical and Statistical Analysis of the Capitalist Process. New York: McGraw-Hill, 1939. Siwek, Stephen E., Engines of Economic Growth: Economic Contributions of the US Intellectual Property Industries. Washington, D.C.: Economists Incorporated, 2005. Staake, Thorsten, Frederic Thiesse, and Elgar Fleisch, "The Emergence of Counterfeit Trade: A Literature Review," European Journal of Marketing, vol. 43, no. 3/4, 2009. Staake, Thorsten and Elgar Fleisch, Countering Counterfeit Trade: Illicit Market Insights, Best-Practice Strategies, and Management Toolbox. Berlin: Springer, 2008. U.S. Department of Commerce, Bureau of Industry and Security, Office of Technology Evaluation, Defense Industrial Base Assessment: Counterfeit Electronics. Washington, D.C., January 2010.
In October 2008, Congress passed the Prioritizing Resources and Organization for Intellectual Property Act of 2008 (PRO-IP Act), to improve the effectiveness of U.S. government efforts to protect intellectual property (IP) rights such as copyrights, patents, and trademarks. The act also directed GAO to provide information on the quantification of the impacts of counterfeit and pirated goods. GAO (1) examined existing research on the effects of counterfeiting and piracy on consumers, industries, government, and the U.S. economy; and (2) identified insights gained from efforts to quantify the effects of counterfeiting and piracy on the U.S. economy. GAO interviewed officials and subject matter experts from U.S. government agencies, industry associations, nongovernmental organizations, and academic institutions, and reviewed literature and studies quantifying or discussing the economic impacts of counterfeiting and piracy on the U.S. economy, industry, government, and consumers. GAO is making no recommendations in this report. According to experts and literature GAO reviewed, counterfeiting and piracy have produced a wide range of effects on consumers, industry, government, and the economy as a whole, depending on the type of infringements involved and other factors. Consumers are particularly likely to experience negative effects when they purchase counterfeit products they believe are genuine, such as pharmaceuticals. Negative effects on U.S. industry may include lost sales, lost brand value, and reduced incentives to innovate; however, industry effects vary widely among sectors and companies. The U.S. government may lose tax revenue, incur IP enforcement expenses, and face risks of counterfeits entering supply chains with national security or civilian safety implications. The U.S. economy as a whole may grow more slowly because of reduced innovation and loss of trade revenue. Some experts and literature also identified some potential positive effects of counterfeiting and piracy. Some consumers may knowingly purchase counterfeits that are less expensive than the genuine goods and experience positive effects (consumer surplus), although the longer-term impact is unclear due to reduced incentives for research and development, among other factors. Three widely cited U.S. government estimates of economic losses resulting from counterfeiting cannot be substantiated due to the absence of underlying studies. Generally, the illicit nature of counterfeiting and piracy makes estimating the economic impact of IP infringements extremely difficult, so assumptions must be used to offset the lack of data. Efforts to estimate losses involve assumptions such as the rate at which consumers would substitute counterfeit for legitimate products, which can have enormous impacts on the resulting estimates. Because of the significant differences in types of counterfeited and pirated goods and industries involved, no single method can be used to develop estimates. Each method has limitations, and most experts observed that it is difficult, if not impossible, to quantify the economy-wide impacts. Nonetheless, research in specific industries suggests that the problem is sizeable, which is of particular concern as many U.S. industries are leaders in the creation of intellectual property.
USPS’s current field-office structure includes 7 area offices and 67 district offices. USPS’s management structure is decentralized, with the area and district offices overseeing a vast network of facilities, which, as of December 19, 2011, included 31,060 post offices and 461 mail- processing facilities (see fig. 1). According to USPS data, the operating cost of its field offices in fiscal year 2011 totaled about $1.2 billion—the majority of which (about $1 billion) was spent operating district offices. The total operating costs of USPS’s field offices represented less than 2 percent of its approximately $71 billion fiscal year 2011 operating expenses. Significant policy and operational decisions are made at USPS headquarters and disseminated through the managerial hierarchy, including area and district offices. Each of the 67 district offices reports to a designated area office, which, in turn, reports to headquarters. Employees in the area offices are generally responsible for overseeing district offices and facilities that have an area-wide impact, such as mail- processing facilities, while employees in district offices are typically responsible for overseeing post offices and other facilities that serve a particular district. For example, as of December 19, 2011, the Southwest Area office was responsible for overseeing 90 mail-processing facilities, while the 12 district offices in the Southwest Area were responsible for overseeing operations at about 5,600 post offices—an average of about 470 post offices per district. Each area and district office is organized into departments, which include operations support, human resources, finance, and marketing. In 2011, USPS had 4,985 field office employees (806 area and 4,179 district employees), who comprised less than 1 percent of USPS’s workforce of about 557,000 career employees. About 85 percent of USPS’s career workforce, including most mail carriers and mail- processing staff, is covered by collective bargaining agreements with employment protections, such as no lay-off provisions. In contrast, most USPS field employees, which include area and district office managers, are not covered by collective bargaining agreements. From 2002 to 2011, USPS reduced the number of employees in area and district offices by almost 56 percent (see fig. 2) by, among other actions, closing 4 area offices and 18 district offices and centralizing some accounting, human resources, and other services. (App. I provides additional information on these office closures and selected centralizations of administrative services previously performed in field offices.) In September 2011, the Postmaster General testified before Congress that due to USPS’s urgent need to address its financial situation, USPS was undertaking or planning several efforts intended to improve the efficiency in its retail, mail-processing, and delivery networks.February 2012, USPS issued a 5-year business plan in which it estimated that these efforts and others, such as reducing Saturday deliveries, could restore USPS to profitability. USPS estimates that it could save $9.1 billion annually, the majority of which will be achieved by 2016, through changes in the following areas: In Retail network: USPS plans to downsize its retail network for potential savings of $2 billion annually beginning in 2016. 
As part of this effort, USPS plans to review about half of its post offices to identify opportunities to close facilities, reduce work hours, and expand the use of lower-cost alternatives, such as self-service kiosks and partnerships with retailers. Mail-processing network: USPS plans to downsize its mail-processing network and reduce costs in its transportation network for potential savings of $4.1 billion annually beginning in 2016. As part of this effort, USPS anticipates closing or consolidating about half of its mail-processing facilities and reducing the number of its employees. Delivery network: USPS is realigning its delivery routes for potential savings of $3 billion annually beginning in 2016. As part of this effort, USPS plans to eliminate and consolidate approximately 20,000 out of its 144,000 city routes. While reducing USPS's network costs is essential, the Postmaster General testified that USPS also must generate additional revenue to deal with its financial crisis. To help accomplish this, he said he had implemented a variety of "core business strategies" to, among other things, (1) strengthen the value of mail to businesses, (2) improve customers' experience using USPS's services, and (3) compete with private sector firms for the package business. USPS's fourth core business strategy—becoming a "leaner, faster, and smarter" organization—relates to reducing its network costs and includes the actions described above. To address USPS's financial problems, several Members of the 112th Congress have introduced postal reform legislation which, if enacted, would likely impact USPS's downsizing plans, including its ability to close retail and mail-processing facilities. Certain legislative proposals also include provisions requiring USPS to develop and submit to Congress plans for further consolidating its field offices. While USPS needs authority from Congress to make some of its planned network changes, it currently has the flexibility to continue consolidating area and district offices. In December 2011, USPS announced a moratorium on closing its post offices and mail-processing facilities until May 15, 2012. The moratorium was established in response to congressional requests for additional time to enact comprehensive postal reform legislation. In the interim, USPS is continuing to review the feasibility of closing retail facilities and recently completed studies of mail-processing facilities for consolidation and possible closure. Field employees have key roles in USPS's efforts to reduce costs in its retail, mail-processing, and delivery networks. The extent of area and district employees' involvement in specific cost-reduction efforts varies, however. For example, area employees, who comprise 16 percent of the field employees, generally provide guidance and oversee the implementation of all cost-reduction efforts in their area to ensure consistency in how these efforts are implemented. In addition, area employees prepare proposals for headquarters on potential consolidations of mail-processing operations, based on information gathered and presented by district employees. On the other hand, district employees, who account for the remainder of field employees, are directly responsible for carrying out both the retail and mail-processing facility reviews and other cost-reduction efforts, such as consolidating delivery routes. Since 2006, USPS has reviewed over 3,000 retail facilities and closed 686—about 23 percent of those reviewed.
In 2011, as part of USPS's Retail Access Optimization Initiative, the agency announced plans to review another 3,650 retail facilities for possible closure. Reviewing retail facilities for possible closure generally involves one to three stages, depending on the outcome of each stage. Area officials oversee activities related to each stage to ensure compliance with USPS's requirements. District employees are involved in completing work required for each of these stages. For example, in the first stage, district employees study and prepare a report on the feasibility of closing a particular facility and examine the potential effects on (1) services, (2) customers, and (3) USPS employees, as well as the potential economic savings associated with closing a particular retail facility. To identify these effects, district employees collect and analyze operational, financial, and delivery data related to the facility. District employees also distribute questionnaires to potentially affected customers about the customers' service needs and access to postal services in their vicinity and hold community meetings to discuss, among other matters, the reason for the proposed change in service and to respond to customer inquiries and concerns. According to USPS officials, these questionnaires and community meetings provide USPS with local information that headquarters might not otherwise have available during its decision-making process. (Fig. 3 provides more information on USPS's process for reviewing retail facilities for possible closure.) When headquarters officials decide to close a retail facility, district employees carry out a variety of activities related to the closure. For example, district employees must move equipment, realign any affected delivery routes, and transfer mail carriers and other employees to other locations. Related to this, district employees also are responsible for identifying potential positions for staff affected by the closure. None of the 3,650 facilities USPS identified for review in 2011 had been closed at the completion of our review. USPS is continuing to review these facilities and could decide to close some of them after the moratorium on facility closures expires on May 15, 2012. USPS officials acknowledge that closing retail postal facilities is highly contentious. As a result, according to USPS officials, USPS is exploring additional options to reduce its retail facility costs by, for example, reducing employee work hours and shortening operating hours at selected facilities. Since 2006, USPS has taken several actions to reduce its costs by improving the operational efficiency of its mail-processing network. As discussed in our April 2012 report, these actions included consolidating operations at various types of mail-processing facilities and closing unneeded facilities, which, according to USPS, resulted in about $2.4 billion in cost savings. USPS also completed Area Mail Processing reviews of 264 of its mail-processing facilities to examine the feasibility of consolidating mail-processing operations from one or more postal facilities to other facilities to improve USPS's operational efficiency. Of the 264 facilities that were reviewed, 35 will remain open, 6 are on hold for further study, and 223 have been found feasible for consolidation, according to USPS. Area and district employees completed these reviews, which examined opportunities to consolidate mail origination and destination operations.
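As a simple bookkeeping illustration of the Area Mail Processing review outcomes described above, the short sketch below (hypothetical code, not a USPS system) tallies the reported dispositions of the 264 reviews and confirms that they account for all facilities reviewed.

```python
# Reported outcomes of the 264 Area Mail Processing reviews cited above.
amp_outcomes = {
    "remain open": 35,
    "on hold for further study": 6,
    "found feasible for consolidation": 223,
}

total_reviewed = sum(amp_outcomes.values())
assert total_reviewed == 264, "outcome counts should sum to the 264 facilities reviewed"

for outcome, count in amp_outcomes.items():
    print(f"{outcome}: {count} facilities ({count / total_reviewed:.0%})")
```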
As discussed in our April 2012 report (GAO-12-470), the approximately $2.4 billion in cost savings derives from actions in three areas: USPS (1) closed nearly all of its Remote Encoding Centers (10 of 12) and Airport Mail Centers (76 of 77) between 2006 and 2011, (2) moved all of the operations previously performed at 21 Bulk Mail Centers into its Network Distribution Centers, and (3) completed 100 mail-processing consolidations. According to USPS, it considered a variety of criteria in identifying the 264 facilities for possible consolidation, including projected savings, service issues, and capacity within processing plants. For each mail-processing facility identified for review, district employees analyzed, among other things, the facility's mail volumes, work hours, and services, as well as the estimated costs and savings of a potential consolidation, and prepared a report on their findings that was reviewed by headquarters and area managers. After considering the results of the feasibility study, headquarters made a final determination on whether to consolidate one or more aspects of the facility's operations. Figure 4 provides more information on USPS's process for reviewing mail-processing facilities for possible closure or consolidation. If headquarters approves the consolidation of operations at a facility, area and district employees perform those activities. For example, area employees must move mail-processing equipment and transfer the facility's mail operations and transportation network to other USPS facilities. In addition, area and district employees must coordinate on repositioning employees into new positions as specified by their collective bargaining agreement. After the consolidation has been completed, area employees conduct—with input from district employees—post-implementation reviews to evaluate, among other matters, the impacts of the consolidation and actual cost savings. USPS uses a variety of routes to deliver its mail, but the two principal route types are "city" and "rural." City and rural carriers have different collective bargaining agreements and compensation systems. Generally, city carriers are paid by the hour with overtime, as applicable, while rural carriers are salaried employees. Declining mail volume has resulted in routes that city carriers can complete in less than 8 hours. Recognizing that these factors cause inefficiencies in USPS's city delivery network, in 2008, USPS and the National Association of Letter Carriers—the union that represents city carriers—entered into an agreement that permits USPS to conduct city route inspections and to realign routes that no longer reflect 8 hours of work into more efficient routes. In fiscal year 2011, USPS eliminated 6,821 of its 224,485 city and rural delivery routes (about 3 percent). In addition, USPS recently announced plans to eliminate another 20,000 city routes (about 9 percent of current routes) for an estimated $2 billion in annual savings. According to USPS officials, realigning delivery routes likely will continue well into the future, given expected mail volume declines and annual increases in addresses receiving mail, which, until recently, had historically grown by about 1 million addresses per year. Area and district employees have key roles in realigning city delivery routes. Specifically, area employees oversee the city route realignment process to ensure consistency across districts in their area, while district employees directly carry out the realignments.
To do so, district employees (1) gather and analyze information on the factors that affect delivery time; (2) observe the time a mail carrier spends servicing his or her route; and (3) if the route is determined to represent less than a full 8-hour work day, reconfigure the route so that it is more efficient. To evaluate the factors that affect delivery time, district managers consider, among other things, the number of addresses and mail volume on a particular route, the distance between addresses, the geographic location of the route (e.g., the downtown of a major metropolitan area versus a small town), and the mode of delivery (e.g., mail delivered to a curbside mailbox, a mail slot in a door, or a cluster box). In addition, district employees physically observe a mail carrier's daily activities, both in the office preparing mail for delivery and transporting and delivering the mail, to determine whether the route represents a full 8-hour day. If, after conducting these activities, district employees determine that the workload does not fill an 8-hour day, specially trained district employees use a computerized management tool—called the Carrier Optimal Routing system—to redesign routes. This system uses digital mapping, algorithms, and route inspection data to create efficient city carrier routes that are more compact and contiguous. As a result, USPS could, for example, consolidate portions of other city routes, such as routes that necessitate carrier overtime, to complete or augment the prior reconfigured route, thereby eliminating the need for overtime. Because route realignments reorder mail deliveries along city routes, the realignments result in major changes to USPS's database for managing addresses, which, according to district managers, require district Address Management Systems Specialists to update the database on an ongoing basis. According to USPS field officials, if updates to the database are not made in a timely manner, delivery efficiency and customer service would be degraded. As with USPS's cost-reduction efforts, area and district employees have a significant role in several aspects of USPS's efforts to generate additional revenue through three of the Postmaster General's four core business strategies: (1) strengthening the value of mail to businesses, (2) improving its retail customers' experience, and (3) competing for the package business. Area and district employees promote and oversee a variety of efforts intended to strengthen the value of mail to businesses. Collectively, these efforts are intended to enhance how U.S. businesses contact their customers to deliver billing statements and notifications (using First-Class Mail) and advertisements and offers (using Standard Mail). For example, area and district Business Service Network employees (marketing employees) provide direct and ongoing customer service to business mailers to, among other things, help the mailers conveniently process their mail and to correct any mailing problems, such as shipment delays, that may arise. In 2011, USPS generated about $50 billion (76 percent of its total operating revenue) from business mailers. Of this amount, $40 billion (60 percent of its total operating revenue) came from business mailers that area and district Business Service Network employees directly serviced. Area and district employees also have a role in promoting the value of mail with new business mailers. According to the Postmaster General's speech in May 2011, three quarters of U.S.
businesses are not using the mail to market themselves to potential customers. Thus, he said, encouraging these businesses to do so represents an opportunity to increase USPS's revenue. In January 2011, USPS introduced a new initiative called Every Door Direct Mail to (1) make it easier for small and medium-sized businesses to advertise through the mail, and (2) enable local businesses to target potential customers by street, as opposed to specific mailing addresses. Area employees oversee the implementation of this initiative in districts within their area and monitor performance metrics and revenue earned. Similarly, district employees promote the initiative in their districts to attract new customers in their locations. According to USPS, this initiative generated over $92 million in revenue in the 9 months following its introduction in January 2011. Improving the retail customer experience means maintaining and growing the customer base through improved customer service. Area employees have a key role in monitoring the quality of customer interactions and resolving any performance problems identified. For example, area employees told us that they regularly monitor the results of, among other things, customer surveys in specific post offices as well as in districts as a whole. When these data identify specific performance problems, area employees share the results with district employees, who are then expected to work with particular facilities and individuals to correct the identified service-related problems. District employees also track the performance and revenue generated by third parties who provide alternative access to postal products through contract postal units and work with local postmasters, who most directly oversee these facilities, to improve service and maximize revenue. USPS's field employees also have key roles in helping USPS successfully compete for and grow its package business. Overall, according to USPS, its package deliveries have increased from 12.7 percent of its revenue in fiscal year 2006 to 16.1 percent in fiscal year 2011. District employees work directly with employees at post offices and mail-processing facilities to train employees on how to properly scan packages for delivery—a key factor in growing USPS's package business. Proper scanning helps ensure that packages travel through its delivery network efficiently and that customers and USPS can track the packages to determine their current location. USPS's competitors offer this service, and USPS hopes to improve the reliability of its package-tracking services to capture portions of its competitors' business. In addition, field employees we interviewed in one of the four areas we selected for our review developed a program to generate additional revenue by targeting small businesses that are not currently using USPS's services. Specifically, in the Southwest Area, area employees worked with employees of the Dallas district office to identify and contact small businesses that generate less than $10,000 annually in postage sales and that currently use USPS competitors, such as United Parcel Service, and to persuade these companies to use USPS for their package mailing needs. According to USPS, from August 1, 2011, through February 10, 2012, this initiative generated about $11.9 million in revenue from new small business customers. USPS headquarters officials stated that USPS is currently considering whether to implement similar revenue-generating efforts elsewhere.
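To put the revenue figures above in rough perspective, the sketch below estimates the monthly run rate of the Southwest Area small-business initiative and the growth in the package share of revenue; the averaging method is an assumption made for illustration and is not how USPS reports these results.

```python
from datetime import date

# Southwest Area small-business initiative: about $11.9 million in revenue
# from August 1, 2011, through February 10, 2012 (figures cited above).
revenue_millions = 11.9
days = (date(2012, 2, 10) - date(2011, 8, 1)).days
monthly_run_rate = revenue_millions / (days / 30.44)  # 30.44 is the average number of days per month
print(f"Approximate monthly run rate: ${monthly_run_rate:.1f} million over {days} days")

# Package deliveries as a share of USPS revenue, per the report.
share_fy2006, share_fy2011 = 12.7, 16.1
print(f"Package share of revenue grew {share_fy2011 - share_fy2006:.1f} percentage points from FY2006 to FY2011")
```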
In 2011, USPS consolidated its field offices by closing eight offices, centralizing support services, and eliminating field positions for an estimated $150 million in annual savings. However, USPS field employees we interviewed were concerned that the staffing reductions could negatively affect their ability to carry out additional cost-savings and revenue-generating efforts. USPS issued a plan in December 2011 to evaluate area and district offices for possible consolidation, but, according to headquarters officials, USPS does not anticipate initiating these evaluations until after the completion of cost-reduction efforts in its retail, mail-processing, and delivery networks. Prior to the consolidation, USPS estimated that the 2011 consolidation would result in annual cost savings of $150 million through December 2011. According to USPS, the goals of this consolidation were to, among other things, enhance and strengthen customer service and allow USPS to more quickly adapt to changing market forces, such as continuing mail volume declines. USPS closed its Southeast Area office and seven district offices, and assigned functions previously performed at these locations to other area and district offices within close proximity. According to USPS headquarters officials, USPS considered a variety of factors in deciding which field offices to close and which positions to eliminate, including mail volume, number of customers served, population density, number of delivery routes, and revenue generated. Overall, USPS reduced its field office positions by 1,946, or 26 percent. At the conclusion of the consolidation in September 2011, USPS had 817 area and 4,698 district office positions, totaling 5,515 field positions. Of the four departments that we selected for review, most of the field position reductions were in operations support (35 percent) and human resources (35 percent). The smallest reductions (14 percent) were in marketing (see table 1). As part of this consolidation, USPS also centralized several district support services positions, including the following, into other locations, which resulted in a net reduction of 247 positions. USPS centralized Family and Medical Leave Act Coordinators from district offices to its Human Resources Shared Services Center in Greensboro, North Carolina. Overall, USPS eliminated 106 district positions and, according to officials at the center, created 45 positions to handle the work previously carried out by the districts' coordinators—a net reduction of 61 positions. USPS centralized responsibilities for allocating budgets from district to area offices. As part of this change, it eliminated 154 of its 221 district budget positions and created 11 positions in its area offices—a net reduction of 143 positions. According to area officials, moving this responsibility to area offices will help ensure that all facilities within an area have standardized and comparable budgets. USPS also eliminated 43 of the 78 district Mailpiece Design Analyst positions and centralized the remaining 35 positions at area offices. The remaining analysts now provide design services to USPS customers through a centralized hotline. Appendix I provides additional information on selected centralization actions between 2002 and 2005. Several area and district officials expressed concern about how staff reductions and the allocation of field resources could affect their ability to manage ongoing and planned cost-reduction and revenue-generation efforts.
Such concerns include the following: Four of the six district operations support managers whom we interviewed raised concerns about the number of Address Management Systems Specialist positions that USPS eliminated in the 2011 consolidation. According to these officials, the loss of these positions could limit their ability to perform future route realignments. Overall, USPS reduced the number of these district positions from 624 to 427—a 32 percent reduction. As discussed, according to field officials, Address Management Systems Specialists are critical to USPS's ongoing efforts to update its address management database following route realignments. Seven of the 10 area and district marketing managers we interviewed also raised concerns about the number of marketing positions eliminated in the 2011 consolidation. Of these reductions, 38 percent were area and district Business Service Network positions. These reductions may have been particularly difficult in the Capital Metro Area office because the area gained responsibility for managing 25 additional business mailer accounts previously handled by the Southeast Area office. Several field officials told us that given USPS's increased focus on generating revenue, it is important that area and district offices be staffed appropriately to oversee the range of marketing activities and to maintain business mailer customers. A district marketing manager reiterated this point, indicating that most business mailers expect personal service to resolve any mailing issues and that, consequently, USPS should ensure that these business mailers receive consistent and reliable service to reduce the possibility of losing their business to competitors. As we have reported, businesses that publish and mail Periodicals, such as daily or weekly news magazines, have expressed concern about USPS's ability to provide reliable service. According to these business mailers, they will likely (1) accelerate their efforts to shift subscribers from hard copy mail to electronic communication or (2) otherwise stop using USPS if it is unable to provide reliable service. In addition, 12 of the 30 district managers we interviewed expressed concern that when USPS allocated resources for the 2011 consolidation, it did not fully consider that some offices would be picking up additional work from the field office closures. For example, several district managers told us that during this consolidation, several field position reductions were made that, in their view, did not reflect office workload variations. The managers said that their departments are now understaffed. For example, the District Manager at the Connecticut Valley District told us that following the 2011 consolidation, her district is now one of the largest districts in the country, both with respect to the number of USPS employees and geographic size. Specifically, she said her district grew from 11,939 employees to 15,421 (an increase of 29 percent) and added over 100 facilities, while losing 20 district positions. She expressed concern that despite overseeing one of the largest districts, she experienced the same number of staffing reductions as other districts with smaller workloads. Eight field officials told us that long work hours resulting from the 2011 consolidation, combined with other factors, such as employee uncertainty about their future employment, have made it difficult to recruit and retain area and district employees for management positions.
For example, one district office manager told us that employees in her department have been seeking management opportunities outside USPS because of these concerns. Another district manager commented that he has seen a decrease in the number of employees willing to move into managerial positions because of uncertainty about future promotional opportunities and the stress associated with these positions. In addition, 17 field officials we talked to expressed concern that the loss of a significant number of senior, experienced area and district employees during the 2011 consolidation might negatively affect USPS's ability to manage field operations. USPS headquarters officials stated that while they understand the concerns expressed by area and district employees, the changes undertaken as part of the 2011 consolidation were intended to align with other actions that USPS has taken to standardize and streamline support services. For example, according to these officials, USPS has developed several Web-based systems to automate and standardize administrative tasks that were previously conducted by district staff. As these tasks are streamlined, fewer district resources are needed. In addition, as discussed, USPS headquarters officials told us that USPS considered a variety of factors in deciding its 2011 consolidation, including mail volume, number of customers, population density, number of delivery routes, and revenue generated. Further, according to USPS headquarters officials, USPS intends to consider workload impacts in its future field-office consolidations. In December 2011, USPS issued a plan for evaluating area and district offices for possible consolidation and for carrying out future consolidations. According to the OIG, this plan addresses its recommendations related to USPS's 2009 consolidation. In particular, OIG officials told us that the document addresses recommendations to USPS to develop a plan that (1) periodically evaluates area and district offices for possible consolidation, (2) guides decisions on future field-office consolidations, and (3) considers factors such as an office's mail volume, workload, and proximity to other offices. In addition, OIG officials told us that the plan addresses the agency's recommendation that USPS develop procedures for maintaining adequate documentation of its field office evaluations and consolidation decisions. USPS's December 2011 plan specifies that it will take numerous steps to evaluate field offices for possible consolidation, including conducting periodic evaluations of area and district offices to assess the need for consolidations; developing a business case, which includes expected cost savings and benefits, for proposed field-office consolidations; using reliable data sources to evaluate area and district offices for possible consolidation; adequately documenting all analyses and data used to make consolidation decisions; and conducting post-consolidation reviews to identify key achievements and actual cost savings and to document lessons learned. USPS's past area and district consolidations have lacked documentation and transparency. For example, despite numerous requests, USPS did not provide us with any documentation of the analyses it used or the approval process for its decision to consolidate field operations in 2011. Similarly, in 2010, the OIG reported that USPS could not always provide documentation supporting its 2009 and earlier field-office consolidations.
In addition, USPS has not completed postconsolidation reviews to assess either its lessons learned or its actual cost savings. Headquarters officials told us that the plan it issued in December 2011 should address these past concerns. These officials also said that USPS intends to ensure that its future evaluations consider the operational impacts of potential field-office consolidations on ongoing and planned initiatives, such as those related to downsizing its retail, mail-processing, and delivery networks. While USPS's recent plan indicates that it intends to periodically evaluate its field office structure for possible consolidation, it does not specify when it will initiate these evaluations. According to headquarters officials, USPS does not plan to initiate these evaluations until after the completion of cost-reduction efforts in its retail, mail-processing, and delivery networks, efforts that USPS intends to complete in 2016. These officials also told us that further consolidation of field offices would be counterproductive at this time because field employees are key to the successful accomplishment of these cost-reduction efforts. In addition, the headquarters officials noted that cost savings from future field consolidations would be minimal compared to the roughly $9 billion USPS estimates can be saved by streamlining its retail, mail-processing, and delivery networks. Thus, according to these officials, for the time being, USPS needs its current field-office structure to focus on other efforts that will result in the largest potential cost savings. USPS's dire financial outlook necessitates urgent action to align its total network and reduce its costs as mail volume continues to decline. To reduce costs, USPS reduced the number of employees in its area and district offices by almost 56 percent from 2002 to 2011. And, in 2011, it announced additional USPS-wide initiatives to save an estimated $9.1 billion annually by 2016. USPS also recently issued a plan for evaluating area and district offices for future consolidations; however, USPS has chosen to hold off on future field-office consolidations until after the USPS-wide initiatives are complete, which should allow USPS to make additional field-office changes, if needed, based on a network that is aligned with the reduced mail volume. This decision seems appropriate in view of the importance of area and district offices in implementing and managing these initiatives. If USPS decides to move forward with future field-office evaluations, its plan for doing so should address past concerns about inadequate documentation and transparency and lead to postconsolidation reviews to assess lessons learned and to measure actual savings. We provided a draft of this report to USPS for review and comment. USPS had no comments. We are sending copies of this report to the appropriate congressional committees, the Postmaster General, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions on this report, please contact me at (202) 512-2834 or stjamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and key contributors to the report are listed in appendix III.
Appendix I: Timeline of Recent Field Office Consolidations and Information on Selected Centralizations of Administrative Services
Prior to 1992, USPS's field-office structure consisted of 5 regions, 73 field divisions, and 144 management sectional centers. USPS centralized some of its accounting services in 2002 and 2003. As part of this effort, USPS replaced its prior information technology systems with a new computer system—the Standard Accounting for Retail System. In addition, USPS reviewed the range of accounting services performed at its district offices to identify services that could be standardized and streamlined and, as a result of this effort, eliminated 1,063 district accounting positions. To handle work previously performed by district employees, USPS created 295 positions at three accounting shared service centers in Eagan, Minnesota; St. Louis, Missouri; and San Mateo, California—a net reduction of 768 USPS positions. In fiscal year 2005, USPS completed a postcentralization review and determined that its actions had resulted in cost savings of $56.8 million. In 2004, USPS centralized activities related to its investigations of equal employment opportunity complaints within a shared service center—the National Equal Opportunity Employment Service Office—located in Tampa, Florida. To accomplish this centralization, USPS eliminated 169 area and district human-resource positions and created 47 positions at the new center to handle the complaints—a net reduction of 122 employee positions. According to USPS officials at the center, the goals of this centralization were to (1) correct problems in USPS's equal employment opportunity complaint process, (2) reduce a substantial backlog of cases, and (3) implement a consistent process for investigating future complaints. Among other responsibilities, employees at this center review complaints filed by USPS employees and applicants for employment, assign complaints to contracted investigative personnel, and review and make determinations about the investigators' findings. According to USPS officials at the center, centralizing this work resulted in about $13 million in cost savings as of fiscal year 2011. In addition, beginning in 2005, USPS began soliciting federal agencies to carry out their agencies' equal employment opportunity investigations through the center. According to officials at the center, 17 agencies have entered into interagency agreements with USPS for this purpose. From 2006 to 2011, USPS earned about $3 million in revenue from these agreements, according to officials at the center. In 2005, USPS established its Human Resource Shared Services Center in Greensboro, North Carolina, to centralize most of its services related to employee benefits; personnel actions; safety and injury compensation; and employee hiring, retirements, and reassignments. According to USPS officials at the center, the centralization was needed to streamline and standardize its human resources services. As part of this effort, USPS introduced a new information technology system—the Human Capital Enterprise System—which integrated information from a variety of prior human resource computer systems. According to USPS officials, the introduction of the improved computer system, along with the centralization, allowed USPS to reduce its district human resource staff by approximately 1,300-1,400 positions.
While USPS eliminated well over a thousand district positions, it also created 457 positions at the shared services center to provide these services—a net staff reduction of roughly 900 employee positions. According to a Human Resources official, the centralization of human resource services resulted in cost savings of about $150 million annually from 2007 through 2011. This report describes: (1) the role of area and district office employees in implementing the U.S. Postal Service's (USPS) cost-savings and revenue-generation efforts and (2) USPS's actions to consolidate its field office structure in 2011 and the impact of these actions. This report also describes actions taken by USPS between 2002 and 2005 to centralize selected services previously conducted at field offices to other USPS locations. To address these objectives, we reviewed numerous documents, including prior GAO and USPS Office of Inspector General (OIG) reports, USPS documents, and the September 6, 2011, testimony of the Postmaster General. We also interviewed USPS officials, including headquarters officials and officials in area and district offices. We interviewed officials at four area offices—the Capital Metro Area, the Northeast Area, the Southwest Area, and the Pacific Area—that we selected based on several factors, including geographic dispersion throughout the U.S. and whether the office had recently absorbed the responsibilities of other offices following a USPS field-office consolidation. We selected a mixture of area offices that had been affected by recent consolidations and one office that had not been affected. We also interviewed officials from six district offices—Connecticut Valley; Dallas; Fort Worth; Los Angeles; Northern Virginia; and Santa Ana—which we selected based on their proximity (i.e., within about 100 miles) to the four area offices we selected. In total, we interviewed over 50 USPS officials, including USPS's Chief Human Resources Officer; Area Vice Presidents; District Managers; area and district officials responsible for operations support, human resources, finance, and marketing in their locations; and managers representing USPS's accounting, equal opportunity employment investigations, and human resources shared services centers. We also interviewed representatives from the National Association of Postal Supervisors, which represents, among others, USPS area and district managers. To describe the role of field office employees in implementing USPS's cost-savings and revenue-generation efforts, we reviewed USPS documents describing USPS's strategic goals and the role of area and district employees in carrying out potential consolidations of retail, mail-processing, and delivery networks. To describe USPS's actions to consolidate its area and district offices in 2011 and the impact of this consolidation, we reviewed a 2010 OIG report on the 2009 consolidation, which included recommendations for USPS related to future field-office consolidation actions, and a variety of USPS documents, including USPS's (1) statements on the expectations and goals for the 2011 consolidation, (2) 2011 Annual Report, (3) Form 10-K filing with the Securities and Exchange Commission on its 2011 Annual Report, (4) plan issued in December 2011 entitled Area and District Office Structure Evaluations Strategy, Policy and Process, and (5) 5-Year Business Plan issued in February 2012. We also interviewed USPS officials to learn about the financial and operational impacts of the 2011 consolidation.
In addition, we analyzed USPS data on the number of employee positions at area and district offices following the 2011 consolidation, and USPS's cost-savings estimates for this consolidation and for its support service centralization efforts from 2002 to 2005. We assessed the reliability of these data sources by, among other things, interviewing USPS officials and reviewing USPS procedures for maintaining the data and verifying their accuracy. Based on this information, we determined that the data provided to us were sufficiently reliable for our reporting purposes. Finally, we interviewed USPS headquarters officials and OIG officials to obtain information on recommendations in the OIG's 2010 report and USPS's December 2011 plan for future evaluations of area and district offices for possible consolidations. We conducted this performance audit from April 2011 to April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient and appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kathleen Turner, Assistant Director; Patrick Dudley; Delwen Jones; Elke Kolodinski; James Leonard; Maria Mercado; Sara Ann Moessbauer; Joshua Ormond; and Crystal Wesco made key contributions to this report.
USPS has lost $25.3 billion over the last 5 years and expects to lose another $83.2 billion through fiscal year 2016 unless it takes action to reduce its costs and improve its operational efficiency. USPS has cut costs in its retail, mail processing, and delivery networks, as well as in its field office structure, which includes 7 area offices and 67 district offices, and plans other cost-cutting actions throughout the organization. As requested, this report discusses (1) the role of area and district employees in implementing USPS's cost-savings and revenue-generation efforts and (2) USPS's actions to consolidate its field office structure in 2011, and the impact of this consolidation. GAO analyzed USPS documents describing the role of field staff in carrying out USPS's cost-saving and revenue-generation efforts; information on the impact of the 2011 consolidation, including anticipated cost savings; and USPS's plan, issued in December 2011, for evaluating and implementing possible field office consolidations. GAO also interviewed USPS officials at headquarters and at four area and six district offices selected based on several factors, including geographic dispersion throughout the U.S. Field employees have key roles in the U.S. Postal Service's (USPS) efforts to reduce costs and generate revenue. For example, these employees evaluate the feasibility of closing or consolidating facilities, such as post offices and mail-processing facilities; carry out the closures and consolidations of these facilities; and evaluate and consolidate delivery routes. These roles support USPS's plans to save, by 2016, about $9 billion annually by improving its operational efficiency and realigning its retail, mail processing, and delivery networks with declining mail use. These plans include evaluating about half of its approximately 31,000 post offices to identify cost-reduction opportunities, closing or reducing operations at about half of its 461 mail-processing facilities, and consolidating about 20,000 of its 144,000 city delivery routes. Area and district employees also have a significant role in USPS's efforts to generate additional revenue by (1) promoting the value of mail to businesses, (2) maintaining and increasing its customer base through customer service, and (3) growing the package business. In 2011, USPS consolidated its field office structure by, among other actions, closing one area office and seven district offices and eliminating 1,946 positions—actions that it estimated would save about $150 million annually. However, several area and district officials expressed concern that this consolidation could lessen their ability to carry out ongoing and future cost-savings and revenue-generation initiatives, and to recruit and retain future managers. In December 2011, USPS issued a plan on how it would evaluate additional field offices for possible consolidation and address concerns that the USPS Office of Inspector General identified in past field office consolidations. These concerns included the need to develop a plan to guide future field office consolidations and to consider factors such as workload and proximity to other offices. According to USPS officials, the plan also addressed past concerns about inadequate documentation and transparency and will lead to post-consolidation reviews to assess lessons learned and measure actual savings.
Although USPS has a plan to guide future consolidations, according to USPS officials, it does not plan additional field office consolidations until it has completed ongoing cost-reduction efforts in its retail, mail processing, and delivery networks. GAO is not making recommendations in this report. USPS had no comments on a draft of this report.
Each state has a central repository for receiving criminal history information contributed by law enforcement agencies, prosecutors, courts, and corrections agencies throughout the state. Each repository compiles this information into criminal history records (commonly called “rap sheets”), which are to be made available to criminal justice personnel for authorized purposes. Typically, a criminal history record is created for each individual offender (each “subject”). The record is to contain relevant identifiers (including fingerprints) and information about all arrests and their dispositions, such as whether the criminal charges were dropped or resulted in an acquittal or a conviction. Efforts to improve criminal history records nationwide predate NCHIP by more than 2 decades. For example, the development of computerized criminal history systems in the states was a priority of the Law Enforcement Assistance Administration, established by the Omnibus Crime Control and Safe Streets Act of 1968. Also, during much of the 1970s, 1980s, and early 1990s—largely without specifically appropriated funds—BJS (or its predecessor, the National Criminal Justice Information and Statistics Service) took the lead in encouraging states to computerize criminal records and ensure conformity with evolving FBI standards. In the 1990s, efforts to improve the accuracy, completeness, and accessibility of criminal history records received an impetus with passage of various federal statutes, particularly the Brady Handgun Violence Prevention Act (“Brady Act”), which, among other things, authorized grants for the improvement of state criminal history records and amended the Gun Control Act of 1968; the National Child Protection Act of 1993, which was enacted to provide national criminal background checks for child care providers; and the Violent Crime Control and Law Enforcement Act of 1994, which, among other things, strove to improve access to court protection orders and records of individuals wanted for stalking and domestic violence. With initial grant awards to states in 1995, NCHIP was designed by BJS to implement these federal mandates to improve public safety by enhancing the nation’s criminal history records systems. In 1998, NCHIP’s scope was expanded in response to federal directives to develop or improve sex offender registries and to contribute data to a national sex offender registry. Also, in 1998, the “permanent” provisions of the Brady Act went into effect with the implementation of NICS—the computerized system designed to instantly (as the name indicates) conduct presale background checks of purchasers of any firearm (both handguns and long guns). In contrast, the “interim” provisions of the Brady Act (effective from 1994 to 1998) applied to handgun purchases only, and law enforcement officers were allowed a maximum of 5 business days to conduct presale background checks for evidence of felony convictions or disqualifying information. The effectiveness of NICS depends largely on the availability of automated records—including the final dispositions of arrests, such as whether the criminal charges resulted in convictions or acquittals. In this regard, many criminal justice agencies, from police departments to the courts, are generators of records relevant to NICS. 
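As a rough illustration of how a rap sheet is organized, the sketch below models a criminal history record as a subject with identifiers and a list of arrests, each of which may or may not have a final disposition reported. The field names and the open_arrests helper are hypothetical simplifications for this report's narrative; actual state repository schemas and FBI standards are far more detailed.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Arrest:
    date: str                          # date of the arrest
    charge: str                        # charged offense
    disposition: Optional[str] = None  # e.g., "convicted", "acquitted", "charges dropped"; None if not reported

@dataclass
class CriminalHistoryRecord:
    subject_id: str        # repository identifier for the subject of the record
    fingerprint_ref: str   # reference to the subject's fingerprint record
    arrests: List[Arrest] = field(default_factory=list)

    def open_arrests(self) -> List[Arrest]:
        """Return arrests with no final disposition reported (the 'open arrest' problem)."""
        return [a for a in self.arrests if a.disposition is None]

# Example: a record with one missing disposition, which could delay a presale background check.
record = CriminalHistoryRecord(
    subject_id="ST-0001",
    fingerprint_ref="AFIS-12345",
    arrests=[
        Arrest("1999-05-12", "felony theft", "convicted"),
        Arrest("2002-11-03", "assault"),  # no disposition reported to the repository
    ],
)
print(f"Open arrests for subject {record.subject_id}: {len(record.open_arrests())}")
```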
Over the years, BJS has tried to ensure that the use of NCHIP funds was closely coordinated with the federal Edward Byrne Memorial Grant Program, which requires that states use at least 5 percent of their awards for improving criminal history records. All 50 states, the District of Columbia, and the U.S. territories have been recipients of NCHIP grant awards, which totaled more than $438 million during fiscal years 1995 through 2003. Also, as mentioned previously, to ensure national compatibility and accessibility of records, recipients' uses of NCHIP funds must conform with the FBI's standards for national data systems—including, as applicable, NICS, NCIC, III, and IAFIS. Regarding IAFIS, for example, most states have some type of automated fingerprint identification system (AFIS); a state can use NCHIP funds to enhance its AFIS by purchasing Livescan equipment, if the state has implemented (or is implementing) procedures to ensure that the AFIS is compatible with FBI standards. More details about the national data systems are presented in appendix II. For the recent fiscal years we studied, states used NCHIP grants primarily to support NICS in conducting background checks of firearms' purchasers. According to BJS data, a total of $165.2 million in NCHIP grants was awarded during fiscal years 2000 through 2003. Of this total, a majority—over 75 percent—was used for NICS-related purposes that encompassed a broad range of activities, such as converting manual records to automated formats and purchasing equipment to implement computerized systems or upgrade existing systems. All other uses of NCHIP grants during this period, according to BJS, also had either direct or indirect relevance to building an infrastructure of nationally accessible records, such as implementing technology to support the automated transfer of fingerprint data to IAFIS. We found that a state's participation status in NICS—whether the state was a full participant, partial participant, or nonparticipant—made little difference in how NCHIP funds were used by states. As indicated in table 1, the $165.2 million in NCHIP grants that BJS awarded for fiscal years 2000 through 2003 can be grouped into six spending categories. A majority of these funds was used for NICS-related purposes. For example, the two largest categories of spending—NICS/III/criminal records improvements and disposition reporting improvements—accounted for over 75 percent of total program awards during this period. Both categories directly affected NICS. The NICS/III/criminal records improvements category affected NICS by focusing on activities for improving records related to federal firearms disqualifiers and enhancing access to these records through III. Similarly, the disposition reporting improvements category provided access to information about the disposition of arrests—information that is critical for determining whether persons are legally prohibited from purchasing firearms. Regarding this category, BJS encourages states to focus on making systemic improvements rather than using staff to manually research records to determine dispositions. Nonetheless, according to BJS, states may use NCHIP funds to research arrest dispositions in response to specific NICS-related queries, if the information is subsequently added to the automated system. BJS officials could not quantify the NCHIP grant amounts that all states have allocated for staff to research arrest dispositions.
Officials in 2 of the 5 case-study states indicated that their states had used NCHIP funding to research missing arrest dispositions and update criminal history records in response to specific NICS-related queries. One of these states (Maryland) used $41,000 of its fiscal year 2002 NCHIP award to fund a full-time position for researching the state's archived criminal history records. Also, table 1 shows that BJS awarded 3 percent of NCHIP funding specifically for protection order activities to improve records related to this firearms-purchase disqualifier. The other categories in table 1 (AFIS/Livescan activities, sex offender registry enhancements, and national security/antiterrorism activities) were for records improvement efforts that do not directly impact NICS. However, according to BJS, even if not NICS-related, each of the six spending categories in table 1 had either direct or indirect relevance to building an infrastructure of nationally accessible records, such as implementing technology to support the automated transfer of fingerprint data to IAFIS. Appendix III presents more information about the use of NCHIP funds in the 5 case-study states, and appendix IV presents information about the use of NCHIP funds in the 5 priority states. As mentioned previously, for purposes of NICS background checks of persons purchasing firearms, states are categorized as full participants, partial participants, or nonparticipants. As table 2 shows, we found little difference in the use of NCHIP funds by states based on their participation status in NICS. With relatively minor exceptions, the relative order of spending across categories was the same in all three types of states. Of the various spending categories, NICS/III/records improvements reflected the largest difference in percentage points—that is, a difference of 12 percentage points between the partial participant states (47 percent) and the full participant states (35 percent). A BJS official stated that this difference is not substantial and might occur because some states have legislation with slightly different prohibitors for purchasing firearms. Using their own funds, in addition to NCHIP grants and other federal funds, states have made progress in automating criminal history records and making them accessible nationally. For example, the percentage of the nation's criminal history records that are automated increased from 79 percent at the end of 1993 to 86 percent at the end of 1995 and to 89 percent at the end of 2001, according to BJS's most recent biennial survey of states. To ensure national compatibility and accessibility of records, recipients' uses of NCHIP funds must conform with the FBI's standards for national data systems—including, as applicable, NICS, NCIC, III, and IAFIS. Such conformance is important, for example, because III is the primary system used to access state-held data for NICS checks. The number of states participating in III increased from 26 at the end of 1993 to 30 at the end of 1995 and to 45 by May 2003, indicating growth in compatible automated records. On the other hand, progress has been more limited for some NICS-related purposes. For example, automated information on the disposition of felony and other potentially disqualifying arrests is not always widely available.
Also, automated information is not always available to identify other prohibited purchasers of firearms, such as persons convicted of a misdemeanor crime of domestic violence, persons adjudicated as mental defective, or persons who are unlawful users of controlled substances. In fiscal year 2004, BJS plans to begin using a new, performance-based tool for making NCHIP funding decisions. In recent years, with the use of state and federal funds, criminal history record automation levels in the states and the accessibility of these records nationally have improved. BJS survey data from the end of 1993 to the end of 2001 (the most recent data) show that increases in automation levels have outpaced increases in the number of criminal history records. Specifically, while the number of total records increased 35 percent during this period, the number of automated records increased 52 percent—which indicates progress in automating older criminal history records. Also, the number of records accessible by the III system increased 196 percent (see fig. 1). Overall, the percentage of the nation's criminal history records that are automated increased from 79 percent at the end of 1993 to 86 percent at the end of 1995 and to 89 percent at the end of 2001. The number of states participating in III increased from 26 at the end of 1993 to 30 at the end of 1995 and to 45 by May 2003. Also, according to BJS, other indicators of improved automation levels and accessibility are as follows: In 1997, the FBI established the NCIC Protection Order File to provide a repository for protection order records. As of May 2003 (within 6 years of implementation), 43 states and 1 territory had contributed more than 778,000 records to this system. In 1999, in response to mandates in the amendments to the Jacob Wetterling Crimes Against Children and Sexually Violent Offender Registration Act, the FBI established a national sex offender database for states to register and verify addresses of sex offenders. As of May 2003 (within 5 years of implementation), 50 states, the District of Columbia, and 3 territories had contributed all of their then-applicable records (over 300,000 records) to the National Sex Offender Registry. In 1999, the FBI implemented IAFIS, a computerized system for storing, comparing, and exchanging fingerprint data in a digital format. As of April 2003 (within 4 years of implementation), 44 states, the District of Columbia, and 3 territories had submitted some portions of their fingerprint files electronically to the FBI for entry into IAFIS. BJS officials told us that NCHIP funds played a role in leading states to these and other accomplishments. Similarly, officials in the 5 case-study states we visited told us that the criminal history record improvements in their states would not have been possible without NCHIP funds. According to BJS officials, NCHIP is best viewed as being an "umbrella" program that pools or coordinates various streams of monies. The officials noted that NCHIP grants generally should not be viewed in isolation, apart from funds that the states themselves spend for these initiatives. That is, the NCHIP grants generally provide the seed money or the supplemental funds that the states need to undertake major system upgrades or to implement an overall plan for modernizing their information systems.
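The survey trends summarized above and in figure 1 can be reproduced with simple growth arithmetic. The sketch below uses only figures stated in this section; the percentage-change helper is generic and implies nothing about BJS's survey methodology.

```python
def pct_change(start: float, end: float) -> float:
    """Percentage change from start to end."""
    return (end - start) / start * 100

# Figures cited above: states participating in III and the automated share of records.
iii_states = {"end of 1993": 26, "end of 1995": 30, "May 2003": 45}
automated_share = {"end of 1993": 79, "end of 1995": 86, "end of 2001": 89}  # percent of records automated

growth = pct_change(iii_states["end of 1993"], iii_states["May 2003"])
print(f"Growth in III-participating states, 1993 to 2003: {growth:.0f}%")  # about 73 percent

rise = automated_share["end of 2001"] - automated_share["end of 1993"]
print(f"Automated share of records rose {rise} percentage points from 1993 to 2001")
```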
While NCHIP requires that states provide a 10 percent match to the federal funds awarded, officials in the case-study states told us that their states typically have invested much more than the required 10 percent. For example, 1 state that has received over $5 million in NCHIP funds estimated that over $20 million of its own funds have been invested in system improvements since 1995. Another state, receiving almost $7 million in NCHIP grants, estimated that $35.4 million in state resources have been spent on improving and automating its systems. In addition to NCHIP and state-provided funds, other federal programs provide funds that can be used to improve criminal history records. For example, the Bureau of Justice Assistance provides funds to states through Byrne grants, a block grant program that requires states to set aside 5 percent of any award for criminal justice information systems to assist law enforcement, prosecution, courts, and corrections organizations. In addition to criminal history record improvements, Byrne grants may be used for a variety of other system-related activities that are not related to NCHIP. Examples include activities involving systems to collect criminal intelligence and systems to collect driving-under-the-influence data. According to Bureau of Justice Assistance data for fiscal years 2001 through 2003, almost $73 million in Byrne grants was set aside to improve criminal justice information systems. Grants are also now available for antiterrorism purposes under the Crime Identification Technology Act of 1998, as amended by the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (USA PATRIOT Act) of 2001. Besides characterizing NCHIP as an umbrella program, BJS officials also described it as being a "partnership" program—among BJS, the FBI, and the states and localities—for building a national infrastructure to facilitate the interstate exchange of information. The officials explained that such exchanges or accessibility are needed to support a variety of both criminal justice purposes (e.g., making decisions regarding pretrial release, sentencing, etc.) and noncriminal justice purposes (e.g., conducting background checks of firearms' purchasers, child-care providers, etc.). The BJS officials noted that NCHIP funds often are spread across a variety of long-term initiatives undertaken by the states' executive and judicial branch agencies to upgrade the architecture and coverage of criminal records information systems. For some NICS-related purposes, limited progress had been made in the automation and accessibility of relevant records. For example, automated information on the disposition of older felony and other potentially disqualifying arrests (that is, information regarding whether the criminal charges against the arrested individual were dropped or proceeded to be prosecuted and resulted in a conviction or acquittal) is critical for conducting background checks of persons purchasing firearms but is not always widely available. Also, automated information is not always available to identify other prohibited purchasers, such as persons convicted of misdemeanor crimes of domestic violence, adjudicated as mental defectives, or who are unlawful users of controlled substances.
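As an illustration of how far the case-study states' own investments described earlier in this section exceeded the required 10 percent match, the sketch below expresses each state's investment as a multiple of its NCHIP award. The state labels are placeholders because the report does not name the states, and the award amounts are approximate ("over $5 million" and "almost $7 million").

```python
REQUIRED_MATCH = 0.10  # NCHIP requires states to match 10 percent of federal funds awarded

# Two unnamed case-study states cited above (approximate amounts, in millions of dollars).
examples = {
    "case-study state A": {"nchip_award": 5.0, "state_investment": 20.0},
    "case-study state B": {"nchip_award": 7.0, "state_investment": 35.4},
}

for state, amounts in examples.items():
    ratio = amounts["state_investment"] / amounts["nchip_award"]
    print(f"{state}: invested about {ratio:.1f} times its NCHIP award "
          f"(required match: {REQUIRED_MATCH:.0%})")
```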
In conducting background checks of firearms’ purchasers, automated information on whether the criminal charges against arrested individuals were dropped or proceeded to prosecution and resulted in a conviction or acquittal is not always widely available. For example, 23 of the 38 states that responded to a question on final dispositions in BJS’s most recent biennial survey reported that 75 percent or less of their arrest records had final dispositions recorded (see table 3). It is important to draw a distinction between old and new arrest records with respect to disposition reporting. The BJS Director told us that, given limited resources, the agency has always emphasized to the states the importance of making certain that records of recent criminal activity are updated and compatible with FBI standards. In this regard, the Director explained that many states adopted a “day 1” approach in using NCHIP funds to improve records—that is, improve new records first—and left a number of old, inactive records archived in state repositories. The Director noted that BJS research, with FBI assistance, has indicated that older arrest records account for much of the “open arrest” problem. That is, of the criminal history records for which disposition information was never recorded, about one-half involve arrests that occurred before 1984 and three-quarters pre-date NCHIP. Nonetheless, while states have made progress in automating newer disposition information—and automating disposition information discovered when conducting research of older records—achieving universal automation of disposition information continues to present challenges, as table 3 indicates. BJS has recognized that, whenever criminal history records show arrests without final dispositions, there is the potential for delays in responding to presale firearms inquiries because, in most instances, disqualifications result from convictions rather than arrests. Since 1995, BJS has encouraged states to contact court representatives and determine how NCHIP funds can be used to improve disposition reporting. Further, since 2000, BJS has required that such contacts be documented in the states’ application packages for NCHIP funds. For example, in the Fiscal Year 2003 Program Announcement (Mar. 2003), BJS specified that “all applications will be required to demonstrate that court needs have been considered, and if no funds for upgrading court systems capable of providing disposition data are requested, applicants should include a letter from the State court administrator or Chief Justice indicating that the courts have been consulted in connection with the application.” The Gun Control Act of 1968, as amended, specifies four nonfelony or noncriminal categories that prohibit an individual from owning or purchasing a firearm: persons who (1) have been convicted of a misdemeanor crime of domestic violence, (2) are subject to certain outstanding court protection orders, (3) have been adjudicated as mentally defective, or (4) are unlawful users of controlled substances. Generally, states have used NCHIP funds to provide information for only one of these four categories—court protection orders. For fiscal years 2000 through 2003, states received a total of approximately $5.3 million in NCHIP funds to develop systems for reporting information to the FBI to be included in the NCIC Protection Order File, as indicated in table 1. 
As of May 2003, states had made more than 778,000 records of court protection orders available to the national file. However, the availability of information regarding domestic violence misdemeanor convictions, mental health commitments, and controlled substance abusers is problematic for various reasons. For example, according to BJS, problems in identifying domestic violence misdemeanor convictions are twofold—(1) misdemeanor data traditionally have not been maintained at the state level in an automated format and (2) misdemeanor assault charges rarely specify the victim-offender relationship unless domestic violence is specifically charged. That is, domestic violence-related offenses can be difficult to distinguish from misdemeanors broadly classified as assaults. Since fiscal year 1996, BJS has encouraged states to use NCHIP funds to improve access to domestic violence records. BJS has provided direction, for example, to the states to set “flags” on the records of persons known to have a conviction for domestic violence. Records regarding mental health commitments are often not available nationally for reasons beyond the control of NCHIP. For instance, state mental health laws, privacy laws, or doctor-patient considerations may preclude federal law enforcement officials from routinely accessing some of these records. According to BJS, the area of mental health records and their shareability is a very difficult area—and is an area in which BJS has encouraged states to do more with NCHIP funds since fiscal year 1996. The FBI’s strategy—which BJS encourages the states to use—has been to create a Denied Persons File in the NICS Index where the reason for denial is not given unless the denial is appealed. In reference to substance abuse, BJS noted that federal law is very unclear regarding who is a prohibited person, which makes it very difficult for states to make records available to the FBI for NICS checks. Also, BJS noted that states have no central registries of active drug users or addicts. Given the complications of federal definitions, BJS emphasized that it would be a very challenging undertaking to develop such registries and keep them current. Overall, as table 4 indicates, a national system for domestic violence misdemeanor records is not available, only 10 states have provided mental health records to the NICS Index, and only 3 states have provided substance abuse records. According to BJS, most states have chosen to use NCHIP awards to automate criminal history records overall and improve criminal history record systems, rather than focus on improving access to these four specific types of records. BJS recognizes that ensuring the availability of additional nonfelony or noncriminal records involves various considerations or challenges that extend beyond simply providing more money to improve records. For example, as mentioned previously, BJS noted that federal law is very unclear regarding who is a prohibited person in reference to substance abuse. BJS has recognized that the absence of widely accessible information on domestic violence misdemeanors and noncriminal disqualifying factors is among the most important issues affecting the accuracy and timeliness of presale background checks of firearms purchasers. Thus, for several years, BJS has been encouraging states to use NCHIP funds to make improvements. Recently, for example, in providing NCHIP guidance in the Fiscal Year 2003 Program Announcement (Mar. 
2003), BJS encouraged states to develop systems that would make this information available nationally. As mentioned previously, NCHIP’s goal is to improve public safety by enhancing the quality, completeness, and accessibility of the nation’s criminal history and sex offender record systems and the extent to which such records can be used and analyzed for criminal justice and authorized noncriminal justice purposes. To better measure progress toward this goal, BJS is developing a tool—a criminal history records quality index (RQI)—to uniformly characterize and monitor performance across jurisdictions and over time. RQI is to be based on a series of key indicators or outcome measures, such as the proportion of fully automated criminal history records in a state’s repository, the proportion of court dispositions transmitted electronically to the repository, and the extent to which the state submits data electronically to the FBI. According to BJS, RQI will be used to assess the progress of records quality at both the state and national levels, identify critical records improvement activities by pinpointing areas of deficiency, and permit BJS to target specific problems and deficiencies for allocating future funding at the individual state level. After RQI is operationalized, BJS plans to begin using it for NCHIP funding decisions. Initial RQI development—and pilot testing in 10 states—was completed in 2003. As of January 2004, according to BJS, collection of the underlying RQI measures data from the other 46 jurisdictions (40 states, the District of Columbia, and the 5 U.S. territories) was still ongoing. BJS hopes to receive RQI data submissions from all jurisdictions by April 30, 2004. One of the most relevant factors for policymakers to consider when debating the future of NCHIP is the extent of cumulative progress (and shortfalls) to date in creating national, automated systems that cover all needed types of information. While states have made progress, more work remains. For NICS-related purposes, as discussed previously, automated information is not always widely available on the disposition of felony and other potentially disqualifying arrests, nor on other prohibited purchasers, such as persons convicted of a misdemeanor crime of domestic violence. Another relevant factor to consider is that the demand for background checks is growing, with increases in recent years driven by screening requirements for employment and other noncriminal justice purposes. Furthermore, technology is not static, which necessitates periodic upgrades or replacements of automated systems for them to remain functional. As discussed previously, much progress has been made in automating records in recent years. On the other hand, some areas reflect a continuing need for improvements. For instance, the availability of and access to arrest disposition information (necessary for timely presale background checks of persons purchasing firearms) continues to be problematic. Such information is important for preventing or minimizing the sale of firearms by “default proceed.” That is, by statute, if a background check is not completed within 3 business days, the sale of the firearm is allowed to proceed by default, sometimes to prohibited persons. 
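The default-proceed rule lends itself to a brief illustration. The sketch below is a simplified model only, not the FBI’s actual NICS processing logic: it treats weekends as the only non-business days, ignores federal holidays, and assumes that each transaction either resolves to a proceed or deny response or remains open.

from datetime import date, timedelta
from typing import Optional

def add_business_days(start: date, days: int) -> date:
    """Count forward the given number of business days (Monday through Friday only)."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days -= 1
    return current

def may_transfer(check_started: date, today: date, resolution: Optional[str]) -> bool:
    """A dealer may transfer the firearm on a 'proceed' response, or by default
    if no response has been returned within 3 business days."""
    if resolution == "proceed":
        return True
    if resolution == "deny":
        return False
    return today > add_business_days(check_started, 3)

# A check opened on Friday, January 2, 2004, with no resolution: the third
# business day falls on Wednesday, January 7, so a default proceed becomes
# possible beginning Thursday, January 8.
print(may_transfer(date(2004, 1, 2), date(2004, 1, 8), None))  # True
print(may_transfer(date(2004, 1, 2), date(2004, 1, 6), None))  # False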
In 2000, we reported that default proceeds occurred primarily due to a lack of arrest dispositions in states’ automated criminal history records and that many of these transactions involved individuals—2,519 purchasers during a 10-month period—who were later determined by the FBI to be prohibited persons. We further reported that firearms being transferred to prohibited persons presented public safety risks and placed resource demands on law enforcement agencies in retrieving the firearms. More recently, according to the FBI, over one-third (1,203) of the total 3,259 firearm retrievals in 2002 by the Bureau of Alcohol, Tobacco, Firearms, and Explosives occurred because disposition information for felony arrests could not be determined within 3 days. Another one-third (1,052) of the total retrievals in 2002 involved background checks whereby FBI examiners were unable to timely determine from available records that misdemeanor assault convictions involved domestic violence. A national system for domestic violence misdemeanor records is not available (see table 4). Table 4 also indicates that, to further support NICS, there is still much opportunity for improving the availability of records regarding persons who have been adjudicated as mentally defective and persons who are unlawful users of controlled substances. Additional examples (not exhaustive) of opportunities for further progress in automating records and/or enhancing national systems include the following: 5 states (Hawaii, Kentucky, Louisiana, Maine, and Vermont), the District of Columbia, and the 5 U.S. territories (American Samoa, Guam, Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands) still do not participate in III; 7 states (Hawaii, Mississippi, Nevada, New Jersey, Utah, Virginia, and West Virginia), the District of Columbia, and 4 U.S. territories (American Samoa, Guam, Northern Mariana Islands, and Puerto Rico) have not contributed any data to the NCIC Protection Order File; and 6 states (Arkansas, Delaware, Missouri, Nevada, Vermont, and Wyoming) and 2 U.S. territories (Northern Mariana Islands and Puerto Rico) have not submitted any files electronically to IAFIS. In debating the future of NCHIP, another relevant factor for policymakers to consider is that the demand for background checks is growing, with increases in recent years driven by screening requirements for employment and other noncriminal justice purposes. Generally, background checks for these “civil” purposes are based on fingerprint submissions—in contrast to the “name-based” searches conducted under NICS. The number of civil fingerprint submissions to the FBI has increased substantially in recent years. As figure 2 shows, for 5 of the 7 years during 1996 to 2002, the number of civil fingerprint submissions exceeded the number of criminal fingerprint submissions (i.e., fingerprints of criminal suspects or arrestees). In the most recent year (2002), civil fingerprint submissions totaled 9.1 million, whereas criminal fingerprint submissions totaled 8.4 million. The growth in civil fingerprint submissions is partly attributable to 1993 federal legislation that encouraged states to have procedures requiring fingerprint-based national searches of criminal history records of individuals seeking paid or volunteer positions with organizations serving children, the elderly, or the disabled. As of February 2004, according to BJS, 47 states had enacted legislation authorizing these record checks. 
Further, in 2003, federal legislation was enacted that establishes, in general, a pilot program in 3 states to conduct fingerprint-based background checks on individuals seeking volunteer positions involving interactions with children. Within 6 months of the date of the 2003 Act’s enactment, the Attorney General is to conduct a feasibility study to determine, among other things, the number of background checks that would be required if the pilot were implemented nationwide and the impact these additional checks might have on the FBI and IAFIS. If this pilot program is implemented nationally, BJS officials estimate that millions of additional background checks would be required annually. Homeland security concerns are another factor that has increased the demand for fingerprint-based background checks. Since the events of September 11, 2001, Congress has passed legislation to protect the nation from future terrorist attacks. These laws require that individuals employed in sensitive positions undergo background checks to qualify for employment. FBI and BJS officials expect the number of applicant background checks to be in the millions, as homeland defense laws are fully implemented. Examples of federal homeland defense legislation and the number of checks anticipated follow: USA PATRIOT Act of 2001—Requires background checks on commercially licensed drivers who transport hazardous materials. Officials from the FBI’s Criminal Justice Information Services Division estimated that 800,000 to 1,000,000 individuals held commercial licenses at the time the USA PATRIOT Act was passed. Under the act, individuals renewing licenses, in addition to new licensees, will need background checks to qualify for commercial licenses. Aviation and Transportation Security Act of 2001—Requires background checks of those individuals in security screener positions or other positions such as those with unescorted access to aircraft or secured areas of an airport. New background checks are required for those employees already hired at the time of the Aviation and Transportation Security Act’s passage as well as for individuals seeking employment. This act further requires background checks of foreigners seeking enrollment in flight schools. The Transportation Security Administration has requested over 105,365 background checks since passage of the act in November 2001. In addition to these checks, FBI officials estimated that flight school checks alone could result in up to 50,000 fingerprint checks annually. Public Health Security and Bioterrorism Preparedness and Response Act of 2002—Requires the Attorney General to conduct background checks on persons possessing, using, or transferring various toxins and biological agents. FBI officials estimated that this law could result in 30,000 checks annually. Another factor for consideration is that technology is not static and can change rapidly, which necessitates periodic upgrades or replacements of automated systems. For example, 1 case-study state used fiscal year 1995 NCHIP funds to purchase Livescan equipment for its major metropolitan areas. According to state officials, this equipment is now outdated, and fiscal year 2003 NCHIP funds will be used to purchase new equipment. According to state officials, the 1995 machines will be retained for installation in other areas, such as the state’s less populous or more rural counties. Another relevant factor is how long-term funding needs will be met. Replacing outdated equipment and automating records can be expensive. 
States maintain that steady or long-term funding streams are important for implementing technological advances. In this regard, states do not rely entirely on NCHIP grants for system improvements. That is, states view NCHIP funding as “seed” or supplemental money and contribute from their own coffers to fund these upgrades. For instance, as noted previously, officials in the case-study states told us that their states typically have invested much more than the 10 percent matching funds required by NCHIP. The overarching goal of NCHIP—building a national infrastructure to facilitate the interstate exchange of criminal history and other relevant records—is important for many purposes. Without such an infrastructure, individuals who are, in fact, prohibited, but whose records are inaccessible or do not reflect such a prohibition, may be allowed to purchase firearms, creating safety concerns not only for the general public, but also for the law enforcement officials responsible for retrieving these firearms after the prohibited status is ascertained. Further, inaccurate, incomplete, or inaccessible records and systems do not help to prevent persons who have been convicted of crimes from being hired in paid or volunteer positions with organizations serving children, the elderly, or the disabled, putting these populations at risk for abuse or worse. Also, accurate, complete, and accessible records and systems are necessary to respond to the needs and requirements of homeland security and to avert terrorism, particularly with respect to individuals employed in sensitive positions. Since its initiation in 1995, NCHIP has provided more than $438 million in federal grants nationwide. Using their own funds, as well as NCHIP and other federal grants, states have made much progress in automating their records and making them accessible nationally by conforming with the FBI’s standards for applicable national data systems—such as NICS, NCIC, III, and IAFIS. Continued progress toward establishing and sustaining a national infrastructure inherently will involve a partnering of federal, state, and local resources and long-term commitments from all governmental levels. On January 28, 2004, we provided a draft of this report for comment to the Department of Justice. In a response letter, dated February 13, 2004, the Assistant Attorney General (Office of Justice Programs) commented that the report fairly and accurately described NCHIP, its accomplishments, and the continued need to promote state and local participation in national criminal history records systems. Also, the Assistant Attorney General commented that the following issues mentioned in the report should be highlighted: Given limited resources, it is important to draw the distinction between old and new arrest records with respect to disposition coverage. BJS has always emphasized to the states the importance of making certain that records of recent criminal activity were updated and compatible with FBI standards. In many cases, state laws prohibit sharing mental health information because of confidentiality and doctor-patient privacy laws. The strategy for the FBI, and one which BJS has encouraged the states to use, has been to utilize the Denied Persons File in the NICS Index where the reason for denial of a firearm purchase is not given unless the denial is appealed. Most states do not fingerprint misdemeanants, and misdemeanor assault charges rarely specify the victim-offender relationship (unless domestic violence is specifically charged). 
BJS has given strong direction to the states to set flags on the records of persons known to have a conviction for domestic violence. No state has a central registry of active drug users or addicts. It will be challenging to develop such registries and to keep them current. In perspective, the number of problematic firearms sales—that is, default proceeds that result in a need to retrieve firearms from prohibited purchasers—is very small compared to the 8 million to 9 million background checks conducted each year. RQI, a metric developed by BJS, is a major step forward and may provide a significant opportunity for evaluating performance over time and establishing a basis for targeting future assistance to state and local participants in federal funding programs. The full text of the Assistant Attorney General’s letter is presented in appendix V. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to interested congressional committees and subcommittees. We will also make copies available to others on request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or Danny Burton at (214) 777-5600. Other key contributors to this report are listed in appendix VI. As requested by the Chairman, House Committee on the Judiciary, our overall objective was to broadly review the National Criminal History Improvement Program (NCHIP). Managed by the Department of Justice’s Bureau of Justice Statistics (BJS), NCHIP is a federal grant program to build a national infrastructure to facilitate the interstate exchange of criminal history and other relevant records—that is, to improve the accuracy, completeness, and accessibility of records used by various national systems. One of the primary systems is the National Instant Criminal Background Check System (NICS), which is managed by the Federal Bureau of Investigation (FBI) and is used to conduct presale background checks of persons purchasing firearms. As agreed with the requester’s office, this report presents information on how states have used NCHIP grant funds, particularly the extent to which they have been used by states for NICS-related purposes; the progress, using NCHIP grants and other funding sources, that states have made in automating criminal history and other relevant records and making them accessible nationally; and the various factors that are relevant considerations for policymakers in debating the future of NCHIP. Regarding the use of NCHIP grant funds, as further agreed with the requester’s office, this report also presents information on (1) the use of such funds by the priority states and their progress in automating records and (2) whether any of the 50 states have used NCHIP funds to develop or implement a ballistics registration system (that is, a system that stores digital images of the markings made on bullets and cartridge casings when firearms are discharged). In addressing the objectives, to the extent possible, we focused on obtaining national or programwide perspectives. For example, we reviewed BJS’s biennial national survey data or reports on the automation status of all states’ criminal history records. 
Further, we interviewed NCHIP managers at BJS and NICS managers at the FBI’s Criminal Justice Information Services Division (Clarksburg, W. Va.). Also, we reviewed BJS program documentation that describes allowable NCHIP spending activities. In addition, given that NCHIP consolidates criminal records improvement funding authorized by various federal laws, we reviewed these laws, such as the Brady Handgun Violence Prevention Act, and related legislative histories. Also, to provide supplemental or more in-depth perspectives, we conducted case studies of 5 recipient states (California, Maryland, Mississippi, Texas, and West Virginia). We selected these states to reflect a range of various factors or considerations—the amounts of grant funding received, status of NICS participation, and levels of automation, as well as to encompass different geographic areas of the nation (see table 5). To obtain an overview of how all jurisdictions (the 50 states, District of Columbia, and 5 U.S. territories) have used NCHIP grant funds, we requested that BJS provide us information on total awards for each of the 4 most recent fiscal years (2000 through 2003)—with the amounts disaggregated into applicable spending categories. Generally, NCHIP spending can be grouped into six spending categories: (1) NICS/Interstate Identification Index (III)/criminal records improvements, (2) disposition reporting improvements, (3) Automated Fingerprint Identification System (AFIS)/Livescan activities, (4) sex offender registry enhancements, (5) protection order activities, and (6) national security/antiterrorism activities. In cases where expenditures could be included in more than one category, BJS judgmentally selected the category that was the most descriptive of the activity. We reviewed BJS documentation and interviewed BJS officials to determine which of these spending categories involved NICS-related purposes. In addition, we analyzed the spending category information in reference to the 50 states’ participation status in NICS (full participant, partial participant, or nonparticipant) to determine any general differences in the types of NCHIP-funded projects undertaken. Similarly, we analyzed the spending category information to determine how the 5 priority states had used NCHIP grant funds (see app. IV). For more in-depth perspectives, we reviewed data on the use of NCHIP grant funds by the 5 states we selected for case studies. Preliminarily, we reviewed information in grant files maintained by the Office of the Comptroller (a component of the Department of Justice’s Office of Justice Programs). Then, we visited each of the 5 states and interviewed state officials responsible for NCHIP-funded projects. At our request, using definitions provided by BJS, the officials grouped their respective state’s grant awards into applicable spending categories (see app. III). For some NCHIP-funded activities, officials in the case-study states indicated that expenditures could be included in more than one category. In these cases, based on input from state officials, we selected the category that was most descriptive of the activity. For each of the case-study states, these spending category analyses covered NCHIP grant awards for fiscal year 1995 (when the program was initiated) through fiscal year 2002 (the most current data available at the time of our visits). 
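The category-by-category breakdowns summarized in the appendix tables amount to a simple aggregation of award amounts. A minimal sketch of that bookkeeping follows; the award amounts shown are invented for illustration and are not actual BJS grant data.

from collections import defaultdict

# Hypothetical (category, amount) award records -- not actual BJS data.
awards = [
    ("NICS/III/criminal records improvements", 2_500_000),
    ("Disposition reporting improvements", 900_000),
    ("AFIS/Livescan activities", 1_600_000),
    ("NICS/III/criminal records improvements", 1_000_000),
]

totals = defaultdict(int)
for category, amount in awards:
    totals[category] += amount

grand_total = sum(totals.values())
for category, amount in sorted(totals.items(), key=lambda item: -item[1]):
    # Each category's share of the state's total awards, as in tables 6 through 11.
    print(f"{category}: ${amount:,} ({amount / grand_total:.0%})")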
Regarding ballistics registration systems, we interviewed NCHIP managers to determine if NCHIP guidelines allow NCHIP funds to be used to develop and implement such systems and, if so, the extent to which states have used or are planning to use NCHIP funds for this purpose. In addition, in visiting the 5 case-study states, we asked state officials if NCHIP money had been or would be used to develop and implement ballistics registration systems. We reviewed BJS’s biennial survey data and/or reports (for 1993, 1995, 1997, 1999, and 2001) on the automation status of states’ criminal history records. We contacted BJS managers to clarify (when necessary) the survey data and discuss automation progress, including the contributing roles played by NCHIP and other federal grants and by the states’ use of their own funds. Further, we reviewed BJS and FBI information regarding the progress of states in making criminal history and other relevant records accessible nationally by, for example, conforming with the FBI’s standards for national data systems—including, as applicable, NICS, the National Crime Information Center (NCIC), III, and the Integrated Automated Fingerprint Identification System (IAFIS). Also, in each of the 5 case-study states, we discussed these issues with state officials. To determine various factors that are relevant considerations for policymakers in debating the future of NCHIP, we interviewed NCHIP and NICS managers, as well as officials in the 5 case-study states. We also contacted officials from other organizations, such as SEARCH (The National Consortium for Justice Information and Statistics) and the American Prosecutors Research Institute. Further, we relied on insights gained in addressing the objectives of this work. To assess the reliability of BJS’s data (by spending category) on NCHIP funds awarded to all jurisdictions for fiscal years 2000 through 2003 (see table 1) and to the 5 case-study states for fiscal years 1995 through 2002 (see tables 6 through 11), we reviewed existing documentation related to the data sources, electronically tested the data to identify obvious problems with completeness or accuracy, and interviewed knowledgeable agency officials about the data. We determined that the NCHIP funds data were sufficiently reliable for the purposes of this report. To assess the reliability of data reported by BJS based on its biennial surveys of state criminal history information systems for 1993, 1995, 1997, 1999, and 2001, we (1) reviewed the published survey results and (2) interviewed officials knowledgeable about the surveys. We determined that the biennial survey data were sufficiently reliable for the purposes of this report. BJS strives to create national criminal history records systems that contain accurate, complete, and accessible information. To accomplish this, since 1995, BJS has awarded approximately $438 million in NCHIP grants to states, the District of Columbia, and U.S. territories to help these jurisdictions improve their records and establish automated capabilities that enhance participation in national criminal history records systems. Each state operates a central criminal history records repository that receives information regarding individuals’ criminal histories from a number of sources throughout the state, including state and local law enforcement agencies, prosecutors, courts, and corrections agencies. 
For each individual, the repository compiles the information from these sources into a comprehensive criminal history record for that person. These records are commonly referred to as “rap sheets.” By means of statewide telecommunications systems, the repositories make these records available to criminal justice personnel for authorized purposes, such as pretrial release and sentencing decisions. The repositories also provide criminal history records for authorized noncriminal justice purposes. For example, with increasing frequency, state and federal laws are requiring local law enforcement agencies to conduct criminal history background checks on persons seeking employment in sensitive positions (such as child and elder care) and for occupational license authorizations. The FBI has historically maintained criminal history record files on all federal offenders and on state offenders to the extent that states voluntarily submit state criminal history information. The FBI also maintains a nationwide telecommunications system that enables federal, state, and local criminal justice agencies to conduct national record searches and to obtain criminal justice-related information, for example, about individuals who are arrested and prosecuted in other states. Criminal record services are also provided to noncriminal justice agencies authorized by federal law to obtain such records. The practice of maintaining duplicative state offender records at both the state and federal levels is being replaced by efforts to build an automated infrastructure that will make all criminal history records accessible nationally. To fully participate in the national systems that are to comprise this infrastructure, a jurisdiction must have an automated criminal history record system that meets FBI standards for participation. For example, the state’s automated system must be compatible with the federal systems and be capable of responding automatically to requests for records. The principal national, federal systems are discussed in the following paragraphs. Prior to 1967, the FBI’s criminal history records were manual files. In 1967, the FBI established NCIC, an automated, nationally accessible database of criminal justice and justice-related records. NCIC provides automated information on wanted and missing persons, as well as identifiable stolen property, such as vehicles and firearms. Each state has a central control terminal operator, who is connected to NCIC through a dedicated telecommunications line maintained by the FBI. Authorized local agencies use their state’s law enforcement telecommunications network to access NCIC through the respective operator. An investigator can obtain information on wanted and missing persons and stolen property by requesting a search by name or other nonfingerprint-based identification. Information provided can include graphics, such as mug shots, pictures of tattoos, and signatures in a paperless, electronic format. Using this system, an investigator can also perform searches for “sound alike” names, such as “Knowles” for “Nowles.” The system has an enhanced feature for searching all derivatives of names, such as Jeff, Geoff, and Jeffrey. NCIC includes the National Sex Offender Registry and a Protection Order File (discussed later). NCIC data may be provided only for criminal justice and other specifically authorized purposes. 
For example, authorized purposes include presale firearms checks, as well as checks on potential employees of criminal justice agencies, federally chartered or insured banks or securities firms, and state and local governments. Maintained by the FBI, the III system is an interstate, federal-state computer network, which currently provides the means of conducting national criminal history record searches to determine whether a person has a criminal record anywhere in the country. This system is designed to tie the automated criminal history records databases of state central repositories and the FBI together into a national system by means of an “index-pointer” approach. The FBI maintains an identification index of persons arrested for felonies or serious misdemeanors under state or federal law. The index includes identification information (such as name, date of birth, race, and sex), FBI numbers, and state identification numbers from each state holding information about the individual. Criminal justice agencies nationwide can transmit search inquiries based on name or other identifiers automatically through state law enforcement telecommunications networks and the FBI’s NCIC telecommunications lines. According to the FBI, the III system responds to search inquiries within seconds. If the search results in a “hit,” the system automatically requests records using the applicable FBI and state identification numbers, and each repository holding information on the individual forwards its records to the requesting agency. The FBI provides responses for states that are not yet participants in III. Under Brady Handgun Violence Prevention Act requirements, the FBI established NICS to provide instant background checks of individuals applying to purchase firearms from federally licensed dealers. Federal law prohibits the purchase or possession of a firearm by any person who (1) has been convicted of a crime punishable by a prison term exceeding 1 year, (2) is a fugitive from justice, (3) is an unlawful user of controlled substances, (4) has been adjudicated as mental defective, (5) is an illegal or unlawful alien, (6) has been discharged dishonorably from the armed forces, (7) has renounced his or her U.S. citizenship, (8) has been convicted of a misdemeanor crime of domestic violence, or (9) is subject to certain domestic violence protection orders. The three primary, component databases searched by NICS are III, NCIC (including the Protection Order File and a file of active felony or misdemeanor warrants), and the NICS Index. This third database was created solely for presale background checks of firearms purchasers and contains disqualifying information contributed by local, state, and federal agencies. For example, the database contains information on individuals who are prohibited from purchasing firearms because they are aliens unlawfully in the United States, are persons who have renounced their U.S. citizenship, have been adjudicated as mental defectives, have been committed to a mental institution, have been dishonorably discharged from the armed forces, or are unlawful users of or addicted to controlled substances. The FBI established the National Sex Offender Registry (NSOR) to enable state sex offender information to be obtained and tracked from one jurisdiction to another. 
In 1994, the Jacob Wetterling Crimes Against Children and Sexually Violent Offender Registration Act (the Jacob Wetterling Act) required that states create sex offender registries within 3 years or lose some of their federal grant funds. The law further provided that when any offender convicted of committing a criminal sexual act against a minor or of committing any sexually violent offense is released from custody or supervision into the community, he or she must register with law enforcement agencies for a period of 10 years. The act was amended in 1996 to require the FBI to establish NSOR and to register and verify addresses of sex offenders when a state’s registry does not meet the minimum compliance standards required by the Jacob Wetterling Act. According to the FBI Law Enforcement Bulletin, all 50 states currently have sex offender registration laws, and all states require a registration period of at least 10 years, with some states requiring lifetime registration. State registry information typically includes the offender’s name, address, Social Security number, date of birth, physical description, photograph, and fingerprints. NSOR is a component of NCIC that serves as a pointer system to identify a sex offender’s records in the III system. When agencies request authorized fingerprint-based criminal history background checks, NSOR will flag the subjects who are registered sex offenders. The FBI established the Protection Order File in 1997 to provide a repository for protection order records. The purpose of this NCIC component is to permit interstate enforcement of protection orders and the denial of firearms transfers to individuals who are the subjects of court protection orders. Such orders include civil and criminal court orders issued to prevent a person from committing violent, threatening, or harassing acts against another individual. A protection order can preclude the person from contacting, communicating with, and being in physical proximity to a named individual. State and federal law enforcement agencies can submit protection orders to the NCIC Protection Order File. In 1999, the FBI implemented IAFIS, a computerized system for storing, comparing, and exchanging digitized fingerprint data. Most fingerprint data submitted to IAFIS originate when a local or state law enforcement agency arrests a suspect. At that time, the agency takes the suspect’s fingerprints manually (using ink and paper fingerprint cards) or electronically (using optical scanning equipment). The agency forwards a copy of the fingerprints—along with nonbiometric data such as name and age—through its state repository to the FBI. Electronic submissions are automatically entered into IAFIS, and paper submissions sent through the mail are scanned into an electronic format for entry. When a set of fingerprints is submitted, IAFIS searches for a prior entry in the system that matches the suspect’s nonbiometric personal identifying data. If a prior entry is not found, the system compares the submitted fingerprints with those previously stored in the computer’s memory to determine if the suspect has an entry under another name. This information can be used for a number of purposes, including positively identifying arrestees to prevent the premature release of suspects who use false names and are wanted in other jurisdictions. To support crime scene investigations, the system can also compare a full or partial fingerprint from a crime scene with the prints stored in the database to identify a suspect. 
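The two-stage lookup described above can be sketched schematically. The record layout, the toy similarity score, and the match threshold in the sketch below are assumptions made for illustration only; actual fingerprint matching in IAFIS is far more sophisticated.

from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class Entry:
    name: str
    date_of_birth: str
    print_features: Set[str]  # stand-in for features extracted from the prints

DATABASE = [
    Entry("John Doe", "1970-01-01", {"f1", "f2", "f3", "f4"}),
    Entry("Jane Roe", "1982-05-17", {"f7", "f8", "f9"}),
]

def print_similarity(a: Set[str], b: Set[str]) -> float:
    """Toy similarity score: overlap between two feature sets (Jaccard index)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def identify(submission: Entry, threshold: float = 0.8) -> Optional[Entry]:
    # Stage 1: look for a prior entry matching the nonbiometric identifiers.
    for entry in DATABASE:
        if (entry.name, entry.date_of_birth) == (submission.name, submission.date_of_birth):
            return entry
    # Stage 2: no nonbiometric hit, so compare the prints themselves, which can
    # reveal a prior record filed under a different name.
    for entry in DATABASE:
        if print_similarity(submission.print_features, entry.print_features) >= threshold:
            return entry
    return None

# An arrestee booked under a false name but with the same prints as John Doe.
alias = Entry("J. Smith", "1970-01-01", {"f1", "f2", "f3", "f4"})
match = identify(alias)
print(match.name if match else "no prior record")  # John Doe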
This appendix presents information about the use of NCHIP funds by 5 case-study states (California, Maryland, Mississippi, Texas, and West Virginia) for fiscal years 1995 through 2002. As mentioned previously, we selected these states to reflect a range of factors or considerations—that is, the amounts of grant funding received, status of NICS participation, and levels of automation, as well as to encompass different geographic areas of the nation (see app. I). NCHIP funding amounts can be grouped into six categories of spending established by BJS to track the use of program funds. These six categories are (1) NICS/III/criminal records improvements, (2) disposition reporting improvements, (3) AFIS/Livescan activities, (4) sex offender registry enhancements, (5) protection order activities, and (6) national security/antiterrorism activities. Table 6 shows that since the inception of NCHIP in 1995, 4 of the 5 case-study states have devoted the majority of their grant awards to the first two BJS spending categories—NICS/III/criminal records improvements and disposition reporting improvements. Expenditures in the first category include overall system upgrades, equipment purchases, database development, and other activities required to bring states into compliance with FBI standards so that the states may participate in national systems maintained by the FBI. Expenditures in the second category include efforts to automate disposition records and provide linkages for reporting these records to the state’s central records repository. Maryland, the only case-study state that did not devote the majority of its funds to the first two categories, still allocated nearly half (48 percent) of its total grant awards for these two areas. Maryland devoted a large amount (40 percent) of its NCHIP funding to AFIS/Livescan activities, as did Texas (45 percent). For all 5 case-study states, the NCHIP funding detailed in table 6 represented “seed” or “catalyst” money and, therefore, accounted for only a portion of the total criminal records improvement spending. For example, according to California officials, state resources accounted for 85 percent of records improvement funding in California during fiscal year 2002-03. The remaining 15 percent consisted of NCHIP grants (6 percent) and other federal sources (approximately 9 percent). Three of the other 4 states provided data indicating that NCHIP grants accounted for less than a majority of the criminal records improvement funding in the respective state. More details on each case-study state’s use of NCHIP funds are presented in the following sections. During fiscal years 1995 through 2002, BJS awarded California a total of $29.9 million in NCHIP funds, the most of any state. As shown in table 7, California allocated approximately two-thirds (66 percent) of its NCHIP awards for NICS/III/criminal records improvements. For example, the state devoted over $4.9 million of program funds to projects for converting manual fingerprint and palm print cards to an electronic format and matching records maintained by the FBI’s III system to those maintained by the state repository. According to California officials, these efforts will improve overall criminal record keeping and benefit NICS by improving the state’s response to queries on prospective gun purchasers. Officials also said that the state has used NCHIP funds to improve the reporting of case dispositions to the state’s central repository. 
For example, officials have used program funds to improve disposition reporting in the 28 counties that represent 70 percent of the disposition volume for the entire state. As a result, these 28 counties report 100 percent of their dispositions to the state central repository via a magnetic tape batch process occurring three times a week. In addition, California officials are conducting an NCHIP-funded pilot project in one county to test the feasibility of moving to a real-time updating system for disposition reporting rather than the current batching approach. During fiscal years 1995 through 2002, BJS awarded Maryland $6.8 million in NCHIP funds. As shown in table 8, Maryland allocated the largest percentage (40 percent or $2.7 million) of its NCHIP awards for AFIS/Livescan activities. This category, together with NICS/III/criminal records improvement, accounted for over three-fourths (76 percent) of the state’s use of NCHIP funds. Regarding the first category in table 8, Maryland devoted a sizeable portion of its NCHIP award ($1.2 million) to make the state’s automated systems compatible with the FBI’s NCIC database, which was updated and expanded in 2000. In addition, Maryland is using nearly $200,000 of program funds to convert over 700,000 historical arrest records (older than October 1998) to a format compatible with the FBI’s III system. This effort will make older records accessible to the FBI, which will improve NICS background checks. In the category of disposition reporting, Maryland has also implemented a $360,000 NCHIP project to automate reporting from the courts (including case dispositions) to the central records repository on a daily basis. Maryland currently reports dispositions from courts to the state’s central records repository through weekly magnetic tape updates. For purposes of NICS, Maryland is a partial participant state. That is, a designated state agency (Maryland State Police) conducts background checks for handgun purchases, whereas the FBI conducts such checks for long gun purchases. For both types of firearms purchases (handguns and long guns), another state agency (Maryland State Archives) provides support (researching the disposition results of arrests) for criminal history records generated before 1982. In fiscal year 2002, the Maryland State Archives received $41,000 in NCHIP funds to conduct disposition research for NICS queries from the FBI. Earlier, due to a lack of state funding, this state agency had discontinued such research for a period of approximately 3-1/2 months (March 18 to July 2, 2002). According to Maryland and BJS officials, the $41,000 award in 2002 was the first distribution of NCHIP funds to the Maryland State Archives since the inception of the grant program. As shown in table 9, for fiscal years 1995 through 2002, Mississippi allocated approximately three-fourths (76 percent) of its NCHIP funds for projects in the category of NICS/III/criminal records improvements. NCHIP projects in this category centered on creation of and support for the state’s computerized criminal history database. According to state officials, prior to the rollout of the state’s new automated criminal history database in March 1998, Mississippi was without any type of arrest record automation. After the rollout, Mississippi was one of fewer than 10 states with an automated system whereby every arrest record was automatically associated with a fingerprint record and made available to authorized inquirers across the state and the nation. 
Mississippi officials told us that, without NCHIP, this advance in records automation would not have been possible. On the other hand, in responding to BJS’s latest biennial survey (2001), Mississippi reported that 3 percent of its automated criminal records included final dispositions, the lowest among the responding case-study states. However, as indicated in table 9, Mississippi is using NCHIP funds for various projects to improve disposition reporting. During fiscal years 1995 through 2002, BJS awarded Texas $19.5 million in NCHIP funds, the third highest total among all states, behind only California and New York. As shown in table 10, Texas allocated about half (52 percent) of its NCHIP funds for NICS/III/criminal records improvements. A significant project in this category is an ongoing upgrade of the state’s computerized criminal history system. According to state officials, this upgrade will “rewrite” the system to meet new demands and expectations. For example, the rewrite will allow Texas to “flag” domestic violence misdemeanors (a category for prohibiting firearms sales under NICS) at the arrest, prosecution, and court levels. During this period, Texas also allocated 45 percent of its NCHIP funds for AFIS/Livescan activities, the highest percentage for this category among the 5 case-study states. To implement electronic reporting of arrest data, Texas used NCHIP funds to purchase Livescan equipment for placement in 4 major cities and 27 of the state’s 254 counties. According to Texas officials, these cities and counties account for a majority of the state’s total arrests. Also, as shown in table 10, Texas allocated 2 percent of its NCHIP awards for disposition reporting improvements, the lowest among the 5 case-study states. However, according to Texas officials, criminal case disposition reporting is recognized as an area in need of improvement and will be addressed by future projects funded by NCHIP. Also, as an example of recent progress in Texas, BJS noted that NCHIP funds were used to automate approximately 52,600 court disposition records from Harris County—which includes Houston, the most populous city in Texas—for inclusion in the state’s central repository. During fiscal years 1995 through 2002, BJS awarded West Virginia approximately $4.7 million in NCHIP funds. As shown in table 11, West Virginia allocated half of its NCHIP funds for NICS/III/criminal records improvements. Also, the state allocated 35 percent for disposition reporting improvements, the highest percentage for this category among the 5 case-study states. The purpose of the ongoing projects in this category is to automate the reporting of court data (including case dispositions) to the state’s central records repository. According to its 2003 NCHIP grant application, West Virginia was the last state to implement an AFIS. NCHIP funding assisted the state in implementing its system by financing a study to determine AFIS requirements and costs. West Virginia officials noted that plans call for placing Livescan equipment in each of the state’s nine regional jails, which are to be booking sites for all persons entering the state’s criminal justice system. This appendix provides information on the 5 states that BJS identified as having the lowest levels of criminal history record automation in 1994. Maine, Mississippi, New Mexico, Vermont, and West Virginia were designated as priority states, making each eligible to receive an additional $1 million in funding during NCHIP’s first year. 
NCHIP was tasked with implementing statutory grant provisions that required that the states with the lowest levels of criminal history record automation receive priority funds from the program to give them additional help in automating their records. This additional funding for priority states applied to only the first year of NCHIP grant awards. Also, this appendix provides information about whether any of the 50 states have used NCHIP funds to develop or implement a ballistics registration system—that is, a system that stores digital images of the markings made on bullets and cartridge casings when firearms are discharged. For fiscal years 2000 through 2003, table 12 shows that the priority states allocated 70 percent of their NCHIP awards for NICS/III/criminal records improvements and disposition reporting improvements. The remaining 30 percent of the priority states’ NCHIP award amounts was allocated for AFIS/Livescan activities, sex offender registry enhancements, and protection order activities. None of the priority states allocated NCHIP award amounts for national security/antiterrorism activities. The priority states have made progress in automating their criminal history records. Prior to NCHIP, these states had approximately 1.4 million records in manual formats and very few automated records. By 2003, BJS estimated that these 5 states had over 1 million automated records. More specifically, as shown in table 13, biennial surveys of state criminal history record repositories also indicate that the priority states have made progress in automating their records. For example, New Mexico and Mississippi progressed from little or no automation in 1993 to 100 percent automation in 2001. The other priority states also have made progress in automating their records but have not yet achieved full automation. According to Mississippi officials, NCHIP played a critical role in the state’s successes in automating and sharing criminal history information. The officials noted, for instance, that receiving the “priority” designation and the accompanying additional funds enabled Mississippi to begin automating its criminal history records and take advantage of the latest technology developments. Similarly, a West Virginia official commented that the additional priority funding helped the state establish and begin implementing an automated fingerprint identification system, the backbone of West Virginia’s entire records improvement and automation project. Another indicator of progress is participation in III, the system used for a number of law enforcement-related purposes, including background checks of persons purchasing firearms. As of May 2003, 3 of the 5 priority states participated in III, with New Mexico joining the program in 1997 and Mississippi and West Virginia joining in 1998. At the time of our review, Maine and Vermont were not participating in III. According to BJS, Maine’s participation may not occur until sometime in 2004 because the state is in the process of undertaking a major revision of its entire criminal justice information technology infrastructure. Vermont officials reported to BJS that the state is currently using NCHIP funds to install a new system that is fundamental to III participation and that the state will be III-compliant by January 2004. 
States must ensure that their computerized criminal history records systems meet specific FBI criteria and that these systems are compatible with the FBI’s national data systems before the FBI will allow states to provide records nationally through III. The 5 priority states have also increased their participation in other national systems. According to BJS officials, all 5 states participate in the National Sex Offender Registry, 4 of the 5 states have provided some portion of their criminal fingerprints electronically to IAFIS, and 3 states have submitted protection order records to the NCIC Protection Order File. BJS officials said that no NCHIP funds have been used to develop or implement a ballistics registration system—a system typically used as an investigative tool to compare crime scene evidence to the stored images. Also, according to BJS officials, NCHIP funds are to improve the availability of information on the “person,” rather than to improve investigative tools. BJS does not plan to expand the scope of NCHIP funding to include investigative tools because improvements are still needed in the ability to identify prohibited purchasers of firearms, such as individuals with domestic violence misdemeanor convictions. Of the 5 case-study states we visited, only 1 (Maryland) had developed a ballistics registration system. According to BJS and state officials, federal funding was not used to develop or implement this system. In addition to the above, Grace Coleman, Geoffrey Hamilton, Michael H. Harmond, Kevin L. Jackson, Jan B. Montgomery, Jerome T. Sandau, Linda Kay Willard, and Ellen T. Wolfe made key contributions to this report.
Public safety concerns require that criminal history records be accurate, complete, and accessible. Among other purposes, such records are used by the Federal Bureau of Investigation's (FBI) National Instant Criminal Background Check System (NICS) to ensure that prohibited persons do not purchase firearms. Initiated in 1995, the National Criminal History Improvement Program represents a partnership among federal, state, and local agencies to build a national criminal records infrastructure. Under the program, the Department of Justice's Bureau of Justice Statistics (BJS) annually provides federal grants to states to improve the quality of records and their accessibility through NICS and other national systems maintained by the FBI. GAO examined (1) how states have used program grant funds, particularly the extent to which such funds have been used for NICS-related purposes; (2) the progress--using program grants and other funding sources--that states have made in automating criminal history and other relevant records and making them accessible nationally; and (3) the various factors that are relevant considerations for policymakers in debating the future of the program. States have used program grants primarily to support NICS in conducting presale background checks of firearms' purchasers. BJS data show that over 75 percent of the total $164.3 million in program grants awarded in fiscal years 2000 through 2003 was used for NICS-related purposes. These uses encompassed a broad range of activities, such as converting manual records to automated formats and purchasing equipment to implement computerized systems or upgrade existing systems. All other uses of program grants, according to BJS, also had either direct or indirect relevance to building an infrastructure of nationally accessible records. Using their own funds, in addition to the program and other federal grants, states have made progress in automating criminal history records and making them accessible nationally. The percentage of the nation's criminal history records that are automated increased from 79 percent in 1993 to 89 percent in 2001, according to BJS's most recent data. Also, the number of states participating in the Interstate Identification Index--a "pointer system" to locate criminal history records anywhere in the country--increased from 26 at year-end 1993 to 45 by May 2003. But, progress has been more limited for some NICS-related purposes. A national system for domestic violence misdemeanor records is not available. Also, as of May 2003, only 10 states had made mental health records available to NICS, and only 3 states had provided substance abuse records. One of the most relevant factors for policymakers to consider when debating the future of the program is the extent of cumulative progress (and shortfalls) to date in creating national, automated systems. While states have made progress, more work remains. Also, the demand for background checks is growing, and technology is not static, which necessitates periodic upgrades or replacements of automated systems. Continued progress toward establishing and sustaining a national infrastructure inherently will involve long-term commitments from all governmental levels. Justice commented that GAO's report fairly and accurately described the program and its accomplishments.
Although many aspects of an effective response to bioterrorism are the same as those for any form of terrorism, there are some unique features. For example, if a biological agent is released covertly, it may not be recognized for a week or more because symptoms may not appear for several days after the initial exposure and may be misdiagnosed at first. In addition, some biological agents, such as smallpox, are communicable and can spread to others who were not initially exposed. These characteristics require responses that are unique to bioterrorism, including health surveillance, epidemiologic investigation, laboratory identification of biological agents, and distribution of antibiotics to large segments of the population to prevent the spread of an infectious disease. However, some aspects of an effective response to bioterrorism are also important in responding to any type of large-scale disaster, such as providing emergency medical services, continuing health care services delivery, and, potentially, managing mass fatalities. The burden of responding to bioterrorist incidents falls initially on personnel in state and local emergency response agencies. These “first responders” include firefighters, emergency medical service personnel, law enforcement officers, public health officials, health care workers (including doctors, nurses, and other medical professionals), and public works personnel. If the emergency requires federal disaster assistance, federal departments and agencies will respond according to responsibilities outlined in the Federal Response Plan. Under the Federal Response Plan, CDC is the lead Department of Health and Human Services (HHS) agency providing assistance to state and local governments for five functions: (1) health surveillance, (2) worker health and safety, (3) radiological, chemical, and biological hazard consultation, (4) public health information, and (5) vector control. Each of these functions is described in table 1. HHS is currently leading an effort to work with governmental and nongovernmental partners to upgrade the nation’s public health infrastructure and capacities to respond to bioterrorism. As part of this effort, several CDC centers, institutes, and offices work together in the agency’s Bioterrorism Preparedness and Response Program. The principal priority of CDC’s program is to upgrade infrastructure and capacity to respond to a large-scale epidemic, regardless of whether it is the result of a bioterrorist attack or a naturally occurring infectious disease outbreak. The program was started in fiscal year 1999 and was tasked with building and enhancing national, state, and local capacity; developing a national pharmaceutical stockpile; and conducting several independent studies on bioterrorism. CDC is conducting a variety of activities related to research on and preparedness for a bioterrorist attack. Since CDC’s program began 3 years ago, funding for these activities has increased. Research activities focus on detection, treatment, vaccination, and emergency response equipment. Preparedness efforts include increasing state and local response capacity, increasing CDC’s response capacity, preparedness and response planning, and building the National Pharmaceutical Stockpile Program. The funding for CDC’s activities related to research on and preparedness for a bioterrorist attack has increased 61 percent over the past 2 years. See table 2 for reported funding for these activities. 
Funding for CDC’s Bioterrorism Preparedness and Response Program grew approximately 43 percent in fiscal year 2000 and an additional 12 percent in fiscal year 2001. While the percentage increases are significant, they reflect only a $73 million increase because many of the programs initially received relatively small allocations. Approximately $45 million of the overall two-year increase was due to new research activities. Relative changes in funding for the various components of CDC’s Bioterrorism Preparedness and Response Program are shown in Figure 1. Funding for research activities increased sharply from fiscal year 1999 to fiscal year 2000, and then dropped slightly in fiscal year 2001. The increase in fiscal year 2000 was largely due to a $40.5 million increase in research funding for studies on anthrax and smallpox. Funding for preparedness and response planning, upgrading CDC capacity, and upgrading state and local capacity was relatively constant between fiscal year 1999 and fiscal year 2000 and grew in fiscal year 2001. For example, in fiscal year 2001, funding to upgrade CDC capacity increased by 47 percent and funding to upgrade state and local capacity increased by 17 percent. The National Pharmaceutical Stockpile Program experienced a slight increase in funding of 2 percent in fiscal year 2000 and a slight decrease in funding of 2 percent in fiscal year 2001. CDC’s research activities focus on detection, treatment, vaccination, and emergency response equipment. In fiscal year 2001, CDC was allocated $18 million to continue research on an anthrax vaccine and associated issues, such as scheduling and dosage. The agency also received $22.4 million in fiscal year 2001 to conduct smallpox research. In addition, CDC oversees a number of independent studies, which fund specific universities and hospitals to do research and other work on bioterrorism. For example, funding in fiscal year 2001 included $941,000 to the University of Findlay in Findlay, Ohio, to develop training for health care providers and other hospital staff on how to handle victims who come to an emergency department during a bioterrorist incident. Another $750,000 was provided to the University of Texas Medical Branch in Galveston, Texas, to study various viruses in order to discover means to prevent or treat infections by these and other viruses (such as Rift Valley Fever and the smallpox virus). For worker safety, CDC’s National Institute for Occupational Safety and Health is developing standards for respiratory protection equipment used against biological agents by firefighters, laboratory technicians, and other potentially affected workers. Most of CDC’s activities to counter bioterrorism are focused on building and expanding public health infrastructure at the federal, state, and local levels. For example, CDC reported receiving funding to upgrade state and local capacity to detect and respond to a bioterrorist attack. CDC received additional funding for upgrading its own capacity in these areas, for preparedness and response planning, and for developing the National Pharmaceutical Stockpile Program. In addition to preparing for a bioterrorist attack, these activities also prepare the agency to respond to other challenges, such as identifying and containing a naturally occurring emerging infectious disease. CDC provides grants, technical support, and performance standards to support bioterrorism preparedness and response planning at the state and local levels.
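The growth figures cited above (approximately 43 percent in fiscal year 2000, an additional 12 percent in fiscal year 2001, and a cumulative increase of about $73 million) can be cross-checked with simple arithmetic, as in the sketch below. The derived base amount is a back-of-envelope estimate for illustration only; the reported appropriations are those shown in table 2.

```python
# Back-of-envelope check on the reported funding growth figures.
# The 43% and 12% increases and the $73 million cumulative increase are
# taken from the text; the derived base amount is an illustrative estimate,
# not a reported appropriation.

growth_fy2000 = 0.43   # approximate increase in fiscal year 2000
growth_fy2001 = 0.12   # approximate additional increase in fiscal year 2001
cumulative_increase_dollars = 73e6  # approximate two-year increase ($73 million)

# Compounded two-year growth rate.
cumulative_rate = (1 + growth_fy2000) * (1 + growth_fy2001) - 1
print(f"Cumulative growth: {cumulative_rate:.1%}")  # ~60%, consistent with the ~61% cited earlier

# Implied fiscal year 1999 base funding level.
implied_base = cumulative_increase_dollars / cumulative_rate
print(f"Implied FY 1999 base: ${implied_base / 1e6:.0f} million")  # roughly $120 million
```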
In fiscal year 2000, CDC funded 50 states and four major metropolitan health departments for preparedness and response activities. CDC is developing planning guidance for state public health officials to upgrade state and local public health departments’ preparedness and response capabilities. In addition, CDC has worked with the Department of Justice to complete a public health assessment tool, which is being used to determine the ability of state and local public health agencies to respond to release of biological and chemical agents, as well as other public health emergencies. Ten states (Florida, Hawaii, Maine, Michigan, Minnesota, Pennsylvania, Rhode Island, South Carolina, Utah, and Wisconsin) have completed the assessment, and others are currently completing it. States have received funding from CDC to increase staff, enhance capacity to detect the release of a biological agent or an emerging infectious disease, and improve communications infrastructure. In fiscal year 1999, for example, a total of $7.8 million was awarded to 41 state and local health agencies to improve their ability to link different sources of data, such as sales of certain pharmaceuticals, which could be helpful in detecting a covert bioterrorist event. Rapid identification and confirmatory diagnosis of biological agents are critical to ensuring that prevention and treatment measures can be implemented quickly. CDC was allocated $13 million in fiscal year 1999 to enhance state and local laboratory capacity. CDC has established a Laboratory Response Network of federal, state, and local laboratories that maintain state-of-the-art capabilities for biological agent identification and characterization of human clinical samples such as blood. CDC has provided technical assistance and training in identification techniques to state and local public health laboratories. In addition, five state health departments received awards totaling $3 million to enhance chemical laboratory capabilities from the fiscal year 2000 funds. The states used these funds to purchase equipment and provide training. CDC is working with state and local health agencies to improve electronic infrastructure for public health communications for the collection and transmission of information related to a bioterrorism incident as well as other events. For example, $21 million was awarded to states in fiscal year 1999 to begin implementation of the Health Alert Network, which will support the exchange of key information over the Internet and provide a means to conduct distance training that could potentially reach a large segment of the public health community. Currently, 13 states are connected to all of their local jurisdictions. CDC is also directly connected to groups such as the American Medical Association to reach healthcare providers. CDC has described the Health Alert Network as a “highway” on which programs, such as the National Electronic Disease Surveillance System (NEDSS) and the Epidemic Information Exchange (Epi-X), will run. NEDSS is designed to facilitate the development of an integrated, coherent national system for public health surveillance. Ultimately, it is meant to support the automated collection, transmission, and monitoring of disease data from multiple sources (for example, clinician’s offices and laboratories) from local to state health departments to CDC. This year, a total of $10.9 million will go to 36 jurisdictions for new or continuing NEDSS activities. 
Epi-X is a secure, Web-based exchange for public health officials to rapidly report and discuss disease outbreaks and other health events potentially related to bioterrorism as they are identified and investigated. CDC is upgrading its own epidemiologic and disease surveillance capacity. It has deployed, and is continuing to enhance, a surveillance system to increase surveillance and epidemiological capacities before, during, and after special events (such as the 1999 World Trade Organization meeting in Seattle). Besides improving emergency response at the special events, the agency gains valuable experience in developing and practicing plans to combat terrorism. In addition, CDC monitors unusual clusters of illnesses, such as influenza in June. Although unusual clusters are not always a cause for concern, they can indicate a potential problem. The agency is also increasing its surveillance of disease outbreaks in animals. CDC has strengthened its own laboratory capacity. For example, it is developing and validating new diagnostic tests as well as creating agent-specific detection protocols. In collaboration with the Association of Public Health Laboratories and the Department of Defense, CDC has started a secure Web-based network that allows state, local, and other public health laboratories access to guidelines for analyzing biological agents. The site also allows authenticated users to order critical reagents needed in performing laboratory analysis of samples. The agency has also opened a Rapid Response and Advanced Technology Laboratory, which screens samples for the presence of suspicious biological agents and evaluates new technology and protocols for the detection of biological agents. These technology assessments and protocols, as well as reagents and reference samples, are being shared with state and local public health laboratories. One activity CDC has undertaken is the implementation of a national bioterrorism response training plan. This plan focuses on preparing CDC officials to respond to bioterrorism and includes the development of exercises to assess progress in achieving bioterrorism preparedness at the federal, state, and local levels. The agency is also developing a crisis communications/media response curriculum for bioterrorism, as well as core capabilities guidelines to assist states and localities in their efforts to build comprehensive anti-bioterrorism programs. CDC has developed a bioterrorism information Web site. This site provides emergency contact information for state and local officials in the event of possible bioterrorism incidents, a list of critical biological and chemical agents, summaries of state and local bioterrorism projects, general information about CDC’s bioterrorism initiative, and links to documents on bioterrorism preparedness and response. The National Pharmaceutical Stockpile Program maintains a repository of life-saving pharmaceuticals, antidotes, and medical supplies, known as 12-Hour Push Packages, that could be used in an emergency, including a bioterrorist attack. The packages can be delivered to the site of a biological (or chemical) attack within 12 hours of deployment for the treatment of civilians. The first emergency use of the National Pharmaceutical Stockpile occurred on September 11, 2001, when, in response to the terrorist attack on the World Trade Center, CDC released one of the eight Push Packages.
The National Pharmaceutical Stockpile also includes additional antibiotics, antidotes, other drugs, medical equipment, and supplies, known as the Vendor Managed Inventory, that can be delivered within 24 to 36 hours after the appropriate vendors are notified. Deliveries from the Vendor Managed Inventory can be tailored to an individual incident. The program received $51.0 million in fiscal year 1999, $51.8 million in fiscal year 2000, and $51.0 million in fiscal year 2001. CDC and the Office of Emergency Preparedness (another agency in HHS that also maintains a stockpile of medical supplies) have encouraged state and local representatives to consider stockpile assets in their emergency planning for a biological attack and have trained representatives from state and local authorities in using the stockpile. The stockpile program also provides technical advisers in response to an event to ensure the appropriate and timely transfer of stockpile contents to authorized state representatives. Recently, individuals who may have been exposed to anthrax through the mail have been given antibiotics from the Vendor Managed Inventory. While CDC has funded research and preparedness programs for bioterrorism, a great deal of work remains to be done. CDC and HHS have identified gaps in bioterrorism research and preparedness that need to be addressed. In addition, some of our work on naturally occurring diseases also indicates gaps in preparedness that would be important in the event of a bioterrorist attack. Gaps in research activities center on vaccines and field testing for infectious agents. CDC has reported that it needs to continue the smallpox vaccine development and production contract begun in fiscal year 2000. This includes clinical testing of the vaccine and submitting a licensing application to the Food and Drug Administration for the prevention of smallpox in adults and children. CDC also plans to conduct further studies of the anthrax vaccine. This research will include studies to better understand the immunological response that correlates with protection against inhalation anthrax and risk factors for adverse events, as well as investigating modified vaccination schedules that could maintain protection and result in fewer adverse reactions. The agency has also indicated that it needs to continue research in the area of rapid assay tests to allow field diagnosis of a biological or chemical agent. Gaps remain in all of the areas of preparedness activities under CDC’s program. In particular, there are many unmet needs in upgrading state and local capacity to respond to a bioterrorist attack. There are also further needs in upgrading CDC’s capacity, preparedness and response planning, and building the National Pharmaceutical Stockpile. Health officials at many levels have called for CDC to support bioterrorism planning efforts at the state and local level. In a series of regional meetings from May through September 2000 to discuss issues associated with developing comprehensive bioterrorism response plans, state and local officials identified a need for additional federal support of their planning efforts. This includes federal efforts to develop effective written planning guidance for state and local health agencies and to provide on-site assistance that will ensure optimal preparedness and response. HHS has noted that surveillance capabilities need to be increased.
In addition to enhancing traditional state and local capabilities for infectious disease surveillance, HHS has recognized the need to expand surveillance beyond the boundaries of the public health departments. In the department’s FY 2002—FY 2006 Plan for Combating Bioterrorism, HHS notes that potential sources for data on morbidity trends include 911 emergency calls, reasons for emergency department visits, hospital bed usage, and the purchase of specific products at pharmacies. Improved monitoring of food is also necessary to reduce its vulnerability as an avenue of infection and of terrorism. Other sources beyond public health departments can provide critical information for detection and identification of an outbreak. For example, the 1999 West Nile virus outbreak showed the importance of links with veterinary surveillance. Initially there were two separate investigations: one of sick people, the other of dying birds. Once the two investigations converged, the link was made, and the virus was correctly identified. HHS has found that state and local laboratories need to continue to upgrade their facilities and equipment. The department has stated that it would be beneficial if research, hospital, and commercial laboratories that have state-of-the-art equipment and well-trained staff were added to the National Laboratory Response Network. Currently, there are 104 laboratories in the network that can provide testing of biological samples for detection and confirmation of biological agents. Based on the 2000 regional meetings, CDC concluded that it needs to continue to support the laboratory network and identify opportunities to include more clinical laboratories to provide additional surge capacity. CDC also concluded from the 2000 regional meetings that, although it has begun to develop information systems, it needs to continue to enhance these systems to detect and respond to biological and chemical terrorism. HHS has stated that the work that has begun on the Health Alert Network, NEDSS, and Epi-X needs to continue. One aspect of this work is developing, testing, and implementing standards that will permit surveillance data from different systems to be easily shared. During the West Nile virus outbreak, while a secure electronic communication network was in place at the time of the initial outbreak, not all involved agencies and officials were capable of using it at the same time. For example, because CDC’s laboratory was not linked to the New York State network, the New York State Department of Health had to act as an intermediary in sharing CDC’s laboratory test results with local health departments. CDC and the New York State Department of Health laboratory databases were not linked to the database in New York City, and laboratory results consequently had to be manually entered there. These problems slowed the investigation of the outbreak. Moreover, we have testified that there is also a notable lack of training focused on detecting and responding to bioterrorist threats. Most physicians and nurses have never seen cases of certain diseases, such as smallpox or plague, and some biological agents initially produce symptoms that can be easily confused with influenza or other, less virulent illnesses, leading to a delay in diagnosis or identification. Medical laboratory personnel require training because they also lack experience in identifying biological agents such as anthrax. HHS has stated that epidemiologic capacity at CDC also needs to be improved.
A standard system of disease reporting would better enable CDC to monitor disease, track trends, and intervene at the earliest sign of unusual or unexplained illness. HHS has noted that CDC needs to enhance its in-house laboratory capabilities to deal with likely terrorist agents. CDC plans to develop agent-specific detection and identification protocols for use by the laboratory response network, a research agenda, and guidelines for laboratory management and quality assurance. CDC also plans further development of its Rapid Response and Advanced Technology Laboratory. As we reported in September 2000, even the West Nile virus outbreak, which was relatively small and occurred in an area with one of the nation’s largest local public health agencies, taxed the federal, state, and local laboratory resources. Both the New York State and the CDC laboratories were quickly inundated with requests for tests during the West Nile virus outbreak, and because of the limited capacity at the New York laboratories, the CDC laboratory handled the bulk of the testing. Officials indicated that the CDC laboratory would have been unable to respond to another outbreak, had one occurred at the same time. CDC plans to work with other agencies in HHS to develop guidance to facilitate preparedness planning and associated investments by local-level medical and public health systems. The department has stated that to the extent that the guidance can help foster uniformity across local efforts with respect to preparedness concepts and structural and operational strategies, this would enable government units to work more effectively together than if each local approach was essentially unique. More generally, CDC has found a need to implement a national strategy for public health preparedness for bioterrorism, and to work with federal, state, and local partners to ensure communication and teamwork in response to a potential bioterrorist incident. Planning needs to continue for potential naturally occurring epidemics as well. In October 2000, we reported that federal and state influenza pandemic plans are in various stages of completion and do not completely or consistently address key issues surrounding the purchase, distribution, and administration of vaccines and antiviral drugs. At the time of our report, 10 states either had developed or were developing plans using general guidance from CDC, and 19 more states had plans under development. Outstanding issues remained, however, because certain key federal decisions had not been made. For example, HHS had not determined the proportion of vaccines and antiviral drugs to be purchased, distributed, and administered by the public and private sectors or established priorities for which population groups should receive vaccines and antiviral drugs first when supplies are limited. As of July 2001, HHS continued to work on a national plan. As a result, policies may differ among states and between states and the federal government, and in the event of a pandemic, these inconsistencies could contribute to public confusion and weaken the effectiveness of the public health response. The recent anthrax incidents have focused a great deal of attention on the national pharmaceutical stockpile. Prior to this, in its FY2002 – FY 2006 Plan for Combating Bioterrorism, HHS had indicated what actions would be necessary regarding the stockpile over the next several years. 
These included purchasing additional products so that pharmaceuticals were available for treating additional biological agents in fiscal year 2002, and conducting a demonstration project that incorporates the National Guard in planning for receipt, transport, organization, distribution, and dissemination of stockpile supplies in fiscal year 2003. CDC also proposed providing grants to cities in fiscal year 2004 to hire a stockpile program coordinator to help the community develop a comprehensive plan for handling the stockpile and organizing volunteers trained to manage the stockpile during a chemical or biological event. Clearly, these longer-range plans are changing, but the need for these activities remains. For further information about this statement, please contact me at (202) 512-7118. Robert Copeland, Marcia Crosse, Greg Ferrante, David Gootnick, Deborah Miller, and Roseanne Price also made key contributions to this statement.

Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts (GAO-02-208T, Oct. 31, 2001).
Terrorism Insurance: Alternative Programs for Protecting Insurance Consumers (GAO-02-199T, Oct. 24, 2001).
Terrorism Insurance: Alternative Programs for Protecting Insurance Consumers (GAO-02-175T, Oct. 24, 2001).
Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness (GAO-02-162T, Oct. 17, 2001).
Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness (GAO-02-145T, Oct. 15, 2001).
Homeland Security: Key Elements of a Risk Management Approach (GAO-02-150T, Oct. 12, 2001).
Bioterrorism: Review of Public Health Preparedness Programs (GAO-02-149T, Oct. 10, 2001).
Bioterrorism: Public Health and Medical Preparedness (GAO-02-141T, Oct. 9, 2001).
Bioterrorism: Coordination and Preparedness (GAO-02-129T, Oct. 5, 2001).
Bioterrorism: Federal Research and Preparedness Activities (GAO-01-915, Sept. 28, 2001).
Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, Sept. 20, 2001).
Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Terrorism Preparedness (GAO-01-555T, May 9, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-666T, May 1, 2001).
Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, Apr. 24, 2001).
Federal research and preparedness activities related to bioterrorism center on detection; the development of vaccines, antibiotics, and antivirals; and the development of performance standards for emergency response equipment. Preparedness activities include (1) increasing federal, state, and local response capabilities; (2) developing response teams; (3) increasing the availability of medical treatments; (4) participating in and sponsoring exercises; (5) aiding victims; and (6) providing support at special events, such as presidential inaugurations and Olympic games. To coordinate their efforts to combat terrorism, federal agencies are developing interagency response plans, participating in various interagency work groups, and entering into formal agreements with other agencies to share resources and capabilities. However, coordination of federal terrorism research, preparedness, and response programs is fragmented, raising concerns about the ability of states and localities to respond to a bioterrorist attack. These concerns include insufficient state and local planning and a lack of hospital participation in training on terrorism and emergency response planning. This testimony summarizes a September 2001 report (GAO-01-915).
Terrorists have targeted federal facilities several times over the past 10 years. After the 1995 bombing of the Alfred P. Murrah Federal Building in Oklahoma City, the Department of Justice created minimum-security standards for federal facilities. In October 1995, the President signed Executive Order 12977, which established ISC. ISC was expected to enhance the quality and effectiveness of security in, and protection of, facilities in the United States occupied by federal employees for nonmilitary activities and to provide a permanent body to address continuing governmentwide security issues for federal facilities. ISC is expected to have representation from all the major federal departments and agencies, as well as a number of key offices. ISC’s specific responsibilities under the executive order generally relate to three areas: developing policies and standards, ensuring compliance and overseeing implementation, and sharing and maintaining information. Related to policies and standards, the executive order specifically states that ISC is to establish policies for security in and protection of federal facilities; develop and evaluate security standards for federal facilities; assess technology and information systems as a means of providing cost-effective improvements to security in federal facilities; develop long-term construction standards for those locations with threat levels or missions that require blast-resistant structures or other specialized security requirements; and evaluate standards for the location of, and special security related to, day care centers in federal facilities. In the area of compliance and oversight, ISC is to develop a strategy for ensuring compliance with facility security standards and oversee the implementation of appropriate security measures in federal facilities. And, related to sharing and maintaining information, ISC is to encourage agencies with security responsibilities to share security related intelligence in a timely and cooperative manner and assist with developing and maintaining a centralized security database of all federal facilities. Since September 11, the focus on protecting the nation’s critical infrastructure has been heightened considerably. The Homeland Security Act of 2002 and other administration policies assigned DHS specific duties associated with coordinating the nation’s efforts to protect critical infrastructure, and Homeland Security Presidential Directive Number 7 (HSPD-7) stated that DHS’s Secretary was responsible for coordinating the overall national effort to identify, prioritize, and protect critical infrastructure and key resources. Under the Homeland Security Act of 2002, the Federal Protective Service (FPS) was transferred from GSA to DHS and, as a result of this transfer, DHS assumed responsibility for ISC in March 2003. In September 2002, we reported that ISC was having limited success in fulfilling its responsibilities. Specifically, ISC had made little or no progress in areas including developing and establishing policies for security in and protection of federal facilities and developing a strategy for ensuring compliance with security standards. In January 2003, we designated federal property as a high-risk area, in part due to the threat of terrorism against federal facilities. 
As the government’s security efforts continue to intensify, and real property-holding agencies employ such measures as searching vehicles that enter federal facilities, restricting parking, and installing concrete bollards, important questions continue to be raised regarding the level of security needed to adequately protect federal facilities and how the security community should proceed. Figure 1 shows bollards installed at the Jacob Javits Federal Building in New York, New York. Additionally, questions concerning the cost-effectiveness and impact of various practices have emerged as the nation faces a protracted war on terrorism. ISC has made progress in coordinating the government’s facility protection efforts and has been given a prominent role in reviewing agencies’ physical security plans for the administration since we last reported on this issue. In September 2002, we reported that ISC, at that time, had made little or no progress in key elements of its responsibilities, such as developing policies and standards for security at federal facilities; ensuring compliance with security standards and overseeing the implementation of appropriate security in federal facilities; and related to information, developing a centralized security database of all federal facilities. Agency representatives identified several factors that they believe contributed to ISC’s limited progress. These factors included (1) the lack of consistent and aggressive leadership by GSA, (2) inadequate staff support and funding for ISC, and (3) ISC’s difficulty in making decisions. Nonetheless, there were areas where we observed some progress over its then 7-year existence. For example, ISC had developed and issued security design criteria and minimum standards for building access procedures; disseminated information to member agencies on entry security technology for buildings needing the highest security levels; and, through its meetings and working groups, provided a forum for federal agencies to discuss security-related issues and share information and ideas. In commenting on the September 2002 report, GSA, which at the time had responsibility for chairing ISC, agreed to take action to address the shortcomings we identified. In March 2003, in accordance with the Homeland Security Act of 2002, FPS was transferred from GSA to DHS. As a result, DHS assumed responsibility for chairing ISC, and the executive order establishing ISC was amended to reflect the transfer of this function from GSA to DHS. Transferring responsibility for ISC to DHS reflected the shift to having homeland security activities centralized under one cabinet- level department. Within DHS, the role of chairing ISC was subsequently delegated to the Director of FPS in January 2004. Since our 2002 report, ISC has made clear progress in developing policies and standards and maintaining and sharing information. Related to policies and standards, ISC issued security standards for leased space in July 2003, and OMB has approved them. These standards address security requirements for leased facilities and, according to an ISC official, are currently being used by ISC member agencies as a management tool. In June 2003, ISC issued guidance on escape hoods for federal agencies and, in October 2003, ISC issued an update to its May 2001 Security Design Criteria for New Federal Office Buildings and Major Modernization Projects. According to an FPS official, GSA is incorporating ISC’s Security Design Criteria in the construction of new facilities. 
More recently, ISC became involved with Homeland Security Presidential Directive Number 12 (HSPD-12), issued in August 2004, which seeks to standardize identification for federal employees and contractors. According to the directive, wide variations in the quality and security of forms of identification used to gain access to federal facilities, where there is a potential for terrorist attacks, need to be eliminated. ISC’s Executive Director informed us that he was asked to be a member of the White House Homeland Security Council Coordination Committee for HSPD-12. This ISC official would provide the leadership role for this committee and ensure that physical security requirements for the federal government, as they relate to the directive, are included and coordinated with ISC members. Related to its role in maintaining and sharing information, ISC has developed a Web site for posting policies and guidance and is developing a secure Web portal for member agencies to exchange security guidance and other information. Also, according to the Executive Director of ISC, standard operating procedures were approved by ISC members in June 2004 and were finalized in September 2004. These operating procedures are intended to improve the quality of information sharing among member agencies at its meetings by establishing standards for attendance and participation at ISC meetings. For example, each ISC agency representative is required to attend all meetings or delegate a person to attend to ensure full participation. Finally, DHS is developing a governmentwide facilities database that the ISC Executive Director believes will meet ISC’s responsibility to assist with developing and maintaining a centralized security database of all federal facilities. This database will list functions and services that are mission critical, map federal assets and their critical infrastructure, and identify key resources for both cyber and physical security protection. According to ISC’s Executive Director, ISC members are an integral part of this process and will ensure that the required support from within their departments and agencies is provided. Despite progress in its other areas of responsibility, ISC has not developed, as specified in its 1995 executive order, a strategy for ensuring compliance with security standards among agencies and overseeing the implementation of appropriate security measures in federal facilities. However, in July 2004, the administration made ISC responsible for annually reviewing and approving physical security plans that agencies are required to develop under a presidential homeland security policy directive. HSPD-7, issued in December 2003, establishes a national policy for federal departments and agencies to identify and prioritize critical infrastructure and key resources in the United States so that they can be protected from terrorist attacks. The directive makes DHS responsible for overseeing the implementation of the directive and outlines the roles and responsibilities of individual agencies. Among the roles and responsibilities delineated, HSPD-7 establishes an annual reporting cycle for agencies to evaluate their critical infrastructure and key resources protection plans for both cyber and physical security. ISC’s Executive Director informed us that in July 2004, the administration designated ISC as the oversight body for agencies’ physical security plans. 
According to ISC’s Executive Director, ISC’s role will be to review, approve, or disapprove each department or agency’s physical security plan. If ISC were to successfully fulfill its new responsibilities under HSPD-7, which would be done under the broader umbrella of the administration’s central planning and coordination efforts for homeland security, it would represent a major step toward meeting its responsibilities that relate to oversight and compliance monitoring, as specified in the 1995 executive order under which it was established. That is, the 1995 executive order that established ISC specified that ISC should develop a strategy for ensuring agencies’ compliance with governmentwide facility protection standards and oversee the implementation of appropriate security measures in federal facilities. By having a role in reviewing agencies’ physical security plans in relation to HSPD-7, ISC would have a vehicle for carrying out its existing responsibility related to compliance and oversight. Appendix III identifies each of ISC’s major responsibilities under the executive order and actions it has taken to date to fulfill them. ISC’s Executive Director identified several challenges that relate to ISC’s many roles and responsibilities in coordinating the government’s facility protection efforts. These included the following: reaching a consensus with agencies on a risk management process for the government that is reasonable and obtaining funding for this activity; addressing the issue of leased government space and the impact that new physical security standards for leased space will have on the real estate market; developing a compliance process for agencies that can also be used as a self-assessment tool to measure the effectiveness of ISC; educating senior-level staff from across the government and gaining their support for ISC activities; and overall, integrating all physical security initiatives for the entire federal government and implementing change. We agree that ISC faces these challenges and, furthermore, that they will have to be addressed in order for ISC to be successful. More specifically, the sheer magnitude of integrating the government’s facility protection initiatives, which ISC and FPS officials identified, is formidable because it involves many different agencies and varying perspectives on security. Furthermore, in discussing the challenges associated with leased property, ISC’s Executive Director touched on one of several long-standing problems in the federal real property area that have implications for facility protection policy. As reported in GAO’s 2003 high-risk report on federal real property, the government’s historical reliance on costly leased space—which achieves short-term budget savings but is more costly over the longer term—is problematic. To the extent that private sector lessors are required to enhance the security of their property for federal tenants, the associated costs will likely be passed on to the government in the form of higher rent. Another long-standing problem that could affect ISC as it attempts to meet its responsibilities is the historically unreliable nature of agency real property data. Poor data could make it difficult for agency management to implement and oversee comprehensive risk-based approaches to protecting their facilities. As discussed later, risk management, as it pertains to facility protection, relies heavily on accurate and timely data.
At the governmentwide level, inventory data maintained by GSA for the entire government, and financial data on property reported in the government’s financial statements, have also been historically unreliable. Another challenge identified by ISC’s Executive Director—obtaining adequate resources for its activities—is a particular concern. According to the Executive Director of ISC, as the ISC’s only full-time staff person, his ability to ensure that all of ISC’s responsibilities are fulfilled is limited. Also, according to this official, ISC is dependent entirely on participation and input from member agencies. ISC’s Executive Director said that, in the past, getting buy-in and support from senior officials in member agencies had been a challenge. It seems, however, that given ISC’s new role in the administration’s homeland security efforts, it could make a persuasive case for a sustained level of support from agencies. Also, it is important to note that DHS has certain responsibilities under the executive order that established ISC to ensure it has adequate resources. Specifically, the executive order states that “to the extent permitted by law and subject to the availability of appropriations, the Secretary of Homeland Security should provide ISC with such administrative services, funds, facilities, staff, and other support services as may be necessary for the performance of its functions.” According to ISC’s Executive Director, current ISC resources are not sufficient for ISC to meet all of its evolving responsibilities. This official told us that additional funding for ISC will not be available until fiscal year 2006. However, given the prominent role ISC will be playing in the administration’s homeland security efforts, it will be critical for DHS to help ISC undertake activities that will allow it to fulfill its responsibilities, address other challenges it faces, and ultimately be successful. Given the challenges ISC faces, its new responsibility related to HSPD-7 for reviewing agencies’ physical security plans, and the need to sustain progress it has made in fulfilling its responsibilities, ISC would benefit from having a clearly defined action plan for achieving results. Although ISC has taken steps to address challenges, such as seeking additional resources for fiscal year 2006, it lacks an action plan that could be used to (1) provide DHS and other stakeholders with detailed information on, and a rationale for, its resource needs; (2) garner and maintain the support of ISC member agencies, DHS management, OMB, and Congress; (3) identify implementation goals and measures for gauging progress in fulfilling all of its responsibilities; and (4) propose strategies for addressing the challenges ISC faces. Such a plan could incorporate the strategy for ensuring compliance with facility protection standards that is required under ISC’s executive order, but has not yet been developed. Without an overall action plan for meeting this and other responsibilities, ISC’s strategy and time line for these efforts remain unclear. Having an effective ISC is critically important to the government’s overall homeland security efforts as new threats emerge and agencies continue to focus on improving facility protection. Prior to 1995, there were no governmentwide standards for security at federal facilities and agencies’ efforts to coordinate and share information needed improvement. 
Without standards and mechanisms for coordination, there were concerns about the vulnerability of federal facilities to acts of terrorism. As recently as August 2004, information from DHS showed that threats against high-profile facilities in the New York area and Washington, D.C., are still a major concern. As ISC and agencies have paid greater attention to facility protection in recent years, several key practices have emerged that collectively could provide a framework for guiding agencies’ efforts. As discussed in more detail later, ISC could play a vital role in promoting key practices in relation to its information sharing responsibilities. Key facility protection practices that we identified include allocating security resources using risk management, leveraging the use of security technology, coordinating protection efforts and sharing information with other stakeholders, and measuring program performance and testing security initiatives. In addition, we determined that two other practices GAO has highlighted as governmentwide issues also have implications for the facility protection area. These include realigning real property assets to agencies’ missions, thereby reducing vulnerabilities, and strategic human capital management, to ensure that agencies are well equipped to recruit and retain high-performing security professionals. Our analysis—based on our work and Inspector General (IG) reports, the views of the NAS symposium experts in facility protection, and interviews with federal agencies—showed that attention to these key practices could provide a framework for guiding agencies’ efforts and achieving success in the facility protection area. Figure 2 identifies each of these key practices. Our discussions with the major property-holding agencies and analysis of documents we obtained showed that each agency used some form of risk management to protect its facilities. Some examples of how agencies applied risk management are as follows: According to officials with FPS, which protects federally owned or occupied facilities held by GSA and DHS, security needs and related countermeasures are prioritized based on the level of risk to a particular facility. Risk is determined by evaluating the impact of loss and vulnerability that each specific threat would have on a facility. According to these officials, FPS inspectors are trained to make educated decisions on applicable countermeasures to the identified threats and vulnerabilities on a recurring basis. We have reported that, for many years, DOE has employed risk-based security practices. To manage potential risks, DOE uses a classified document referred to as a “design basis threat” (DBT). The DBT identifies the potential size and capabilities of terrorist forces and is based on information DOE gathers from the intelligence community. DOE requires contractors operating its sites to provide sufficient protective forces and equipment to defend against the threat contained in the DBT. DOE updated its 1999 DBT in May 2003 to better reflect current and projected terrorist threats in the aftermath of September 11. VA conducts physical security assessments and prioritizes its protection efforts for critical infrastructure, according to VA officials. The phases of the assessment include defining the criticality of VA facilities, identifying and analyzing vulnerabilities of VA’s critical facilities, and identifying appropriate countermeasures.
According to VA documents, VA determines vulnerability by factors such as facility population, number of floors in the facility, and the presence or absence of armed officers. This assessment also includes a procedure for scoring and prioritizing identified vulnerabilities at each assessed site. We have reported that DOD requires its installations to assess, identify, and evaluate potential threats to the installation; identify weaknesses and countermeasures to address the installation’s vulnerabilities; and evaluate and rank criticality of the installation’s assets to achieving mission goals. These three assessments serve as the foundation of each DOD installation’s antiterrorism plan. The results of the assessments are used to balance threats and vulnerabilities and to define and prioritize related resource and operational requirements. Interior’s Office of Law Enforcement and Security (OLES) has identified 16 Interior assets as needing special consideration because they are critical to the nation’s infrastructure or are national icons that could be targets for symbolic reasons. Having a rationale such as this, for focusing on certain assets, represents Interior’s approach to risk management at the departmentwide level. According to USPS officials, USPS’s physical security program incorporates a risk assessment methodology and a layered approach to facility security. This effort involves annual security surveys of facilities conducted by facility security control officers and periodic comprehensive reviews at larger core postal facilities by the Postal Inspection Service, which is the investigative branch of USPS. In commenting on this report, State noted that another example of an agency’s use of risk management is State’s Long-Range Overseas Buildings Plan (LROBP). LROBP is a 6-year plan, updated yearly, that identifies embassy and consulate facilities most in need of replacement due to unacceptable security, safety, and/or operational conditions. State also said that the plan identifies State’s facilities’ program objectives and prioritizes competing facility requirements with input from the Bureaus of Overseas Buildings Operations (OBO) and Diplomatic Security (DS), State’s Regional Bureaus, and other overseas agencies. State indicated that the LROBP provides a road map for addressing long-term facility needs under the Capital Security Construction Program, Regular Capital Construction Program, as well as major rehabilitation, compound security, and other programs. According to State’s comments, to prepare the plan, each year OBO and DS meet with the regional bureaus to discuss which posts should move into the “top 80” list, which contains the 80 primary posts requiring replacement for security reasons, and for which, by law, the department can spend security capital construction appropriations. Furthermore, with respect to the original full list of facilities that need replacement, the department, working with intelligence agencies, prioritizes these facilities. At the NAS symposium, a private sector security expert discussed a risk management methodology in use by FPS at GSA and Internal Revenue Service facilities. We did not review the usefulness or effectiveness of this methodology. Nonetheless, the methodology is an example of one risk management process that is in use. The process, called Federal Security Risk Management, or FSRM, is a risk matrix that compares credible threats with assets and assesses the impact of loss and vulnerability. 
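To make the risk matrix concept concrete, the sketch below scores each credible threat to a facility by its impact of loss and its vulnerability and flags combinations that exceed an acceptability threshold, after which countermeasures would be applied and the risk re-scored. The 1-to-5 rating scale, the multiplicative scoring, the threshold, and the example threats are illustrative assumptions only; they are not the actual FSRM weightings or criteria used by FPS.

```python
# Minimal sketch of a risk-matrix approach in the spirit of FSRM:
# compare credible threats with an asset, score impact of loss and
# vulnerability, and flag unacceptable risks for countermeasures.
# Scales, scores, and the threshold below are illustrative assumptions.

# (threat, impact_of_loss, vulnerability) rated on a hypothetical 1-5 scale
# for a single facility.
threat_assessments = [
    ("vehicle bomb at perimeter", 5, 4),
    ("unauthorized entry through lobby", 3, 3),
    ("mail-borne biological agent", 4, 2),
]

ACCEPTABLE_RISK = 9  # hypothetical threshold; product scores above this are unacceptable

def assess(assessments):
    """Return threats ranked by risk score, flagging unacceptable ones."""
    scored = []
    for threat, impact, vulnerability in assessments:
        risk = impact * vulnerability
        scored.append((risk, threat, risk > ACCEPTABLE_RISK))
    return sorted(scored, reverse=True)

for risk, threat, unacceptable in assess(threat_assessments):
    # In practice, flagged risks would receive countermeasures and be
    # re-scored until the desired level of risk reduction is achieved.
    status = "apply countermeasures and re-score" if unacceptable else "acceptable"
    print(f"{threat}: risk={risk} -> {status}")
```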
According to the panelist, agencies use the risk matrix to apply security upgrades to the risks deemed unacceptable and reevaluate the countermeasures until a desired level of risk reduction is achieved. The agencies then develop design or retrofit specifications and criteria. This risk assessment cycle generally spans a 2- to 4-year period. According to the panelist, once unacceptable risks are addressed through countermeasures, agencies need to reevaluate risks and vulnerabilities on an ongoing basis. By efficiently using technology to supplement and reinforce other security measures, vulnerabilities that are identified by the risk management process can be more effectively addressed with appropriate countermeasures. Our work showed broad concurrence among GAO, IGs, facility security experts, and agency experts that making efficient use of security technology to protect federal facilities is a key practice, but that the type of technology to use should be carefully analyzed. For example, in reporting on border security and information security issues in 2003, we found that prior to significant investment in a project, a detailed analysis should be conducted to determine whether the benefits of a technology outweigh the costs, as well as to determine the effects of the technology on areas such as privacy and convenience. In the facility access control area, we also reported that agencies should decide how technology will be used and whether to use technology at all to address vulnerabilities before implementation. According to our 2003 testimony on using technologies to secure federal facilities, technology implementation costs can be high, particularly if significant infrastructure modifications are necessary. Another consideration is that lesser technological solutions sometimes may be more effective and less costly than more advanced technologies. For example, as we reported in 2002, trained dogs are an effective and time-proven tool for detecting concealed explosives. By using the risk management process and balancing costs, benefits, and other concerns, agencies can efficiently leverage technologies to enhance facility protection. Among the advanced technologies that were identified during our review were smart cards—which use integrated circuit chips to store information on individuals—and biometrics—which analyze human physical and behavioral characteristics—to verify the identity of employees. Furthermore, sophisticated detection and surveillance systems such as closed circuit television (CCTV) have also aided in securing facility perimeters and monitoring activity in the building. Such technologies expand surveillance capabilities and can free up security staff for other duties. Several GAO and IG reports indicated that agencies currently have a wide array of security technologies available for protecting facilities, including smart cards, biometrics, X-ray scanners, and CCTV. As we reported in 2002, technologies identified as countermeasures through the risk management process support the following three integral concepts for security:
Protection—Provides countermeasures such as policies, procedures, and technical controls to defend assets against attacks.
Detection—Monitors for potential breakdowns in protective mechanisms that could result in security breaches.
Reaction—Responds to detected breaches to thwart attacks before damage can be done.
In GAO’s April 2002 testimony on security technologies, we categorized the security technologies by which security concept they supported.
Figure 3 lists the technologies and provides descriptions of each. Several of the major property-holding agencies we contacted use various security technologies to protect their facilities. For example, to control access to its embassies, State employs alarm systems, arrest barriers to stop vehicles, audio/video monitoring equipment, explosive detection devices and metal detectors, and X-ray machines. Officials at USPS indicated that various detection technologies are used to secure its facilities against biological and radiological agents. For example, as we reported in 2002, USPS installed high-efficiency particulate air (HEPA) filtration systems at some facilities to protect them from biohazards. HEPA filtering technology is designed to remove particulate biohazards and other particles. Currently, GSA is conducting a smart card pilot program for two federal buildings in New York City. Although the first cards went into use in October 2003, planning for the pilot program began before the September 11 terrorist attacks. One of the federal buildings participating in the program is the Jacob Javits Federal Building, which houses approximately 35 agencies and more than 7,000 federal employees. All of the employees participating in the program use smart cards to enter the building. In addition to a person’s name, title, and picture, the smart card contains multiple layers of data substantiating the card’s authenticity and personal biometric data of the cardholder. Employees use the smart cards at access portals near the building’s entrances (see fig. 4). After the portal has read the smart card and validated the user, glass doors swing apart to allow entry. If the threat level is raised under the homeland security advisory system, the building access technology requires additional security procedures (e.g., entering a personal identification number (PIN), matching a stored biometric record). A simplified sketch of this threat-level-dependent access logic appears below. Although agencies’ use of smart cards in the building has been optional, all of the agencies in the Javits building are currently participating in the pilot program, including the Federal Bureau of Investigation, the Small Business Administration, and the Department of Housing and Urban Development. Overall, it was evident during our review that agencies are already using or experimenting with a range of technologies in their facility protection efforts. In terms of key practices, it is important to note that focusing on obtaining and implementing the latest technology is not necessarily a key practice by itself. Instead, having an approach that allows for cost-effectively leveraging technology to supplement and reinforce other measures would represent an advanced security approach in this area. Also, linking the chosen technology to countermeasures identified as part of the risk management process provides assurance that factors such as purpose, cost, and expected performance were addressed. Information sharing and coordination among organizations are crucial to producing comprehensive and practical approaches and solutions to address terrorist threats directed at federal facilities. Our work showed a broad consensus—on the basis of prior GAO and IG work and information from agencies and the private sector—that by having a process in place to obtain and share information on potential threats to federal facilities, agencies can better understand the risk they face and more effectively determine what preventive measures should be implemented.
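The building access procedure described above (card validation at a portal, with additional checks such as a PIN entry or biometric match when the advisory-system threat level is raised) amounts to simple conditional logic, sketched below. The threat-level names are borrowed from the homeland security advisory system, but the mapping of levels to required checks, and the function and variable names, are hypothetical assumptions rather than the actual logic of the GSA pilot.

```python
# Hypothetical sketch of threat-level-dependent building access logic,
# loosely modeled on the access portals described above. Threat levels,
# required checks, and names are illustrative assumptions, not the actual
# logic of the GSA smart card pilot.

REQUIRED_CHECKS = {
    # Baseline: validate the smart card's authenticity data.
    "elevated": ["card_valid"],
    # Higher advisory levels add a PIN and a biometric match.
    "high": ["card_valid", "pin_match"],
    "severe": ["card_valid", "pin_match", "biometric_match"],
}

def grant_entry(threat_level, card_valid, pin_match=False, biometric_match=False):
    """Open the portal only if every check required at this threat level passes."""
    results = {
        "card_valid": card_valid,
        "pin_match": pin_match,
        "biometric_match": biometric_match,
    }
    return all(results[check] for check in REQUIRED_CHECKS[threat_level])

# At an elevated threat level, a valid card alone opens the portal;
# at a severe level, the cardholder must also enter a PIN and match a stored biometric.
print(grant_entry("elevated", card_valid=True))                # True
print(grant_entry("severe", card_valid=True, pin_match=True))  # False (no biometric match)
```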
In considering the implications that information sharing and coordination have for facility protection efforts, it is useful to look at how this practice is being approached governmentwide, at the agency level, and at the individual facility level. At the governmentwide level, DHS is expected to play a critical role in information sharing and coordination in most homeland security areas, including facility protection. In September 2003, we reported that information sharing was critical for DHS to meet its mission of preventing terrorist attacks in the United States, reducing vulnerability to terrorist attacks, and minimizing damage and assisting with recovery if attacks do occur. In 2003, we also reported that to accomplish its mission, DHS needed to access, receive, and analyze law enforcement information, intelligence information, and other threat, incident, and vulnerability information from federal and nonfederal sources and analyze this information to identify and assess the nature and scope of terrorist threats. Furthermore, we reported that DHS should share information both internally and externally with agencies, law enforcement, and first responders. As we testified in September 2003, we have made numerous recommendations to DHS to improve information sharing and coordination to accomplish its homeland security responsibilities. These recommendations involved, for example, incorporating existing information-sharing guidance contained in various national strategies and the information-sharing procedures required by the Homeland Security Act of 2002; establishing a clearinghouse to coordinate the various information-sharing initiatives to eliminate possible confusion and duplication of effort; fully integrating states and cities into a national policy-making process for information sharing and taking steps to provide greater assurance that actions at all levels of government are mutually reinforcing; identifying and addressing perceived barriers to federal information sharing; and using survey methods or related data collection approaches to determine, over time, the needs of private and public organizations for information related to homeland security and to measure progress in improving information sharing at all levels of government. In addition to those recommendations, we identified a need for a comprehensive plan to facilitate information sharing and coordination to protect critical infrastructure in our August 2004 testimony on strengthening information sharing for homeland security. We reported that such a plan could encourage improved information sharing by clearly delineating roles and responsibilities of federal and nonfederal entities, defining interim objectives and milestones, setting time frames for achieving objectives, and establishing performance measures. DHS has concurred with the above recommendations to improve information sharing and coordination and is in various stages of implementing them. These recommendations clearly have implications for the facility protection area because, for example, increased coordination among facility stakeholders would reduce duplicative efforts and reinforce protection strategies. The emphasis on information sharing and coordination is also evident in the National Strategy for Homeland Security and its related strategies to protect critical infrastructure, including federal facilities.
According to the national strategy, successfully protecting facilities will rely on effective information sharing and coordination among multiple entities as part of the nation's broader homeland security efforts. In the related National Strategy for the Physical Protection of Critical Infrastructures and Key Assets, information sharing is a common theme. This strategy calls for the federal government to work with various stakeholders to, among other things, develop processes for visitor screening, assess vulnerabilities, develop construction standards, and implement security technology. With regard to national icon protection, the strategy recommends that Interior work with other agencies, the public, and the private sector to define criticality criteria, assess vulnerabilities, conduct security awareness programs, and collaborate to protect national icons outside the purview of the federal government. Related to dams, the strategy recommends that DHS work with other agencies, dam owners, and local and state officials to assess risks and institute a national dam security program. At the agency level, the agencies we contacted provided several examples of their activities related to information sharing and coordination. These activities are described in table 1. In addition to agencywide efforts, coordination and information sharing are important at the individual facility level. As we have previously reported, protecting federal facilities requires facility security managers to involve multiple organizations to effectively coordinate and share information to prevent, detect, and respond to terrorist attacks. Security managers typically are not aware of potential threats to their facilities and depend on intelligence from other organizations to prevent and/or deter attacks. For example, according to officials from VA, due to limited resources and its lack of an intelligence gathering capability, VA must rely on other agencies to gain threat information. Additionally, security managers have to coordinate and share information with state and local governments to respond to terrorist attacks because they do not have direct access to the range of emergency resources such a response requires. They rely on state and local governments to provide first-responder services such as firefighting, medical personnel, and other emergency services. They also rely on local police and the judicial process to enforce and prosecute violators of the laws and regulations governing the protection of federal facilities. As such, at the individual facility level, security managers are less equipped to make informed decisions about security without effective information sharing and coordination. One way managers at the individual facility level may become better informed is if they take advantage of emerging efforts by the government to disseminate targeted threat information. For example, one recent DHS effort to increase information sharing and coordination among security stakeholders is its Homeland Security Information Network. According to DHS's Web site, this unclassified network consists of Internet, phone, fax, and pager communications systems that provide DHS with constant access to real-time threat information from public and private industries and agencies. DHS can also use the network to send targeted alert notifications and other threat information to states, cities, and others, which can then collect and disseminate this information among those other entities involved in combating terrorism.
A base of locally knowledgeable experts governs and administers the network with the support of DHS regional coordinators. Performance measurement can help achieve broad program goals and improve security at the individual facility level. Our analysis showed a consensus among various stakeholders that performance measurement is a key practice that agencies should follow. Although using performance measurement for facility protection is a practice that—based on our analysis—is in the early stages of development, several initiatives at three levels—governmentwide policy, agency, and facility-specific—demonstrate how performance measurement is being approached in the facility protection area. At the governmentwide policy level, the National Strategy for Homeland Security addresses the threat of terrorism in the United States by organizing the domestic efforts of federal, state, local and private organizations. It aligns and focuses homeland security functions into six mission critical areas, set forth as (1) intelligence and warning, (2) border and transportation security, (3) domestic counterterrorism, (4) protecting critical infrastructures and key assets, (5) defending against catastrophic terrorism, and (6) emergency preparedness and response. As mentioned before in relation to information sharing and coordination, the National Strategy for the Physical Protection of Critical Infrastructures and Key Assets incorporates facility protection efforts and identifies a set of national goals and objectives. The strategy outlines the guiding principles that will underpin the government’s efforts to secure the infrastructures and assets vital to national security, governance, public health and safety, the economy, and public confidence. It also provides a unifying organizational structure and identifies specific initiatives to drive the government’s near-term national protection priorities and inform the resource allocation process. According to the strategy, the strategic objectives that underpin our national critical infrastructure and key asset protection effort include the following: identifying and assuring the protection of those infrastructures and assets that are deemed most critical in terms of national-level public health and safety, governance, economic and national security, and public confidence consequences; providing timely warning and assuring the protection of those infrastructures and assets that face a specific, imminent threat; and assuring the protection of other infrastructures and assets that may become terrorist targets over time by pursuing specific initiatives and enabling a collaborative environment in which federal, state, and local governments and the private sector can better protect the infrastructures and assets they control. These strategies are national in scope, cutting across all levels of government, and involve a large number of organizations and entities including federal, state, local, and private sectors. We have testified that these national strategies are the starting point for federal agencies and that the ultimate measure of this and other strategies’ value will be the extent they are useful as guidance for policy and decision makers in allocating resources. Related to facility protection, the strategic objectives are useful in providing a context and a broader framework for agencies, as they develop agencywide and facility-specific goals and measures to determine if their specific facility protection efforts are achieving desired results. 
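As a simplified illustration of the kind of goal-and-measure structure discussed above, the sketch below shows one way an agency might record facility protection measures against a broader protection goal. The goal statement, measures, and target values are hypothetical examples, not actual agency data.

```python
# Illustrative sketch: recording facility protection measures tied to a broader
# goal, in a GPRA-style structure. All content shown is hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Measure:
    description: str
    target: float
    actual: Optional[float] = None  # filled in when performance data are collected

    def met(self) -> Optional[bool]:
        return None if self.actual is None else self.actual >= self.target

@dataclass
class Goal:
    statement: str
    measures: List[Measure] = field(default_factory=list)

goal = Goal(
    statement="Assure the protection of the facilities most critical to the agency's mission",
    measures=[
        Measure("Percent of critical facilities with a current risk assessment",
                target=100.0, actual=92.0),
        Measure("Percent of recommended countermeasures implemented at high-risk facilities",
                target=90.0),
    ],
)
for m in goal.measures:
    print(f"{m.description}: target {m.target}, met={m.met()}")
```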
At the agency level, we have reported that tying security goals to broader agency mission goals can help federal agencies measure the effectiveness and ensure accountability of their security programs. One tool that agencies can use is the Government Performance and Results Act of 1993 (GPRA). Under GPRA, agencies are to prepare 5-year strategic plans that set the general direction for their efforts. These plans are to include comprehensive mission statements, general and outcome-related goals, descriptions of how those goals will be achieved, identification of external factors that could affect progress, and a description of how performance will be evaluated. Agencies are then to prepare annual performance plans that establish connections between the long-term goals in the strategic plans and the day-to-day activities of program managers and staff. These plans are to include measurable goals and objectives to be achieved by a program activity, descriptions of the resources needed to meet these goals, and a description of the methods used to verify and validate measured values. Finally, GPRA requires that the agency report annually on the extent to which it is meeting its goals and the actions needed to achieve or modify those goals that were not met. GPRA provides a framework under which agencies can identify implementation time lines for facility protection initiatives and track adherence to related budgets. We did not assess the extent to which agencies were using GPRA to develop agencywide facility protection or security-related goals. However, we noted that one agency, the Defense Threat Reduction Agency (DTRA) at DOD, ties its strategic security goals to GPRA. DTRA's 2003 strategic plan contains most of the elements in a strategic plan developed using GPRA standards. DTRA plays a key role in addressing the threats posed by weapons of mass destruction (WMD), and its specialized capabilities and services are used to support civilian agencies' efforts to address WMD threats, particularly the efforts of DOE and DHS. DTRA also provides training for emergency personnel responding to WMD incidents and assesses the vulnerability of personnel and facilities to WMD threats. DTRA's strategic plan lays out the agency's five goals, which serve as the basis of its individual units' annual performance plans: (1) deter the use and reduce the impact of WMD, (2) reduce the present threat, (3) prepare for future threats, (4) conduct the right programs in the best manner, and (5) develop people and enable them to succeed. These long-term goals are further broken down into four or five objectives, each with a number of measurable tasks. These tasks have projected completion dates and identify the DTRA unit responsible for the specific task. For example, under the goal "deter the use and reduce the impact of WMD" is the objective "support the nuclear force." A measurable task under this objective is to work with DOE to develop support plans for potential resumption of underground nuclear weapons effects testing. The technology development unit in DTRA was expected to complete this task by the fourth quarter of fiscal year 2004. Our work showed examples where federal agencies were testing security measures by conducting inspections and assessments to ensure that adequate levels of protection are employed. For example, officials at Interior said that after September 11, one of its bureaus began conducting full-risk assessments at all of its facilities, in order of importance.
As part of one of its regularly scheduled assessments at one location, Interior received assistance from DTRA, which performed an assessment of vulnerabilities. According to Interior officials, DTRA officials looked at whether various types of attack would affect the mission capabilities of the location. After the assessment, DTRA made recommendations to Interior officials for strengthening security. Consequently, Interior officials took actions to improve security and scheduled plans for follow-up. In another example, the Interior IG reported in August 2003 on its security assessment of National Park Service (NPS) parks. During the review, Interior IG officials identified some serious deficiencies in the overall security program and made recommendations to remedy these problems. For example, the IG's assessment revealed that necessary security enhancements were delayed or wholly disregarded, that management officials lacked situational awareness, and that other officials lacked the expertise and resources to effectively assess, determine, and prioritize necessary security actions. This type of active testing is useful in exposing vulnerabilities and developing countermeasures. According to DOE officials, DOE's Performance Assurance Program requires that performance testing determine the effectiveness of facility protection systems and programs. DOE conducts inspections to ensure that proper levels of protection are consistent with standards it has established. Assessments are made of the sites' ability to prevent unacceptable, adverse impact on national security or on the health and safety of DOE and contract employees, the public, or the environment. The adequacy of safeguards and security measures is then validated through various means such as surveys, periodic facility self-assessments, program reviews and inspections, and assessments. In addition to testing facility access control through inspections and site surveys, we found examples of security programs that tested the effectiveness of physical security measures such as structural enhancements, physical barriers, and blast-resistant windows. Blast resistance in buildings is generally provided by passive features such as additional reinforcement and connections in the structural frame for increased ductility, composite fiber wraps to prevent shattering of columns and slabs, and high-performance glazing materials that resist blast pressures. In both field tests and experience (for example, the attack on the Pentagon), these measures have been quite effective in reducing the devastating effects of deliberate explosions and, consequently, reducing casualties as well. In March 2004, a panelist from DOD at the NAS symposium indicated that blast testing is also important in the prevention of injuries resulting from progressive collapse of buildings and flying debris. He reported that 87 percent of the deaths occurred in the collapsed portion of the Alfred P. Murrah Federal Building in Oklahoma City, and only 5 percent of the deaths occurred in the uncollapsed portion of the building. Furthermore, another panelist noted that 70 of the over 2,000 publicly reported terrorist incidents worldwide since 1970 were directed at buildings. Most of these have involved large vehicle bombs, incendiary bombs, or rocket-propelled grenades. Training exercises and drills are also useful in assessing preparedness.
We have reported that effective security also entails having a well-trained staff that follows and enforces policies and procedures. In these reports, we found that breaches in security resulting from human error are more likely to occur if personnel do not understand the technologies, the risks, and the policies put in place to mitigate them. Furthermore, good training and practice are essential to successfully implementing policies by ensuring that personnel exercise good judgment in following security procedures. Presidential Decision Directive (PDD) 39 requires key federal agencies to maintain well-exercised capabilities for combating terrorism. Exercises test and validate policies and procedures, test the effectiveness of response capabilities, increase the confidence and skill levels of personnel, and identify strengths and weaknesses in responses before they arise in actual incidents. Counterterrorism exercises include tabletop activities, in which agency officials discuss scenarios around a table or in another similar setting, and field exercises, in which agency leadership and operational units actually deploy to practice their skills and coordination in a realistic field setting. Overall, training, as it relates to facility protection, provides decision makers with data on performance in various scenarios. Training is also discussed later in this report in relation to strategic human capital management. Excess and underutilized real property at federal agencies is a long-standing and pervasive problem that has implications for the facility protection area. Along with the need to secure facilities against the threat of terrorism, excess property and the need to realign the federal real property inventory were among the reasons GAO designated federal real property as a high-risk area in January 2003. To the extent that agencies are expending resources to maintain and protect facilities that are not needed, funds available to protect critical assets may be lessened. Our past work showed examples where funds spent to maintain and protect excess property were significant. For example, we reported in January 2003 that DOD estimates it is spending $3 billion to $4 billion each year maintaining facilities that are not needed. In another example, costs associated with excess DOE facilities, primarily for security and maintenance, were estimated by the DOE IG in April 2002 to exceed $70 million annually. One building that illustrates this problem is the former Chicago main post office. In October 2003, we testified that this building, a massive 2.5 million square foot structure located near the Sears Tower, is vacant and costing USPS $2 million annually in holding costs. It is likely that other agencies that continue to hold excess or underutilized property are also incurring significant holding costs for services including security and maintenance. Given the need to realign the federal real property inventory so that it better reflects agencies' missions, agencies that can overcome this problem may reap benefits in the facility protection area. That is, funds no longer spent securing and maintaining excess property could be put to other uses, such as enhancing protection at critical assets that are tied to agencies' missions. VA's Capital Asset Realignment for Enhanced Services (CARES) initiative, which VA started in October 2000, is an example where a realignment effort is under way.
In the mid-1990s, VA began shifting its role from that of a traditional hospital-based provider of medical services to that of an integrated delivery system that emphasizes a full continuum of care, with a significant shift from inpatient to outpatient services. Subsequently, VA began the CARES initiative so that it could reduce its large inventory of buildings, many of which are underutilized or vacant. The administration's effort to "rightsize" the nation's overseas presence demonstrates how giving consideration to security, people, and facilities could be approached as part of an asset realignment framework. During 2000, an interagency effort led by the Department of State began to assess staffing of U.S. embassies and consulates to determine whether there were opportunities to improve mission effectiveness and reduce security vulnerabilities and costs by relocating staff. This process, referred to as rightsizing, was initiated in response to the November 1999 recommendations of the Overseas Presence Advisory Panel (OPAP). In the aftermath of the August 1998 bombings of U.S. embassies in Africa, OPAP determined that overseas staffing levels had not been adjusted to reflect changing missions and requirements; thus, some embassies and consulates were overstaffed, and some were understaffed. The rightsizing framework provides a systematic approach for assessing workforce size and identifying options for rightsizing, both at the embassy level and for making related decisions worldwide. It links staffing levels to three critical elements of overseas diplomatic operations: (1) physical/technical security of facilities and employees, (2) mission priorities and requirements, and (3) cost of operations. The first element includes analyzing the security of embassy buildings, the use of existing secure space, and the vulnerabilities of staff to terrorist attack. The second element focuses on assessing embassy priorities and the staff's workload requirements. The third element involves developing and consolidating cost information from all agencies at a particular embassy to permit cost-based decision making. Unlike an analysis that considers the elements in isolation, the rightsizing framework encourages consideration of a full range of options, along with the security, mission, and cost trade-offs. With this information, decision makers would then be in a position to, for example, determine whether rightsizing actions are needed either to add staff, reduce staff, or change the staff mix at an embassy. Options for reducing staff could include outsourcing functions or relocating functions to the United States or to regional centers. In May 2002, we testified that the use of this approach for the U.S. embassy in Paris was successful in identifying security concerns and finding alternative locations for staff, such as in the United States or other cities in Europe. In April 2003, we reported that the rightsizing framework could be applied at U.S. embassies in developing countries. We later testified in April 2003 that OMB should expand the use of the rightsizing framework and that State should adopt additional measures to ensure that U.S. agencies take a systematic approach to assessing workforce size that considers security, mission, and cost factors. GAO also recommended that State develop guidance on a systematic approach for developing and vetting staffing projections for new diplomatic compounds. State and OMB agreed with our recommendations.
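A highly simplified sketch of how the framework's three elements might be weighed against one another is shown below. The candidate staffing options, scores, and weights are invented for illustration; an actual rightsizing analysis is qualitative, embassy-specific, and far richer than a single weighted score.

```python
# Hypothetical scoring of rightsizing options against the framework's three
# elements (security, mission, cost). Higher scores are better; all values
# and weights are assumptions for demonstration only.
OPTIONS = {
    "keep current staffing":      {"security": 2, "mission": 5, "cost": 2},
    "relocate functions to U.S.": {"security": 4, "mission": 3, "cost": 4},
    "move staff to regional hub": {"security": 4, "mission": 4, "cost": 3},
}
WEIGHTS = {"security": 0.4, "mission": 0.4, "cost": 0.2}

def weighted_score(scores: dict) -> float:
    """Combine element scores into a single weighted total."""
    return sum(WEIGHTS[element] * value for element, value in scores.items())

ranked = sorted(OPTIONS.items(), key=lambda item: weighted_score(item[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):.2f}")
```

Even in this toy form, the point of the framework comes through: options are compared across security, mission, and cost together rather than on any one element in isolation.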
Figure 5 illustrates the rightsizing process, which integrates security, people, and mission considerations in determining how facilities are used. The strategic management of human capital is a key practice that can maximize the government's performance and ensure the accountability of its efforts related to homeland security. People define an organization's culture, drive its performance, and embody its knowledge base. They are the source of all knowledge, process improvement, and technological advancements. As the government's homeland security efforts evolve, federal agencies involved with the intelligence community and other homeland security organizations will need the most effective human capital systems to reach projected security goals. For facility protection, as with other areas related to homeland security, it is especially critical for agencies to recognize the "people" element and implement strategies to help individuals maximize their full potential. Also, it is important for agencies to be well equipped to recruit and retain high-performing security and law enforcement professionals. Training is also essential to successfully implementing policies by ensuring that personnel are well exercised and exhibit good judgment in following security procedures. As we have reported, high-performing organizations align human capital approaches with missions and goals, and human capital strategies are designed, implemented, and assessed based on their ability to achieve results and contribute to an organization's mission. This includes aligning their strategic planning and key institutional performance with unit and individual performance management, as well as implementing reward systems. We reported in March 2003 that federal agencies can develop effective performance management systems by implementing a selected, generally consistent, set of key practices. These key practices helped public sector organizations both in the United States and abroad create a clear linkage or "line of sight" between individual performance and organizational success and, thus, transform their cultures to be more results-oriented, customer-focused, and collaborative in nature. These key practices, which have applicability to agencies' management of facility protection employees and contractors, include the following: Align individual performance expectations with organizational goals. An explicit alignment helps individuals see the connection between their daily activities and organizational goals. Connect performance expectations to crosscutting goals. Placing an emphasis on collaboration, interaction, and teamwork across organizational boundaries helps strengthen accountability for results. Provide and routinely use performance information to track organizational priorities. Individuals use performance information to manage during the year, identify performance gaps, and pinpoint improvement opportunities. Require follow-up actions to address organizational priorities. By requiring and tracking follow-up actions on performance gaps, organizations underscore the importance of holding individuals accountable for making progress on their priorities. Use competencies to provide a fuller assessment of performance. Competencies define the skills and supporting behaviors that individuals need to effectively contribute to organizational results. Link pay to individual and organizational performance.
Pay, incentive, and reward systems that link employee knowledge, skills, and contributions to organizational results are based on valid, reliable, and transparent performance management systems with adequate safeguards. Make meaningful distinctions in performance. Effective performance management systems strive to provide candid and constructive feedback and the necessary objective information and documentation to reward top performers and deal with poor performers. Involve employees and stakeholders to gain ownership of performance management systems. Early and direct involvement helps increase employees' and stakeholders' understanding and ownership of the system and belief in its fairness. Maintain continuity during transitions. Because cultural transformations take time, performance management systems reinforce accountability for change management and other organizational goals. Our analysis showed that several GAO and IG reports discuss the importance of strategic management of human capital in relation to homeland security functions, including facility protection. For example, in June 2004 we recommended that DHS develop a transformation strategy for FPS to resolve challenges related to, among other things, the change in organizational culture and responsibilities that FPS has faced since it was transferred from GSA to DHS. DHS concurred with our recommendations. Furthermore, we testified on the importance of making changes to human capital management related to improving intelligence gathering at the CIA for security purposes. Also, the DOE IG recommended that DOE standardize annual refresher training requirements for security forces and conduct reviews of safeguards and security training programs departmentwide to ensure compliance with the agency training plan. The Director, Office of Safeguards and Security at DOE, agreed with the recommendation. Successfully training employees on using emerging security technologies is also an important element in facility protection (see fig. 6). Installing the latest security technology alone cannot guarantee effective facility protection if security personnel have not been adequately trained to use the technologies properly. Training is particularly essential if the technology requires personnel to master certain knowledge and skills to operate it, such as detecting concealed objects in generated X-ray images. Without adequate training in how the technology works, the security system will likely be less effective. This is especially important in assessing risks and vulnerabilities in facility protection. According to DHS officials, FPS inspectors are trained to conduct risk assessments and to evaluate the effectiveness of previously installed facility countermeasures. Trained FPS inspectors articulate their findings to a building security committee for approval and funding, after which FPS implements the necessary countermeasures. At the NAS symposium, a security consultant from the private sector said that the effectiveness of a risk management approach depends on the involvement of experienced and professional security personnel and that there is an increased chance that personnel could omit major steps in the risk management process if they are not well trained in applying risk management. As the emphasis on protecting people, property, and information has increased, so has the demand for professional security practitioners.
It is widely recognized that there is a need for competent professionals who can effectively manage complex security programs that are designed to reduce threats to people and the assets of corporations, governments, and public and private institutions. To meet these needs, we noted an effort by one organization to provide standard certifications for security professionals. ASIS International is an international organization for professionals responsible for security, including managers and directors of security. According to the ASIS International Web site, the organization is dedicated to increasing the effectiveness and productivity of security practices by developing educational programs and materials that address broad security concerns. ASIS International has put together a training curriculum where security professionals, upon completing requirements, can receive certifications to become Certified Protection Professionals, Professional Certified Investigators, or Physical Security Professionals (PSP). The PSP designation is the certification for those whose primary responsibility is to conduct threat surveys; design integrated security systems that include equipment, procedures, and people; or install, operate, and maintain those systems. We did not assess the training and certifications offered by ASIS International. Nonetheless, seeking certifications for security staff may allow agencies to better ensure that staff are adequately trained and to make comparisons with other organizations and the security industry. During our review, we noted that agencies face obstacles in implementing the six key practices that we have identified. For example, determining which assets to protect by establishing and sustaining a comprehensive risk management approach is a significant undertaking for federal agencies. The quality information needed for the risk management process is often difficult to obtain and analyze. Another obstacle is keeping risk assessments up to date as threat levels change and resources for this activity are stretched. As we pointed out earlier in relation to ISC's challenges, our January 2003 high-risk report on federal real property highlighted that some major real property-holding agencies face obstacles in developing quality management data on their real property assets. Also, in April 2002, we reported that GSA's worldwide inventory of property contained data that were unreliable and of limited usefulness. This inventory is the only central source of descriptive data on the makeup of the federal real property inventory. In addition to data reliability problems, we have reported that some agencies face obstacles in implementing and leveraging security investments. As we testified in 2002, the capabilities of technology can be overestimated. We found that by overestimating technology's capabilities, security officials risk falling into a false sense of security and relaxing their vigilance. Furthermore, technology cannot compensate for human failure. Instead, technology and people need to work together as part of an overall security process in which security personnel are properly trained to use the technology. The federal government also faces systemic obstacles regarding information sharing and coordination. We testified in August 2004 that there is a need for a comprehensive plan to facilitate information sharing and coordination in the protection of critical infrastructure.
However, DHS has not yet developed a plan that describes how it will carry out its overall information sharing responsibilities and relationships. In commenting on this report, DHS indicated in its technical comments that such an information-sharing plan is being developed. Another obstacle is developing productive information sharing relationships among federal, state, and local governments and the private sector. Improving the federal government's capabilities to analyze incident, threat, and vulnerability information from numerous sources could assist in more effectively disseminating information to federal, state, local, and private entities. Not sharing information on threats and on actual incidents experienced by others can hinder the ability of agencies to identify new trends, better understand risks, and determine what preventive measures to implement. As we reported in August 2003, information sharing initiatives implemented by states and cities were neither effectively coordinated with those of federal agencies, nor were they coordinated within and between federal entities. At the agencywide level, we have reported that agencies face obstacles in developing meaningful, outcome-oriented performance goals and collecting performance data that can be used to assess the true impact of facility security. Performance measurement under GPRA typically focuses on regularly collected data on the level and type of program activities, the direct products and services delivered by the program, and the results of those activities. For programs that have readily observable results or outcomes, performance measurement may provide sufficient information to demonstrate program results. In some programs, such as facility security, however, outcomes are not quickly achieved or readily observed, or their relationship to the program is often not clearly defined. In such cases, more in-depth program evaluations may be needed, in addition to performance measurement, to examine the extent to which a program is achieving its objectives. This approach is more challenging and represents a more advanced level of performance measurement. Significant long-standing obstacles also hinder agencies' ability to realign their asset portfolios. As we have reported, the complex legal and budgetary environment in which real property managers operate has a significant impact on real property decisionmaking and often does not lead to businesslike outcomes. Resource limitations—including those related to facility protection—often prevent agencies from addressing real property needs from a strategic portfolio perspective. When available funds for capital investment are limited, Congress must weigh the need for new, modern facilities against the need for renovation, maintenance, and disposal of existing facilities, the latter of which often gets deferred. Facility protection often falls within this latter category. Until these competing factors are mitigated, agencies face budgetary and legal disincentives when trying to realign their assets. State's experience to date with rightsizing its overseas presence demonstrated some of the challenges in realigning real property assets.
We reported in November 2003 that State’s efforts to replace facilities at risk of terrorist or other attacks have experienced project delays due to changes in project design and security requirements, difficulties hiring appropriate American and local labor with the necessary clearances and skills, differing site conditions, and unforeseen events such as civil unrest. Finally, we have reported that agencies continue to face obstacles in implementing and maintaining a strategic approach to human capital. Specifically, agencies continue to face challenges in promoting (1) leadership; (2) strategic human capital planning; (3) acquiring, developing, and retaining talent; and (4) results-oriented organizational cultures in an effort to strategically manage human capital. Although some progress has been made since we designated human capital management as high-risk in 2001, today’s federal human capital strategies are not appropriately constituted to meet current and emerging challenges, especially in light of the new security challenges facing the government. Human capital challenges are relevant to the facility protection area because security is a people-intensive activity involving active management and response, and there is a high dependency on law enforcement and security officers, as well as contract guards. Given these obstacles, and the need to overcome them, agencies would benefit from having a set of key practices to guide their facility protection efforts. GAO has advocated the use of guiding principles in other areas, including human capital management, information technology, and capital investment. ISC, in serving as the central coordinator for agencies’ efforts, is uniquely positioned to promote key practices in facility protection and could use our work as a starting point. In fact, ISC views one of its primary roles as being the nucleus of communication on key practices and lessons learned for the facility protection community in the federal government and has embraced this responsibility. After having limited success prior to the September 11 terrorist attacks, ISC has made progress in recent years related to its responsibilities to develop policies and standards, as well as those related to information sharing. Although this progress is encouraging, more work remains to fulfill ISC’s major responsibilities related to ensuring agency compliance and overseeing the implementation of various policies and standards. Fulfilling its new role in reviewing and approving agencies’ physical security plans for the administration represents a major step toward meeting its compliance and oversight responsibilities. Furthermore, because DHS now has responsibility for ISC, the department also has a responsibility, in keeping with the executive order under which ISC was established, to ensure that ISC has adequate resources to accomplish its mission. Given the challenges ISC faces, its new responsibility related to HSPD-7 for reviewing agencies’ physical security plans, and the need to sustain progress it has made in fulfilling its responsibilities, ISC would benefit from having a clearly defined action plan for achieving results. 
Such a plan, which ISC lacks, could be used to (1) provide DHS and other stakeholders with detailed information on, and a rationale for, its resource needs; (2) garner and maintain the support of ISC member agencies, DHS management, OMB, and Congress; (3) identify implementation goals and measures for gauging progress in fulfilling all of its responsibilities; and (4) propose strategies for addressing the challenges ISC faces. Such a plan could incorporate the strategy for ensuring compliance with facility protection standards that is required under ISC’s executive order but has not yet been developed. Without an overall action plan for meeting this and other responsibilities, ISC’s strategy and time line for these efforts remain unclear. Since September 11, the focus on protecting the nation’s critical infrastructure has been heightened considerably. At the individual building level, agencies have improved perimeter security by, for example, installing concrete bollards and are routinely screening vehicles and people entering federal property. In looking at facility protection issues more broadly, several key practices have emerged that include allocating resources using risk management, leveraging security technology, sharing information and coordinating protection efforts with other stakeholders, and measuring program performance and testing security initiatives. In addition, other key practices that have clear implications for the facility protection area include realigning real property assets and strategically managing human capital. Because agencies face various obstacles and would benefit from evaluating their actions, it would be useful for them to have a framework of key practices in the facility protection area that could guide their efforts, and ISC is well positioned to lead this initiative as the government’s central forum for exchanging information and guidance on facility protection. We are making two recommendations—one to the Secretary of Homeland Security and one to the Chair of ISC. Specifically, we recommend that the Secretary of Homeland Security direct the Chair of ISC to develop an action plan that identifies resource needs, implementation goals, and time frames for meeting ISC’s ongoing and yet-unfulfilled responsibilities. The action plan should also be used to propose strategies for addressing the range of challenges ISC faces. Such an action plan would provide a road map for DHS to use in developing resource priorities and for ISC to use in communicating its planned actions to agencies and other stakeholders, including Congress. Furthermore, we recommend that the Chair of ISC, with input from ISC member agencies, consider using our work as a starting point for establishing a framework of key practices that could guide agencies’ efforts in the facility protection area. This initiative could subsequently be used by agencies to evaluate their actions, identify lessons learned, and develop strategies for overcoming obstacles. We provided a draft of this report to DHS, State, GSA, DOE, Interior, DOD, VA, and USPS for their official review and comment. DHS concurred with the report’s overall conclusions and said it would implement the recommendations. In its comments, DHS provided information on ongoing initiatives related to information sharing and coordination. DHS’s comments can be found in appendix V. DHS also provided separate technical comments, which we incorporated where appropriate. 
State provided additional information on its activities as they relate to the key practices, which we incorporated into the final report where appropriate. State's comments can be found in appendix VI. GSA, DOE, and Interior concurred with the report's findings and recommendations. Comments from GSA, Interior, and DOE can be found in appendixes VII, VIII, and IX, respectively. DOD, VA, and USPS notified us that they had no comments on this report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested Congressional Committees and the Secretaries of Defense, Energy, the Interior, Homeland Security, State, and Veterans Affairs; the Administrator of GSA; and the Postmaster General of the U.S. Postal Service. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me on (202) 512-2834 or at goldsteinm@gao.gov or David Sausville, Assistant Director, on (202) 512-5403 or at sausvilled@gao.gov. Other contributors to this report were Matt Cail, Roshni Dave, Joyce Evans, Brandon Haller, Anne Izod, Susan Michal-Smith, and Cynthia Taylor. Our objectives were to (1) assess the Interagency Security Committee's (ISC) progress in fulfilling its responsibilities and (2) identify key practices in protecting federal facilities and any related implementation obstacles. To assess ISC's progress in fulfilling its responsibilities, we interviewed the Executive Director of ISC; analyzed ISC publications and other documents; considered prior GAO work; and reviewed various laws and policies, including the Homeland Security Act of 2002. We also reviewed the executive order that established ISC, a subsequent executive order that amended it in connection with the transfer of ISC's function to DHS, relevant homeland security policy directives, and minutes from ISC meetings. As part of our interviews with ISC's Executive Director, we focused on the challenges ISC faces in meeting its major responsibilities. To identify key practices for facility protection and any related implementation obstacles, we conducted a comprehensive literature review of GAO and Inspector General (IG) reports, interviewed officials from the major property-holding agencies, and validated our results using an expert symposium on facility protection. For the analysis of GAO and IG reports, we systematically analyzed reports issued between January 1, 1995, and March 1, 2004. We chose 1995 as a starting point to coincide with the year of the terrorist attack on the Alfred P. Murrah Federal Building in Oklahoma City, Oklahoma, and the publication of the Justice Department's minimum-security standards. We identified reports by searching GAO and IG online databases and consulting with GAO and IG contacts using several search terms such as facility security, terrorism, and homeland security. From this initial selection, we identified over 450 reports related to homeland security, which we subsequently reduced to 170 reports that were related to facility protection. Thirty-six of the reports were from IG offices at the seven agencies that control over 85 percent of federal facilities in terms of building square footage.
These agencies included the Departments of Defense (DOD), Energy (DOE), the Interior (Interior), Veterans Affairs (VA) and State (State); the U.S. Postal Service (USPS); and the General Services Administration (GSA). We systematically reviewed these reports using a data collection instrument we designed to identify and group key practices according to theme or type of activity. In doing our work, we also gave consideration to other GAO reports on governmentwide management issues that, in our judgment, had implications for the facility protection area. We also considered new GAO reports that were issued after the selection time period that were relevant. For the purposes of this review, we did not assess the extent to which agencies were using GPRA to develop agencywide facility protection or security-related goals. Also, for the purpose of this review, we did not assess the training and certifications offered by ASIS International. We also interviewed officials at the major property-holding agencies, including DOD, DOE, Interior, VA, State, USPS, and GSA to obtain updated information on their facility protection activities and their use of key practices. We then contracted with the National Academy of Sciences (NAS) to convene a symposium with 21 security experts from the private sector, government, academia, and foreign countries to validate the practices and gain further insights. Using their judgment, NAS officials selected security experts based on their broad expertise and backgrounds in building security programs. Appendix II contains the symposium agenda and identifies the experts. As a result, for the purpose of this review, we defined key practices as those activities that, on the basis of our analysis, were recommended by GAO and others, acknowledged by agencies, and validated by experts in the area. It is important to note that the key practices identified in this report may not be an exclusive list and may not necessarily represent all key practices for the protection of federal facilities. In addition, new reports and other information may have become available since we completed the analysis. Also, ISC has identified GAO as an associate member, which includes the ability to serve on ISC subcommittees. While associate members of ISC have this ability, no GAO staff member serves on any subcommittee. Furthermore, no GAO staff member actively participates in ISC meetings or contributes to decisions. Rather, GAO’s role on ISC is only to observe proceedings and obtain ISC information distributed to the other ISC members. Because of GAO’s observational role, our independence in making recommendations involving ISC and in completing this engagement was maintained. ISC, agency officials, and other experts provided much of the data and other information used in this report. We noted cases where these officials provided testimonial evidence, and we were not always able to obtain documentation that would substantiate the testimonial evidence they provided. In cases where officials provided their views and opinions on various issues within the context of speaking for the organization, we corroborated the information with other officials. Overall, we found no discrepancies with these data and, therefore, determined that they were sufficiently reliable for the purpose of this report. We requested official comments on this report from DHS, State, GSA, Interior, DOE, DOD, VA, and USPS. Appendixes V through IX contain comments we received from DHS, State, GSA, Interior, and DOE, respectively. 
We received State’s comments on November 12, 2004. DOD, VA, and USPS had no comments. Responsibilities Related to Developing Policies and Standards Establish policies for security in and protection of federal facilities. Develop and evaluate security standards for federal facilities. Assess technology and information systems as a means of providing cost-effective improvements to security in federal facilities. Develop long-term construction standards for those locations with threat levels or missions that require blast-resistant structures or other specialized security requirements. Evaluate standards for the location of, and special security related to, day care centers in federal facilities. May 2001: Issued Security Design Criteria for New Federal Office Buildings and Major Modernization Projects (Security Design July 2001: Issued Minimum Standards for Federal Building Access Procedures. June 2003: Issued ISC Information Document on Escape Hoods. October 2003: Issued update of ISC Security Design Criteria. Currently developing physical security requirements for HSPD-12 and the federal credentialing program. In 1997, ISC disseminated guidance on entry security technology for member agencies’ buildings with high security designations. Provided input in smart card development process for federal government. Integrated expert opinions from engineering and architectural disciplines and included technology expert advice on blasting and biochemical threats in the most recent update of ISC Security Design Criteria for 2004. July 2003: Issued Security Standards for Leased Space. In its review of the latest ISC security design criteria update, the ISC long-term construction team will look into security needs at child care centers (no actions implemented to date). Responsibilities Related to Ensuring Compliance and Overseeing Implementation of Policies and Standards Develop a strategy for ensuring compliance with standards. Oversee the implementation of appropriate security measures in federal facilities. According to ISC’s Executive Director, ISC does not have the necessary resources to develop a compliance process—ISC has requested additional funding and resources for the fiscal year 2006 budget (no actions implemented to date). As reviewer of agency physical security plans under HSPD-7, ISC has not been able to develop a scoring process to review the plans. Furthermore, ISC will not meet the November 2004 deadline for completing agency reviews and is working with OMB and DHS on this issue. Responsibilities Related to Encouraging Information Sharing Encourage agencies with security responsibilities to share security-related intelligence in a timely and cooperative manner. Assist in developing and maintaining a centralized security database of all federal facilities. April 2003: Appointed a full-time Executive Director. Since September 11, 2001, ISC has expanded its membership and outreach efforts by adding associate member agencies that can provide input but are not listed in Executive Order 12977. In recent years, GAO has consistently advocated the use of a risk management approach as an iterative analytical tool to help implement and assess responses to various national security and terrorism issues. 
Although applying risk management principles to facility protection can take on various forms, our past work showed that most risk management approaches generally involve identifying potential threats, assessing vulnerabilities, identifying the assets that are most critical to protect in terms of mission and significance, and evaluating mitigation alternatives for their likely effect on risk and their cost. We have concluded that without a risk management approach, there is little assurance that programs to combat terrorism are prioritized and properly focused. Risk management principles acknowledge that while risk cannot be eliminated, enhancing protection from known or potential threats can help reduce it. Drawing on this precedent, we compiled a risk management framework—outlined below—to help assess the U.S. government's response to homeland security and terrorism risk. This framework, which we have used to assess the Department of Homeland Security's programs to target oceangoing cargo containers for inspection, also has applicability to protecting federal facilities. For purposes of the risk management framework, we used the following definitions: Risk—an event that has a potentially negative impact, and the possibility that such an event will occur and adversely affect an entity's assets and activities and operations, as well as the achievement of its mission and strategic objectives. As applied to the homeland security context, risk is most prominently manifested as "catastrophic" or "extreme" events related to terrorism, i.e., those involving more than $1 billion in damage or loss and/or more than 500 casualties. Risk management—a continuous process of managing, through a series of mitigating actions that permeate an entity's activities, the likelihood of an adverse event happening and having a negative impact. In general, risk is managed as a portfolio, addressing entity-wide risk within the entire scope of activities. Risk management addresses "inherent," or pre-action, risk (i.e., risk that would exist absent any mitigating action) as well as "residual," or post-action, risk (i.e., the risk that remains even after mitigating actions have been taken). The risk management framework—which is based on the proposition that a threat to a vulnerable asset results in risk—consists of the following components: Internal (or implementing) environment—the internal environment is the institutional "driver" of risk management, serving as the foundation of all elements of the risk management process. The internal environment includes an entity's organizational and management structure and processes that provide the framework to plan, execute, and control and monitor an entity's activities, including risk management. Within the organizational and management structure, an operational unit that is independent of all other operational (business) units is responsible for implementing the entity's risk management function. This unit is supported by and directly accountable to an entity's senior management. For its part, senior management (1) defines the entity's risk tolerance (i.e., how much risk is an entity willing to assume in order to accomplish its mission and related objectives) and (2) establishes the entity's risk management philosophy and culture (i.e., how an entity's values and attitudes view risk and how its activities and practices are managed to deal with risk).
The operational unit (1) designs and implements the entity's risk management process and (2) coordinates internal and external evaluation of the process and helps implement any corrective action.
Threat (event) assessment—threat is defined as a potential intent to cause harm or damage to an asset (e.g., natural environment, people, manmade infrastructures, and activities and operations). Threat assessments consist of the identification of adverse events that can affect an entity. Threats might be present at the global, national, or local level, and their sources include terrorists and criminal enterprises. Threat information emanates from "open" sources and intelligence (both strategic and tactical). Intelligence information is characterized as "reported" (or raw) and "finished" (fully fused and analyzed).
Criticality assessment—criticality is defined as an asset's relative importance. Criticality assessments identify and evaluate an entity's assets based on a variety of factors, including the importance of its mission or function, the extent to which people are at risk, or the significance of a structure or system in terms of, for example, national security, economic activity, or public safety. Criticality assessments are important because they provide, in combination with the framework's other assessments, the basis for prioritizing which assets require greater or special protection relative to finite resources.
Vulnerability assessment—vulnerability is defined as the inherent state (either physical, technical, or operational) of an asset that can be exploited by an adversary to cause harm or damage. Vulnerability assessments identify these inherent states and the extent of their susceptibility to exploitation, relative to the existence of any countermeasures.
Risk assessment—risk assessment is a qualitative and/or quantitative determination of the likelihood (probability) of occurrence of an adverse event and the severity, or impact, of its consequences. Risk assessments include scenarios under which two or more risks interact, creating greater or lesser impacts.
Risk characterization—risk characterization involves designating risk as, for example, low, medium, or high (other scales, such as numeric, may also be used). Risk characterization is a function of the probability of an adverse event occurring and the severity of its consequences. Risk characterization is the crucial link between assessments of risk and the implementation of mitigation actions, given that not all risks can be addressed because resources are inherently scarce; accordingly, risk characterization forms the basis for deciding which actions are best suited to mitigate the assessed risk.
Mitigation evaluation—mitigation evaluation is the identification of mitigation alternatives and the assessment of their effectiveness. The alternatives should be evaluated for their likely effect on risk and their cost.
Mitigation selection—mitigation selection involves a management decision on which of the mitigation alternatives should be implemented, taking into account risk, costs, and the effectiveness of the alternatives. Selection among mitigation alternatives should be based upon preconsidered criteria. There are as yet no clearly preferred selection criteria, although potential factors might include risk reduction, net benefits, equality of treatment, or other stated values.
Mitigation selection does not necessarily involve prioritizing all resources to the highest-risk area, but rather attempting to balance overall risk against available resources.
Risk mitigation—risk mitigation is the implementation of mitigation actions, in priority order and commensurate with assessed risk; depending on its risk tolerance, an entity may choose not to take any action to mitigate risk (this is characterized as risk acceptance). If the entity does choose to take action, such action falls into three categories: (1) risk avoidance (exiting activities that expose the entity to risk), (2) risk reduction (implementing actions that reduce the likelihood or impact of risk), and (3) risk sharing (implementing actions that reduce likelihood or impact by transferring or sharing risk). In each category, the entity implements actions as part of an integrated "systems" approach, with built-in redundancy to help address residual risk (the risk that remains after actions have been implemented). The systems approach consists of taking actions in personnel (e.g., training, deployment), processes (e.g., operational procedures), technology (e.g., software or hardware), infrastructure (e.g., institutional or operational—such as port configurations), and governance (e.g., management and internal control and assurance). In selecting actions, the entity assesses their costs and benefits, weighing the amount of risk reduction against the cost involved, and identifies potential financing options for the actions chosen.
Monitoring and evaluation of risk mitigation—monitoring and evaluation of risk mitigation entails the assessment of the functioning of actions against strategic objectives and performance measures in order to make necessary changes. Monitoring and evaluation includes, where and when appropriate, peer review, testing, and validation; an evaluation of the impact of the actions on future options; and identification of unintended consequences that, in turn, would need to be mitigated. Monitoring and evaluation helps ensure that the entire risk management process remains current and relevant and reflects changes in (1) the effectiveness of the actions and (2) the risk environment in which the entity operates—risk is dynamic and threats are adaptive. The risk management process should be repeated periodically, restarting the "loop" of assessment, mitigation, and monitoring and evaluation.
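To make these components concrete, the following minimal sketch (in Python) works through the characterization and mitigation-selection steps for a handful of hypothetical assets and alternatives. It is an illustration of the general approach only; the 1-to-5 scales, score thresholds, assets, costs, and mitigation options are invented assumptions, not values drawn from GAO's framework, ISC guidance, or any agency data.

```python
# Illustrative sketch only: a toy walk-through of the framework's risk
# characterization and mitigation-selection steps. All scales, thresholds,
# assets, and alternatives below are invented for illustration.

def characterize(likelihood, severity):
    """Score risk as likelihood x severity (each rated 1-5) and label the result."""
    score = likelihood * severity
    if score >= 15:
        level = "high"
    elif score >= 8:
        level = "medium"
    else:
        level = "low"
    return score, level

# Hypothetical assets: (name, likelihood of an adverse event, severity of consequences)
assets = [
    ("Headquarters building", 3, 5),
    ("Regional field office", 2, 3),
    ("Leased warehouse", 4, 2),
]

# Hypothetical mitigation alternatives: (name, risk-score reduction, cost in dollars)
alternatives = [
    ("Perimeter barriers", 6, 2_000_000),
    ("Access control upgrade", 4, 750_000),
    ("Additional guard post", 2, 300_000),
]

for name, likelihood, severity in assets:
    score, level = characterize(likelihood, severity)
    print(f"{name}: score {score}, characterized as {level} risk")

# Mitigation evaluation and selection: rank alternatives by risk reduction per
# dollar, reflecting the framework's point that selection balances overall risk
# against available resources rather than sending every dollar to the top risk.
for name, reduction, cost in sorted(alternatives, key=lambda a: a[1] / a[2], reverse=True):
    print(f"{name}: {reduction} points of reduction for ${cost:,} "
          f"({reduction / cost * 1_000_000:.2f} points per $1 million)")
```

In practice, the scales, thresholds, and selection criteria would come from an entity's own risk tolerance and preconsidered criteria, as the framework notes; the sketch simply shows how characterization and selection connect the assessments to mitigation decisions.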
HOMELAND SECURITY: Further Actions Needed to Coordinate Federal Agencies' Facility Protection Efforts and Promote Key Practices
GSA Public Buildings Service (PBS) Response
The PBS agrees with the findings of the Government Accountability Office (GAO) relating to security issues facing the federal government. PBS also supports the recommendations to the Secretary of the Department of Homeland Security and the Chair of the Interagency Security Committee (ISC). As a member agency of the ISC, GSA will support the initiatives and efforts proposed by the committee.
Reasons GAO stated for conducting the subject audit: 1. Assess the Interagency Security Committee's (ISC) progress in fulfilling its responsibilities. 2. Sharing of information between agencies. 3. In July 2004, ISC became responsible for reviewing federal agencies' physical security plans. 4. ISC lacks an action plan identifying implementation goals, strategy, and timeline.
1. Audit Recommendations to the Secretary of DHS: Direct ISC to develop an action plan that identifies resource needs, goals, and timeframes for meeting its responsibilities, and proposes strategies for addressing the challenges it faces. 2. Audit Recommendations to the Chair of ISC: With input from ISC member agencies, and considering our work as a starting point, establish a set of key practices that could guide agencies' efforts in the facility protection area. This effort could evaluate agency action, identify lessons learned, and develop strategies for overcoming challenges.
U.S. Department of Defense, Office of Inspector General. Interagency Summary Report on Security Controls Over Biological Agents (D-2003-126). Washington, D.C.: August 27, 2003. U.S. Department of Energy, Office of Inspector General. Management of the Nuclear Weapons Production Infrastructure (DOE/IG-0484). Washington, D.C.: September 22, 2000. U.S. Department of Energy, Office of Inspector General. Summary Report on Allegations Concerning the Department of Energy's Site Safeguards and Security Planning Process (DOE/IG-0482). Washington, D.C.: September 28, 2000. U.S. Department of Energy, Office of Inspector General. The U.S. Department of Energy's Audit Follow-up Process (DOE/IG-0447). Washington, D.C.: July 7, 1999. U.S. Department of Energy, Office of Inspector General. Special Audit Report on the Department of Energy's Arms and Military-Type Equipment (IG-0385). Washington, D.C.: February 1, 1996. U.S. Department of Energy, Office of Inspector General. Audit of the Department of Energy's Security Police Officer Training (CR-B-95-03). Washington, D.C.: February 6, 1995. U.S. Department of the Interior, Office of Inspector General. Homeland Security: Protection of Critical Infrastructure Systems – Assessment 2: Critical Infrastructure Systems (2002-I-0053). Washington, D.C.: September 2002. U.S. Department of the Interior, Office of Inspector General. Homeland Security: Protection of Critical Infrastructure Facilities and National Icons – Assessment 1: Supplemental Funding – Plans and Progress (2002-I-0039). Washington, D.C.: June 2002. U.S. Department of the Interior, Office of Inspector General. Progress Report: Secretary's Directives for Implementing Law Enforcement Reform in Department of the Interior (2003-I-0062). Washington, D.C.: August 28, 2003. U.S. Department of the Interior, Office of Inspector General. Review of National Icon Park Security (2003-I-0063). Washington, D.C.: August 2003. U.S. Department of State, Office of Inspector General. Limited-Scope Security Inspection of Embassy Port of Spain, Trinidad and Tobago (SIO-I-03-22). Washington, D.C.: August 2003. U.S. Department of State, Office of Inspector General. Security Inspection of Embassy N'Djamena, Chad (SIO-I-03-27). Washington, D.C.: June 2003. U.S. Department of State, Office of Inspector General. Security Inspection of Embassy Yaoundé, Cameroon (SIO-I-03-28). Washington, D.C.: March 2003. U.S. Department of State, Office of Inspector General. Security Inspection of Embassy Maseru, Lesotho (SIO-I-03-26). Washington, D.C.: March 2003. U.S. Department of State, Office of Inspector General. Limited-Scope Security Inspection of Embassy Belgrade, Serbia and Montenegro (SIO-I-03-13). Washington, D.C.: March 2003. U.S. Department of State, Office of Inspector General. Limited-Scope Security Inspection of Embassy Quito, Ecuador and Consulate General Guyaquil (SIO-I-03-25). Washington, D.C.: February 2003. U.S.
Department of State, Office of Inspector General. Security Oversight Inspection of Embassy Muscat, Oman (SIO-I-03-17). Washington, D.C.: February 2003. U.S. Department of State, Office of Inspector General. Limited-Scope Security Inspection of Embassy Dublin, Ireland (SIO-I-03-08). Washington, D.C.: December 2002. U.S. Department of State, Office of Inspector General. Limited-Scope Security Inspection of Embassy Apia, Samoa (SIO-I-03-04). Washington, D.C.: November 2002. U.S. Department of State, Office of Inspector General. Limited-Scope Security Inspection of Embassy Ljubljana, Slovenia (SIO-I-03-03). Washington, D.C.: November 2002. U.S. Department of State, Office of Inspector General. Limited-Scope Security Inspection of Embassy Almaty, Kazakhstan (SIO-I-03-02). Washington, D.C.: November 2002. U.S. Department of State, Office of Inspector General. Limited-Scope Security Inspection of Embassy Amman, Jordan (SIO-I-03-01). Washington, D.C.: November 2002. U.S. Department of State, Office of Inspector General. Classified Semiannual Report to the Congress: April 1, 2003 to September 30, 2003. Washington, D.C.: September 2003. U.S. Department of State, Office of Inspector General. Classified Semiannual Report to the Congress: October 1, 2002 to March 31, 2003. Washington, D.C.: March 2003. General Services Administration, Office of Inspector General. Follow-up Review of the Federal Protective Service's Contract Guard Program (A020092/P/2/R02016). Arlington, VA: August 29, 2002. General Services Administration, Office of Inspector General. Report on Federal Protective Service Security Equipment Countermeasures Installed at Federal Facilities (A020092/P/2/R02008). Arlington, VA: March 29, 2002. General Services Administration, Office of Inspector General. Audit of the Federal Protective Service's Federal Security Risk Manager Program (A010129/P/2/R02007). Arlington, VA: March 27, 2002. General Services Administration, Office of Inspector General. Audit of the Federal Protective Service's Intelligence Sharing Program (A000992/P/2/R01013). Arlington, VA: March 23, 2001. General Services Administration, Office of Inspector General. Audit of The Federal Protective Service's Contract Guard Program (A995175/P/2/R00010). Arlington, VA: March 28, 2000. General Services Administration, Office of Inspector General. Audit of Security Measures for New and Renovated Federal Facilities (A995025/P/H/R99513). Arlington, VA: March 24, 1999. General Services Administration, Office of Inspector General. Audit of The Federal Protective Service's Program for Upgrading Security at Federal Facilities (A70642/P/2/R98024). Arlington, VA: September 14, 1998. U.S. Postal Service, Office of Inspector General. Fiscal Year 1999 Information System Controls: St. Louis Information Service Center (FR-AR-99-010). Arlington, VA: September 28, 1999. U.S. Postal Service, Office of Inspector General. Review of Security Badge Controls at Postal Service Headquarters (OV-LA-01-001). Arlington, VA: March 26, 2001. U.S. Postal Service, Office of Inspector General. Review of United States Postal Service Personnel Security Program: Process for Updating Sensitive Clearances (OV-MA-99-001). Arlington, VA: March 31, 1998. Veterans Affairs, Office of Inspector General. Review of Security and Inventory Controls Over Selected Biological, Chemical, and Radioactive Agents Owned by or Controlled at Department of Veterans Affairs Facilities (02-00266-76). Washington, D.C.: March 14, 2002. Fiscal Year 2003 U.S.
Government Financial Statements: Sustained Improvement in Federal Financial Management Is Crucial to Addressing Our Nation's Future Fiscal Challenges. GAO-04-886T. Washington, D.C.: July 8, 2004. Nuclear Security: Several Issues Could Impede the Ability of DOE's Office of Energy, Science and Environment to Meet the May 2003 Design Basis Threat. GAO-04-894T. Washington, D.C.: June 22, 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Homeland Security: Management Challenges Facing Federal Leadership. GAO-03-260. Washington, D.C.: December 20, 2002. Critical Infrastructure Protection: Significant Challenges Need to Be Addressed. GAO-02-961T. Washington, D.C.: July 24, 2002. Homeland Security: Critical Design and Implementation Issues. GAO-02-957T. Washington, D.C.: July 17, 2002. Critical Infrastructure Protection: Significant Homeland Security Challenges Need to Be Addressed. GAO-02-918T. Washington, D.C.: July 9, 2002. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-02-162T. Washington, D.C.: October 17, 2001. Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Chemical and Biological Defense: Improved Risk Assessment and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. Combating Terrorism: Actions Needed to Improve DOD Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Weapons of Mass Destruction: Defense Threat Reduction Agency Addresses Broad Range of Threats, but Performance Reporting Can Be Improved. GAO-04-330. Washington, D.C.: February 13, 2004. Electronic Government: Smart Card Usage is Advancing Among Federal Agencies, Including the Department of Veterans Affairs. GAO-05-84T. Washington, D.C.: September 6, 2004. Information Security: Technologies to Secure Federal Systems. GAO-04-467. Washington, D.C.: March 9, 2004. Security: Counterfeit Identification Raises Homeland Security Concerns. GAO-04-133T. Washington, D.C.: October 1, 2003. Electronic Government: Challenges to the Adoption of Smart Card Technology. GAO-03-1108T. Washington, D.C.: September 9, 2003. Information Security: Challenges in Using Biometrics. GAO-03-1137T. Washington, D.C.: September 9, 2003. Border Security: Challenges in Implementing Border Technology. GAO-03-546T. Washington, D.C.: March 12, 2003. Electronic Government: Progress in Promoting Adoption of Smart Card Technology. GAO-03-144. Washington, D.C.: January 3, 2003. Technology Assessment: Using Biometrics for Border Security. GAO-03-174. Washington, D.C.: November 15, 2002. National Preparedness: Technologies to Secure Federal Buildings. GAO-02-687T. Washington, D.C.: April 25, 2002. Information Technology: Major Federal Networks That Support Homeland Security Functions. GAO-04-375. Washington, D.C.: September 17, 2004. 9/11 Commission Report: Reorganization, Transformation, and Information Sharing. GAO-04-1033T. Washington, D.C.: August 3, 2004. Critical Infrastructure Protection: Improving Information Sharing with Infrastructure Sectors. GAO-04-780. Washington, D.C.: July 9, 2004.
Posthearing Questions from September 17, 2003, Hearing on "Implications of Power Blackouts for the Nation's Cybersecurity and Critical Infrastructure Protection: The Electrical Grid, Critical Interdependencies, Vulnerabilities, and Readiness". GAO-04-300R. Washington, D.C.: December 8, 2003. Homeland Security: Challenges in Achieving Interoperable Communications for First Responders. GAO-04-231T. Washington, D.C.: November 6, 2003. Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues. GAO-03-1165T. Washington, D.C.: September 17, 2003. Homeland Security: Efforts to Improve Information Sharing Need to Be Strengthened. GAO-03-760. Washington, D.C.: August 27, 2003. Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues. GAO-03-715T. Washington, D.C.: May 8, 2003. Information Technology: Terrorist Watch Lists Should Be Consolidated to Promote Better Integrating and Sharing. GAO-03-322. Washington, D.C.: April 15, 2003. Bioterrorism: Information Technology Strategy Could Strengthen Federal Agencies' Abilities to Respond to Public Health Emergencies. GAO-03-139. Washington, D.C.: May 30, 2003. Homeland Security: Information Sharing Activities Face Continued Management Challenges. GAO-02-1122T. Washington, D.C.: October 1, 2002. National Preparedness: Technology and Information Sharing Challenges. GAO-02-1048R. Washington, D.C.: August 30, 2002. Homeland Security: Effective Intergovernmental Coordination is Key to Success. GAO-02-1013T. Washington, D.C.: August 23, 2002. Homeland Security: Effective Intergovernmental Coordination is Key to Success. GAO-02-1012T. Washington, D.C.: August 22, 2002. Homeland Security: Effective Intergovernmental Coordination is Key to Success. GAO-02-1011T. Washington, D.C.: August 20, 2002. Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-901T. Washington, D.C.: July 3, 2002. Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-900T. Washington, D.C.: July 2, 2002. Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-899T. Washington, D.C.: July 1, 2002. National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: April 11, 2002. Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002. Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002. Homeland Security: Progress Made; More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002. Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002. Bioterrorism: Review of Public Health Preparedness Programs. GAO-02-149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 9, 2001.
Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999. Embassy Construction: State Department Has Implemented Management Reforms, but Challenges Remain. GAO-04-100. Washington, D.C.: November 4, 2003. VA Health Care: Framework for Analyzing Capital Asset Realignment for Enhanced Services Decisions. GAO-03-1103R. Washington, D.C.: August 18, 2003. Major Management Challenges and Program Risks: Department of State. GAO-03-107. Washington, D.C.: January 2003. Overseas Presence: Framework for Assessing Embassy Staff Levels Can Support Rightsizing Initiatives. GAO-02-780. Washington, D.C.: July 26, 2002. Overseas Presence: Observations on a Rightsizing Framework. GAO-02-659T. Washington, D.C.: May 1, 2002. Overseas Presence: More Work Needed on Embassy Rightsizing. GAO-02-143. Washington, D.C.: November 27, 2001. Human Capital: Building on the Current Momentum to Transform the Federal Government. GAO-04-976T. Washington, D.C.: July 20, 2004. Information Technology: Training Can Be Enhanced by Greater Use of Leading Practices. GAO-04-791. Washington, D.C.: June 24, 2004. Results-Oriented Government: Shaping the Government to Meet 21st Century Challenges. GAO-03-1168T. Washington, D.C.: September 17, 2003. Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003. Human Capital: Building on the Current Momentum to Address High-Risk Issues. GAO-03-637T. Washington, D.C.: April 8, 2003. High-Risk Series: Strategic Human Capital Management. GAO-03-120. Washington, D.C.: January 2003. Human Capital: A Self-Assessment Checklist for Agency Leaders. GAO/OCG-00-14G. Washington, D.C.: September 2000. Executive Guide: Leading Practices in Capital Decision-Making. AIMD-99-32. Washington, D.C.: December 1998. Weaknesses in Screening Entrants Into the United States. GAO-03-438T. Washington, D.C.: January 30, 2003. Building Security: Interagency Security Committee Has Had Limited Success in Fulfilling Its Responsibilities. GAO-02-1004. Washington, D.C.: September 17, 2002. Security Breaches at Federal Buildings in Atlanta, Georgia. GAO-02-668T. Washington, D.C.: April 30, 2002. Homeland Security: Responsibility and Accountability For Achieving National Goals. GAO-02-627T. Washington, D.C.: April 11, 2002. Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001. Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001. Combating Terrorism: Analysis of Federal Counterterrorist Exercises. GAO/NSIAD-99-157BR. Washington, D.C.: June 25, 1999. Federal Law Enforcement: Investigative Authority and Personnel at 13 Agencies. GAO/GGD-96-154. Washington, D.C.: September 30, 1996. Critical Infrastructure Protection: Challenges for Selected Agencies and Industry Sectors. GAO-03-233. Washington, D.C.: February 28, 2003. Combating Terrorism: Funding Data Reported to Congress Should Be Improved. GAO-03-170. Washington, D.C.: November 26, 2002. Combating Terrorism: Actions Needed to Guide Services' Antiterrorism Efforts at Installations. GAO-03-14. Washington, D.C.: November 1, 2002. Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001.
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001. Critical Infrastructure Protection: Challenges to Building a Comprehensive Strategy for Information Sharing and Coordination. GAO/T-AIMD-00-268. Washington, D.C.: July 26, 2000. Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999. Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997. Homeland Security: Transformation Strategy Needed to Address Challenges Facing the Federal Protective Service. GAO-04-537. Washington, D.C.: July 14, 2004. General Services Administration: Factors Affecting the Construction and Operating Costs of Federal Buildings. GAO-03-609T. Washington, D.C.: April 2, 2003. High-Risk Series: Federal Real Property. GAO-03-122. Washington, D.C.: January 2003. Building Security: Security Responsibilities for Federally Owned and Leased Facilities. GAO-03-8. Washington, D.C.: October 31, 2002. Diffuse Security Threats: USPS Air Filtration Systems Need More Testing and Cost Benefit Analysis before Implementation. GAO-02-838. Washington, D.C.: August 22, 2002. Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002. Federal Real Property: Better Governmentwide Data Needed for Strategic Decisionmaking. GAO-02-342. Washington, D.C.: April 16, 2002. Highlights of GAO's Conference on Options to Enhance Mail Security and Postal Operations. GAO-02-315SP. Washington, D.C.: December 20, 2001. General Services Administration: Status of Efforts to Improve Management of Building Security Upgrade Program. GAO/T-GGD/OSI-00-19. Washington, D.C.: October 7, 1999. General Services Administration: Many Building Security Upgrades Made But Problems Have Hindered Program Implementation. GAO/T-GGD-98-141. Washington, D.C.: June 4, 1998. Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism. GAO-04-408T. Washington, D.C.: February 3, 2004. Homeland Security Advisory System: Preliminary Observations Regarding Threat Level Increases from Yellow to Orange. GAO-04-453R. Washington, D.C.: February 26, 2004. Homeland Security: Preliminary Observations on Efforts to Target Security Inspections of Cargo Containers. GAO-04-325T. Washington, D.C.: December 16, 2003. Aviation Security: Efforts to Measure Effectiveness and Strengthen Security Programs. GAO-04-285T. Washington, D.C.: November 20, 2003. Bioterrorism: A Threat to Agriculture and the Food Supply. GAO-04-259T. Washington, D.C.: November 19, 2003. Aviation Security: Efforts to Measure Effectiveness and Address Challenges. GAO-04-232T. Washington, D.C.: November 5, 2003. Aviation Security: Progress Since September 11, 2001 and the Challenges Ahead. GAO-03-1150T. Washington, D.C.: September 9, 2003. Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003. Combating Terrorism: Observations on National Strategies Related to Terrorism. GAO-03-519T. Washington, D.C.: March 3, 2003. Overseas Presence: Conditions of Overseas Diplomatic Facilities. GAO-03-557T. Washington, D.C.: March 20, 2003. Mass Transit: Federal Action Could Help Transit Agencies Address Security Challenges. GAO-03-263. Washington, D.C.: December 13, 2002.
Mass Transit: Challenges in Securing Transit Systems. GAO-02-1075T. Washington, D.C.: September 18, 2002. Combating Terrorism: Department of State Programs to Combat Terrorism Abroad. GAO-02-1021. Washington, D.C.: September 6, 2002. Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002. National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy. GAO-02-811T. Washington, D.C.: June 7, 2002. Homeland Security: A Framework for Addressing the Nation's Efforts. GAO-01-1158T. Washington, D.C.: September 21, 2001. Combating Terrorism: Comments on H.R. 525 to Create a President's Council on Domestic Terrorism Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. Embassy Construction: Better Long-Term Planning Will Enhance Program Decision-making. GAO-01-11. Washington, D.C.: January 22, 2001. FAA Computer Security: Recommendations to Address Continuing Weaknesses. GAO-01-171. Washington, D.C.: December 6, 2000. FAA Computer Security: Actions Needed to Address Critical Weaknesses That Jeopardize Aviation Operations. GAO/T-AIMD-00-330. Washington, D.C.: September 27, 2000. FAA Computer Security: Concerns Remain Due to Personnel and Other Continuing Weaknesses. GAO/AIMD-00-252. Washington, D.C.: August 16, 2000. Combating Terrorism: Action Taken but Considerable Risks Remain for Forces Overseas. GAO/NSIAD-00-181. Washington, D.C.: July 19, 2000. State Department: Overseas Emergency Security Program Progressing, but Costs Are Increasing. GAO/NSIAD-00-83. Washington, D.C.: March 8, 2000. Combating Terrorism: Issues in Managing Counterterrorist Programs. GAO/T-NSIAD-00-145. Washington, D.C.: April 6, 2000. State Department: Progress and Challenges in Addressing Management Issues. GAO/T-NSIAD-00-124. Washington, D.C.: March 8, 2000. State Department: Major Management Challenges and Program Risks. GAO/T-NSIAD/AIMD-99-99. Washington, D.C.: March 4, 1999. Major Management Challenges and Program Risks: Department of State. GAO/OCG-99-12. Washington, D.C.: January 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998. Foreign Affairs Management: Major Challenges Facing the Department of State. GAO/T-NSIAD-98-251. Washington, D.C.: September 17, 1998. Combating Terrorism: Efforts to Protect U.S. Forces in Turkey and the Middle East. GAO/T-NSIAD-98-44. Washington, D.C.: October 28, 1997. Combating Terrorism: Federal Agencies' Efforts to Implement National Policy and Strategy. GAO/NSIAD-97-254. Washington, D.C.: September 26, 1997. Combating Terrorism: Status of DOD Efforts to Protect Its Forces Overseas. GAO/NSIAD-97-207. Washington, D.C.: July 21, 1997. Aviation Security: FAA's Procurement of Explosives Detection Devices. GAO/RCED-97-111R. Washington, D.C.: May 1, 1997. Aviation Security: Posting Notices at Domestic Airports. GAO/RCED-97-88R. Washington, D.C.: March 25, 1997. Aviation Security: Technology's Role in Addressing Vulnerabilities. GAO/T-RCED/NSIAD-96-262. Washington, D.C.: September 19, 1996. Aviation Security: Urgent Issues Need to Be Addressed. GAO/T-RCED/NSIAD-96-251. Washington, D.C.: September 11, 1996. Aviation Security: Immediate Action Needed to Improve Security. GAO/T-RCED/NSIAD-96-237.
Washington, D.C.: August 1, 1996. Aviation Security: FAA Can Help Ensure That Airports’ Access Control Systems are Cost-Effective. GAO/RCED-95-25. Washington, D.C.: March 1, 1995.
The war on terrorism has made physical security for federal facilities a governmentwide concern. The Interagency Security Committee (ISC), which is chaired by the Department of Homeland Security (DHS), is tasked with coordinating federal agencies' facility protection efforts, developing protection standards, and overseeing implementation. GAO's objectives were to (1) assess ISC's progress in fulfilling its responsibilities and (2) identify key practices in protecting federal facilities and any related implementation obstacles. ISC has made progress in coordinating the government's facility protection efforts. In recent years, ISC has taken several actions to develop policies and guidance for facility protection and to share related information. Although its actions to ensure compliance with security standards and oversee implementation have been limited, in July 2004, ISC became responsible for reviewing federal agencies' physical security plans for the administration. ISC, however, lacks an action plan that could be used to provide DHS and other stakeholders with information on, and a rationale for, its resource needs; garner and maintain the support of ISC member agencies, DHS management, the Office of Management and Budget, and Congress; identify implementation goals and measures for gauging progress; and propose strategies for addressing various challenges it faces, such as resource constraints. Without an action plan, ISC's strategy and time line for implementing its responsibilities remain unclear. As ISC and agencies have paid greater attention to facility protection in recent years, several key practices have emerged that, collectively, could provide a framework for guiding agencies' efforts. These include allocating resources using risk management; leveraging security technology; coordinating protection efforts and sharing information; measuring program performance and testing security initiatives; realigning real property assets to mission, thereby reducing vulnerabilities; and implementing strategic human capital management to ensure that agencies are well equipped to recruit and retain high-performing security professionals. GAO also noted several obstacles to implementation, such as developing quality data for risk management and performance measurement, and ensuring that technology will perform as expected.
The President has established, and DOD operates, geographic combatant commands to perform military missions around the world. Geographic combatant commands are each assigned an area of responsibility in which to conduct their missions and activities (see fig. 1 below). Combatant commands are responsible for a variety of functions including tasks such as (1) deploying forces as necessary to carry out the missions assigned to the command; (2) coordinating and approving those aspects of administration, support (including control of resources and equipment, internal organization, and training), and discipline necessary to carry out missions assigned to the command; and (3) assigning command functions to subordinate commanders. Combatant commands are supported by Service component commands (Army, Navy, Marine Corps, and Air Force) and Special Operations Command. Each of these component commands has a significant role in planning and supporting operations. On February 6, 2007, the President directed the Secretary of Defense to establish a new geographic combatant command to consolidate the responsibility for DOD activities in Africa that have been shared by U.S. Central Command, U.S. Pacific Command, and U.S. European Command. AFRICOM was officially established on October 1, 2007, with a goal to reach full operational capability as a separate, independent geographic combatant command by September 30, 2008. Full operational capability was defined as the point at which the AFRICOM commander will accept responsibility for executing all U.S. military activities in Africa currently being conducted by the U.S. European, Central, and Pacific commands; have the capability to plan and conduct new operations; and have the capability to develop new initiatives. AFRICOM's mission statement, which was approved by the Secretary of Defense in May 2008, is to act in concert with other U.S. government agencies and international partners to conduct sustained security engagement through military-to-military programs, military-sponsored activities, and other military operations as directed to promote a stable and secure African environment in support of U.S. foreign policy. Since the President announced the establishment of AFRICOM, DOD has focused on building the capabilities necessary for AFRICOM to systematically assume responsibility for all existing military missions, activities, programs, and exercises in the area of responsibility it is inheriting from the U.S. European, Central, and Pacific commands. From the outset, AFRICOM has sought to assume responsibility for these existing activities seamlessly, without disrupting them or other U.S. government and international efforts in Africa. To accomplish this task, AFRICOM officials created a formal process to manage the transfer of activities it initially identified as ongoing within AFRICOM's area of responsibility. These range from activities to combat HIV/AIDS to programs that provide training opportunities for foreign military personnel and include the two largest U.S. military activities in Africa, the Combined Joint Task Force-Horn of Africa and Operation Enduring Freedom-Trans Sahara. DOD plans to transfer most activities to the new command by September 30, 2008. The areas of responsibility and examples of activities being transferred to AFRICOM from the U.S. European, Central, and Pacific commands are presented in figure 2.
In cases involving State Department-led activities where DOD plays a primary role in their execution, such as the International Military Education and Training program, AFRICOM is assuming only the execution of the program from other combatant commands—the State Department still maintains overall authority and responsibility for the program. Since the initial establishment of the command in October 2007, AFRICOM has also sought to staff its headquarters, which will include DOD military personnel, DOD civilian personnel, and interagency personnel. Officials explained that staffing the command's positions is the most critical and limiting factor in the process for assuming responsibility for activities in Africa because activities cannot be transferred without personnel in place to execute them. DOD has approved 1,304 positions (military and DOD civilian) for the command's headquarters, of which about 270 military positions are being transferred from other commands. By September 30, 2008, DOD plans to have filled 75 percent, or 980, of these positions. In addition, DOD plans to have 13 command positions filled by representatives from non-DOD agencies. As a result, on September 30, 2008, 1 percent of AFRICOM headquarters positions will be filled by representatives from non-DOD organizations (see fig. 3). At this point, the number of interagency representatives in AFRICOM headquarters will be only slightly more than the number of representatives in other geographic commands, but AFRICOM has been designed to embed these interagency personnel at all levels in the command, including in leadership and management roles. While AFRICOM expects to fill 622 (97 percent) of its military personnel positions by September 30, 2008, it expects to fill only 358 (54 percent) of its DOD civilian positions, and 13 out of 52 (25 percent) targeted interagency positions by this time. DOD officials explained that, unlike filling military positions, hiring civilians may involve conducting security clearance investigations, overcoming the logistics necessary to physically relocate civilians overseas, and meeting other administrative requirements. Figure 4 compares the positions DOD has approved for AFRICOM, the targeted interagency positions, the command's progress in filling them as of July 2008, and the progress it expects to make by October 1, 2008. In order to meet infrastructure needs, AFRICOM is renovating existing facilities in Stuttgart, Germany, to establish an interim headquarters at a projected cost of approximately $40 million. DOD also projects an investment of approximately $43 million in command, control, communications, and computer systems infrastructure to enable AFRICOM to monitor and manage the vast array of DOD activities in Africa. Decisions related to the location of AFRICOM's permanent headquarters and the overall command presence in Africa will be made at a future date; therefore, DOD expects the command will operate from the interim headquarters in Germany for the foreseeable future. In total, DOD budgeted approximately $125 million to support the establishment of AFRICOM during fiscal years 2007 and 2008 and has requested nearly $390 million more for fiscal year 2009. This does not reflect the full cost of establishing the command over the next several years, a cost that is projected to be substantial and could range in the billions of dollars.
For example, although DOD has not fully estimated the additional costs of establishing and operating the command, AFRICOM officials said that as the command is further developed and decisions are made on its permanent headquarters, it will need to construct enduring facilities and meet other operational support requirements. DOD's preliminary estimates for the command's future infrastructure and equipping costs over the next several years exceed several billion dollars, excluding the cost of activities AFRICOM will be performing. The progress AFRICOM intends to make in establishing the command by September 30, 2008, will provide it with a foundation for working toward DOD's goal to promote whole-of-government approaches to building the capacity of partner nations. However, AFRICOM officials recognize the command will need to continue to develop after its September 30, 2008, milestone to move beyond episodic security cooperation events to more strategic, sustained efforts. The AFRICOM commander has described the command as a "…listening, growing, and developing organization." In addition, senior DOD officials told us that on September 30, 2008, DOD does not anticipate that AFRICOM will have the desired interagency skill sets, the ability to strategically engage with African countries beyond the established level, or the capacity to take on new initiatives. In addition to DOD's efforts to establish the combatant command, the military services and Special Operations Command are also working to establish component commands that will be subordinate to AFRICOM. They are in the process of developing organizational structures and determining facilities, personnel, and other requirements, such as operational support aircraft, that have yet to be fully defined but could be challenging for the services to meet. For example, personnel requirements for each component command range from approximately 100 personnel to more than 400, and Army officials said they will likely face difficulties in filling positions because many of the positions require a certain level of rank or experience that is in high demand. At the time that AFRICOM is estimated to reach full operational capability (September 30, 2008), only two component commands (Navy, Marine Corps) are expected to be fully operational. The Army, Air Force, and Special Operations component commands are expected to reach full operational capability by October 1, 2009. DOD's first challenge to achieving its vision for AFRICOM is integrating personnel from civilian agencies into AFRICOM's command and staff structure. According to AFRICOM, strategic success in Africa depends on a whole-of-government approach to stability and security. A whole-of-government approach necessitates collaboration among federal agencies to ensure their activities are synchronized and integrated in pursuit of a common goal. Integrating personnel from federal civilian agencies is intended to facilitate collaboration among agencies, but AFRICOM has had difficulties in filling its interagency positions. Unlike liaison positions in other combatant commands, AFRICOM has been designed to embed personnel from non-DOD agencies in leadership, management, and staff positions at all levels in the command. For example, AFRICOM's Deputy to the Commander for Civil-Military Activities, one of two co-equal Deputies to the Commander, is a senior Foreign Service officer from the Department of State.
By bringing knowledge of their home agencies, personnel from other agencies, such as USAID and the departments of Treasury and Commerce, are expected to improve the planning and execution of AFRICOM's plans, programs, and activities and to stimulate collaboration among U.S. government agencies. Initially, DOD established a notional goal that 25 percent of AFRICOM's headquarters staff would be provided by non-DOD agencies. According to State officials, however, this goal was not vetted through civilian agencies and was not realistic because of the resource limitations in civilian agencies. Subsequently, AFRICOM reduced its planned interagency representation to 52 notional positions, which, as displayed in figure 5, would be approximately 4 percent of the AFRICOM staff. As previously discussed, however, DOD officials have indicated that the target of 52 interagency positions for the command will continue to evolve as AFRICOM receives input from other agencies. Even with a reduction in the number of interagency positions, according to DOD officials, some civilian agencies have limited personnel resources and incompatible personnel systems that have not easily accommodated DOD's intent to place interagency personnel in the command. AFRICOM is looking to civilian agencies for skill sets that it does not have internally, but many of the personnel who have these skill sets and experience outside of DOD are in high demand. Officials at the State Department, in particular, noted their concern about the ability to fill positions left vacant by personnel being detailed to AFRICOM, since it takes a long time to develop Foreign Service officers with the requisite expertise and experience. In fact, according to State Department officials, some U.S. embassies in Africa are already experiencing shortfalls in personnel, especially at the mid-level. DOD officials also said that personnel systems among federal agencies were incompatible and do not readily facilitate integrating personnel into other agencies, particularly into non-liaison roles. In addition, many non-DOD agencies have missions that are domestically focused and therefore will need time to determine how best to provide personnel support to AFRICOM. To encourage agencies to provide personnel to fill positions in AFRICOM, DOD will pay the salaries and expenses for these personnel. As previously discussed, while DOD has focused initially on establishing AFRICOM's headquarters, the services and Special Operations Command are also working to establish component commands to support AFRICOM, but the extent of interagency participation at these commands has not been fully defined. Neither OSD nor AFRICOM has provided guidance on whether AFRICOM's component commands should integrate interagency representatives, and among the services, plans for embedded interagency personnel varied. The Army has proposed including four interagency positions in AFRICOM's Army service component command, U.S. Army, Africa. Officials from the Office of the Secretary of Defense, the Joint Forces Command, Marine Corps, and the Air Force stated that component commands would receive interagency input from AFRICOM headquarters and embassy country teams. One OSD official added that the level of interagency input at the headquarters was sufficient because component commands are responsible for executing plans developed by the combatant command headquarters, where interagency personnel would be involved in the planning process.
In the 2006 Quadrennial Defense Review Execution Roadmap, Building Partnership Capacity, DOD recognized the importance of a seamless integration of U.S. government capabilities by calling for strategies, plans, and operations to be coordinated with civilian agencies. One of AFRICOM's guiding principles is to collaborate with U.S. government agencies, host nations, international partners, and nongovernmental organizations. AFRICOM officials told us that they had not yet developed the mechanisms or structures to ensure that their activities were synchronized or integrated with those of civilian agencies in a mutually supportive and sustainable effort, but would turn their attention to this synchronization after October 2008. Barriers could arise, however, as AFRICOM develops mechanisms, processes, and structures to facilitate interagency collaboration, since both AFRICOM and the agencies will likely encounter additional challenges that are outside their control, such as different planning processes, authorities, and diverse institutional cultures. For example, according to State and DOD officials, the State Department is focused on bilateral relationships with foreign governments through its embassies overseas, while the Defense Department is focused regionally through its geographic combatant commands. With relatively few interagency personnel on the AFRICOM staff, such coordination mechanisms could be critical for the command to achieve its vision. DOD's second challenge to achieving its vision for AFRICOM is overcoming stakeholder concerns about the command's mission. If unaddressed, these concerns could limit its ability to develop key partnerships. Since its establishment was announced in early 2007, AFRICOM has encountered concerns from U.S. civilian agencies, nongovernmental organizations, and African partners about what AFRICOM is and what it hopes to accomplish in Africa. Many of the concerns from U.S. government agencies, nongovernmental organizations, and African partners stem from their interpretations of AFRICOM's intended mission and goals. Although DOD has often stated that AFRICOM is intended to support, not lead, U.S. diplomatic and development efforts in Africa, State Department officials expressed concern that AFRICOM would become the lead for all U.S. government activities in Africa, even though the U.S. embassy in each country leads decision-making on U.S. government non-combat activities conducted there. Other State and USAID officials noted that the creation of AFRICOM could blur traditional boundaries among diplomacy, development, and defense, thereby militarizing U.S. foreign policy. An organization that represents U.S.-based international nongovernmental organizations told us that many nongovernmental organizations shared the perception that AFRICOM would militarize U.S. foreign aid and lead to greater U.S. military involvement in humanitarian assistance. Nongovernmental organizations are concerned that this would put their aid workers at greater risk if their activities are confused or associated with U.S. military activities. Among African countries, there is apprehension that AFRICOM will be used as an opportunity to increase the number of U.S. troops and military bases in Africa. African leaders also expressed concerns to DOD that U.S. priorities in Africa may not be shared by their governments.
For example, at a DOD-sponsored roundtable, a group of U.S.-based African attachés identified their most pressing security issues as poverty, food shortages, inadequate educational opportunities, displaced persons, and HIV/AIDS, while they perceived that U.S. priorities were focused on combating terrorism and weakened states. One factor contributing to persistent concerns among U.S. government agencies, nongovernmental organizations, and African partners is the evolution of how DOD has characterized AFRICOM's unique mission and goals. Between February 2007 and May 2008, AFRICOM's mission statement went through several iterations that ranged in emphasis from humanitarian-oriented activities to more traditional military programs. According to an official from an organization representing nongovernmental organizations, the emphasis on humanitarian assistance as part of AFRICOM's mission early on contributed to their fears that AFRICOM would be engaged in activities that are traditionally the mission of civilian agencies and organizations. Additionally, the discussion of AFRICOM's mission evolved from highlighting its whole-of-government approach to referring to it as a bureaucratic reorganization within DOD. When articulating its vision for AFRICOM, DOD also used language that did not translate well to African partners and civilian agency stakeholders. For civilian agencies, use of the words "integrating U.S. government activities" led to concerns over AFRICOM's assuming leadership in directing all U.S. government efforts. Likewise, DOD's use of the term "combatant command" led some African partners to question whether AFRICOM was focused on non-warfighting activities. State Department officials said that they had difficulty in responding to African concerns because of their own confusion over AFRICOM's intended mission and goals. Another factor contributing to concerns over AFRICOM's mission and goals is unclear roles and responsibilities. Although DOD has long been involved in humanitarian and stability-related activities, AFRICOM's emphasis on programs that prevent conflict in order to foster dialogue and development has put a spotlight on an ongoing debate over the appropriate role of the U.S. military in non-combat activities. Consequently, civilian agencies are concerned about the overlap of DOD missions with their own and what impact DOD's role may have on theirs. DOD is currently conducting a mission analysis to help define roles and responsibilities between AFRICOM and civilian agencies operating in Africa, but broader governmentwide consensus on these issues has not been reached. An additional factor contributing to U.S. government perceptions that AFRICOM could militarize U.S. foreign policy is DOD's vast resources and capacity compared with those of the civilian agencies. Civilian agencies and some African partners are concerned that the strategic focus AFRICOM could bring to the continent would result in AFRICOM supplanting civilian planning and activities. One USAID official told us that an increase in funding executed by AFRICOM could change the dynamic in relationships among U.S. federal agencies and in relationships between individual U.S. agencies and African partners. DOD has not yet reached agreement with the State Department and potential host nations on the structure and location of AFRICOM's presence in Africa.
Initially, an important goal of AFRICOM was to establish a command presence in Africa that would provide a regional approach to African security and complement DOD's representation in U.S. embassies. AFRICOM is planning to increase its representation in 11 U.S. embassies by establishing new offices to strengthen bilateral military-to-military relationships. It is also planning to establish regional offices in five locations on the continent that would align with the five regional economic communities in Africa. DOD, however, has faced difficulty reaching agreement with the State Department on AFRICOM's future presence on the continent. Therefore, AFRICOM will be based in Stuttgart, Germany, for the foreseeable future and plans to focus on increasing its representatives in embassies until decisions on the structure and location of AFRICOM's presence are made. In testimony to the Congress in March of this year, the AFRICOM Commander stated that he considers command presence in Africa an important issue but not a matter of urgency. DOD officials, however, have previously indicated that the structure and location of AFRICOM's presence in Africa are important for several reasons. First, being located in Africa would provide AFRICOM staff with a more comprehensive understanding of the regional environment and African needs. Second, having staff located in Africa would help the command build relationships and partnerships with African nations and the regional economic communities and associated regional standby forces. Enduring relationships are an important aspect of building African partner security capacity and of successfully planning and executing programs and activities. Third, regional offices are intended to promote a regional dimension to U.S. security assistance through their coordination with DOD representatives who manage these programs in multiple U.S. embassies. As DOD continues to evolve its plans for a presence in Africa and decisions involving presence are delayed, DOD officials have indicated that other coordinating mechanisms may be established as a substitute for a physical presence on the continent. In addition, senior DOD officials have stated that preparing budget estimates for future fiscal years is difficult without an agreed-upon AFRICOM presence on the continent. For example, although DOD requested $20 million in fiscal year 2009 to begin establishing the presence in Africa, AFRICOM has not been able to identify total funding requirements for headquarters infrastructure and operations in Africa. Furthermore, a senior official from the Office of the Secretary of Defense for Program Analysis and Evaluation stated that AFRICOM's future presence in Africa was one of the most important policy decisions that could affect the ability of the department to estimate future costs for the command. For example, in developing the fiscal year 2009 budget request, DOD estimated the cost to operate the interim headquarters in Stuttgart, Germany, at approximately $183 million, but these costs may change significantly, according to DOD officials, if the headquarters were located in an African country with more limited infrastructure than is currently available in Stuttgart. Therefore, without an agreed-upon U.S.
government strategy for establishing AFRICOM’s presence on the continent of Africa that is negotiated with and supported by potential host nations, the potentially significant fiscal implications of AFRICOM’s presence, and its impact on the command’s ability to develop relationships and partnerships at the regional and local levels, will remain unclear.

As AFRICOM nears the October 2008 date slated for reaching full operational capability, DOD is working to shape expectations for the emergent command, both inside and outside the United States. Confronted by concerns from other U.S. agencies and African partners, AFRICOM is focused on assuming existing military missions while building capacity for the future. The ultimate role of AFRICOM in promoting a whole-of-government approach to stability and security on the continent is still uncertain, but initial expectations that the command would represent a dramatic shift in the U.S. approach to security in Africa are being scaled back. Two key precepts of the command, that it would have significant interagency participation and that it would be physically located in Africa to engage partners there, will not be realized in the near term. Looking to the future, the difficulties encountered in staffing the command, sorting out the military’s role in policy, and establishing a presence in Africa are emblematic of deeper cultural and structural issues within the U.S. government. Having such a command will likely help DOD focus military efforts on the African continent, but the extent to which an integrated approach is feasible remains unclear. Over the next few years, DOD intends to invest billions of dollars in this new command and to devote hundreds of staff to it, and sustained attention will be needed to ensure that this substantial investment pays off over time.

Mr. Chairman, this concludes my prepared statement. We would be happy to answer any questions you may have.

For questions regarding this testimony, please call John Pendleton at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this statement were Robert L. Repasky, Tim Burke, Leigh Caraher, Grace Colemen, Taylor Matheson, Lonnie McAllister, and Amber Simco.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In February 2007, the President announced the U.S. Africa Command (AFRICOM), a Department of Defense (DOD) geographic combatant command with a focus on strengthening U.S. security cooperation with Africa, creating opportunities to bolster the capabilities of African partners, and enhancing peace and security efforts on the continent through activities such as military training and support to other U.S. government agencies' efforts. DOD officials have emphasized that AFRICOM is designed to integrate DOD and non-DOD personnel into the command to stimulate greater coordination among U.S. government agencies to achieve a more whole-of-government approach. This testimony is based on the preliminary results of work GAO is conducting for the Subcommittee on the establishment of AFRICOM. GAO analyzed relevant documentation and obtained perspectives from the combatant commands, military services, Joint Staff, Department of State, USAID, and nongovernmental organizations. GAO plans to provide the Subcommittee with a report later this year that will include recommendations as appropriate. This testimony addresses (1) the status of DOD's efforts to establish and fund AFRICOM and (2) challenges that may hinder the command's ability to achieve interagency participation and a more integrated, whole-of-government approach to DOD activities in Africa. The Department of Defense has made progress in transferring activities, staffing the command, and establishing an interim headquarters for AFRICOM, but has not yet fully estimated the additional costs of establishing and operating the command. To date, AFRICOM's primary focus has been on assuming responsibility for existing DOD activities such as military exercises and humanitarian assistance programs, and DOD plans to have most of these activities transferred by October 1, 2008. DOD has approved 1,304 positions for the command's headquarters, and by October 1, 2008, plans to have filled about 75 percent, or 980 positions. Also, DOD plans to have 13 other positions filled by representatives from non-DOD organizations, such as the State Department. DOD is renovating facilities in Stuttgart, Germany, for an interim headquarters and plans to use these facilities for the foreseeable future until decisions are made regarding the permanent AFRICOM headquarters location. The initial concept for AFRICOM, designed and developed by DOD, met resistance from within the U.S. government and from African countries and contributed to several implementation challenges. First, DOD has had difficulties integrating interagency personnel in the command, which is critical to synchronizing DOD efforts with other U.S. government agencies. DOD continues to lower its estimate of the ultimate level of interagency participation in the command. According to DOD, other agencies have limited resources and personnel systems that have not easily accommodated DOD's intent to place interagency personnel in the command. Second, DOD has encountered concerns from civilian agencies and other stakeholders over the command's mission and goals. For example, State Department and U.S. Agency for International Development officials have expressed concerns that AFRICOM will become the lead for all U.S. efforts in Africa, rather than just DOD activities. If not addressed, these concerns could limit the command's ability to develop key partnerships. Third, DOD has not yet reached agreement with the State Department and potential host nations on the structure and location of the command's presence in Africa.
Uncertainties related to AFRICOM's presence hinder DOD's ability to estimate future funding requirements for AFRICOM and raise questions about whether DOD's concept for developing enduring relationships on the continent can be achieved.
O&M appropriations support the training, supply, and equipment maintenance of military units as well as the administrative and facilities infrastructure of military bases. Along with military personnel costs, which are funded with separate military personnel appropriations, O&M funding is considered one of the major components of DOD’s funding for readiness. O&M funds provide for a diverse range of programs and activities that include the salaries and benefits for most DOD civilian employees; depot maintenance activities; fuel purchases; flying hours; base operations; consumable supplies; health care for active duty service personnel and other eligible beneficiaries; reserve component operations; and DOD-wide support functions including several combat support agencies, four intelligence agencies, and other agencies that provide common information services, contract administration, contract audit, logistics, and administrative support to the military departments. The Congress provides O&M appropriations to 11 service-oriented O&M accounts—the Army, Navy, Marine Corps, Air Force, Army Reserve, Navy Reserve, Marine Corps Reserve, Air Force Reserve, Army National Guard, Air National Guard, and defensewide—and to program accounts, such as the defense health program. In addition to the regular annual O&M appropriations, the Congress can make supplemental O&M appropriations to finance the incremental costs above the peacetime budget that are associated with contingencies, such as the GWOT. Since late 1995, DOD has encouraged the services and the defense agencies to conduct cost comparison studies as provided for in the Office of Management and Budget’s Circular A-76. Under the A-76 process, otherwise known as competitive sourcing, the military services and other defense components conduct a public/private competition for a commercial activity currently performed by government personnel to determine whether it would be cost-effective to contract with the private sector for that activity’s performance. On the other hand, a public/private competition is not required for private sector performance of a new requirement, private sector performance of a segregable expansion of an existing commercial activity performed by government personnel, or continued private sector performance of a commercial activity. However, before government personnel may perform a new requirement, an expansion to an existing commercial activity, or an activity performed by the private sector, a public/private competition is required to determine whether government personnel should perform the commercial activity. The DOD Commercial Activities Management Information System is DOD’s database of record established to meet reporting requirements on the conduct of A-76 competitions and the results from implementing A-76 decisions, whether the decisions are to continue using government employees to perform the work or to outsource the work. For contracts awarded to the private sector, the database includes the estimated cost to perform the work using government employees, the contract award amount, the actual contract cost for each contract performance period, and brief reasons for any cost growth over the performance periods. A contract performance period is normally for 12 months, although the first performance period may cover a shorter transition period when the work is initially conveyed to the contractor. Contract information is to be maintained through the end of the last performance period included in the competition. 
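To illustrate the cost elements the database is described as capturing, the sketch below shows one way such a record could be represented and how a reported first-period savings figure could be derived from it. The sketch is illustrative only: the field names, the activity, and the dollar figures are hypothetical and are not drawn from the actual DOD system.

from dataclasses import dataclass, field
from typing import List

@dataclass
class A76CompetitionRecord:
    # Illustrative record for one A-76 competition; names and values are hypothetical.
    activity: str                     # commercial activity that was competed
    in_house_estimate: float          # estimated cost of performing the work with government employees
    contract_award: float             # winning contract amount for the first performance period
    actual_period_costs: List[float] = field(default_factory=list)   # actual cost for each performance period
    cost_growth_reasons: List[str] = field(default_factory=list)     # brief reasons for any cost growth

    def first_period_savings(self) -> float:
        # Reported savings basis: in-house estimate minus the first-period contract amount.
        return self.in_house_estimate - self.contract_award

record = A76CompetitionRecord(
    activity="base supply operations",
    in_house_estimate=5_000_000,
    contract_award=4_200_000,
    actual_period_costs=[4_200_000, 4_450_000, 4_700_000],
    cost_growth_reasons=["requirements added to the contract", "wage increases"],
)
print(f"Estimated first-period savings: ${record.first_period_savings():,.0f}")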
Installation officials are responsible for reporting information on the A-76 program for input into the DOD database.

Driven primarily by increased operations associated with GWOT and other contingencies, DOD’s O&M costs increased substantially between fiscal years 1995 and 2005, with the most growth occurring since fiscal year 2001. DOD’s reliance on contractors for support services also increased substantially during this period, both to meet increased military requirements without an increase in active duty and civilian personnel and because federal government policy is to rely on the private sector for needed commercial services that are not inherently governmental in nature. Many of the requirements generated by the GWOT, in areas such as logistics and base operations support, fall into this category.

Although DOD’s O&M costs increased significantly between fiscal years 1995 and 2005, there was a distinct difference in the rate of growth between the early and latter years of this 10-year period. Specifically, as shown in figure 1, DOD’s annual O&M costs were practically constant until 2001, when the costs began to increase. Figure 2 shows that during the first half of the 10-year period, from fiscal year 1995 to fiscal year 2000, DOD’s O&M costs increased about 1 percent. In comparison, costs in DOD’s other major budget categories during this period changed as follows: military personnel costs declined about 13 percent; procurement costs increased about 21 percent; research and development costs increased about 4 percent; and other costs increased about 1 percent. DOD total costs were almost constant between fiscal year 1995 and fiscal year 2000. Figure 3 shows that a significant change in cost growth occurred during the subsequent 5-year period from fiscal year 2000 to fiscal year 2005, when DOD’s O&M costs increased about 57 percent. In the other major budget categories during this period, military personnel costs increased about 36 percent, procurement costs increased about 62 percent, research and development costs increased about 62 percent, and other costs increased about 13 percent. DOD total costs increased about 51 percent between fiscal year 2000 and fiscal year 2005.

Trends in O&M costs at the military service level generally reflect the overall DOD trend. As shown in figure 4, between fiscal years 1995 and 2000, little change occurred in each service’s O&M costs. However, considerable cost growth occurred between fiscal years 2000 and 2005. Among the services, the Army had the largest percentage growth in O&M costs between fiscal years 2000 and 2005. During this period, the Army’s O&M costs increased by about 137 percent, while the Navy and Marine Corps’ and the Air Force’s O&M costs increased by about 30 percent and 29 percent, respectively.

According to DOD and service officials, the primary cause of increased O&M costs since fiscal year 2001 is the increase in military operations associated with GWOT and other contingencies, including hurricane relief. However, the officials also stated that other factors have contributed to the growth in O&M costs, such as the aging of military infrastructure and equipment; increased costs for installation security, antiterrorism force protection, communications, information technology, transportation, and utilities; and certain changes in acquisition approaches. The fight against terrorism has resulted in operations and deployments around the globe that are in addition to the usual peacetime operations.
According to DOD, the related costs have included not only the personnel costs associated with mobilizing National Guard and reserve forces but also the costs of supporting these forces and the increased pace of operations. O&M-funded costs include a wide range of activities and services supporting operations including costs related to (1) predeployment and forward-deployed training of units and personnel; (2) personnel support costs including travel, subsistence, reserve component personnel activation and deactivation costs, and unit-level morale, welfare, and recreation; (3) establishment, maintenance, and operation of housing and dining facilities and camps in the theaters of operation; (4) petroleum, oils and lubricants, spare parts, consumable end items, and other items necessary to support the deployment of air, ground, and naval units; (5) establishment, maintenance, and operation of facilities including funds for roads, water, supply, fire protection, hazardous waste disposal, force protection bunkers and barricades; (6) command, control, communications, computers and intelligence within the contingency areas of operations; (7) organization-level maintenance including repairs to equipment and vehicles; (8) intermediate- and depot-level maintenance of weapons and weapon system platforms requiring service after the wear and tear of combat operations; and (9) contracts for services for logistics and infrastructure support to deployed forces. The additional military costs associated with GWOT and other contingencies have been primarily funded through supplemental appropriations. Figure 5 shows the annual amount of supplemental O&M funds appropriated each year from fiscal year 2000 through fiscal year 2005. During this period, supplemental O&M appropriations totaled about $210 billion and, according to the services, additional amounts were transferred or reprogrammed from other accounts to the O&M accounts of the military services. Although costs associated with the GWOT and other contingencies have been the primary reason for increased O&M costs between fiscal years 2000 and 2005, other factors also contributed to the O&M cost growth in the military services. To illustrate, if the services’ annual O&M total obligation authority is adjusted by removing annual supplemental O&M appropriations and net transfers and reprogrammings into the O&M account, the result shows that O&M costs still grew during this time period, as illustrated in figure 6. Specifically, between fiscal years 2000 and 2005, O&M costs after the adjustment grew by about 44 percent in the Army, 17 percent in the Navy and Marine Corps, and 2 percent in the Air Force. According to service officials, baseline O&M costs have increased between fiscal years 2000 and 2005 because of many factors, such as aging of military infrastructure and equipment, and increased costs for installation security, antiterrorism force protection, communications, information technology, transportation, and utilities. Navy officials particularly cited the implementation of DOD’s utility privatization program as a factor contributing to increased O&M costs. In a September 2006 report, we noted that DOD’s utility costs could potentially increase by another $954 million to pay costs associated with remaining utility systems that might be privatized. Increased O&M costs are also attributable to certain changes in DOD’s acquisition approaches. 
For example, the Air Force historically bought space launch vehicles, such as the Delta and Titan rockets, as products paid for with procurement appropriations. Now, under the Evolved Expendable Launch Vehicle program, the Air Force uses O&M appropriations to purchase launch services using contractor-owned launch vehicles. The projected cost of this program is $28 billion. Further, as we noted in our September 2006 report, the Army and the Air Force turned to service contracts for simulator training primarily because efforts to modernize existing simulator hardware and software had lost out in the competition for procurement funds. As a result, the simulators were becoming increasingly obsolete. Buying training as a service meant that O&M funds could be used instead of procurement funds.

To meet military requirements during a period of increased operations without an increase in active duty and civilian personnel, DOD has relied not only on reserve personnel activations but also on increased use of contractor support in areas such as management and administrative services, information technology services, medical services, and weapon systems and base operations support. Between fiscal years 2000 and 2005, DOD’s service contract costs in O&M-related areas increased by over $40 billion, or 73 percent. Table 1 highlights the growth in several service contract categories.

DOD officials noted several factors that have contributed to DOD’s increased use of contractor support. First, the GWOT and other contingencies have significantly increased O&M requirements, and DOD has met these requirements without an increase in active duty and civilian personnel. To do this, DOD relied not only on reserve personnel activations, but also on increased use of contractor support. Second, Office of Management and Budget Circular A-76 notes that the long-standing policy of the federal government has been to rely on the private sector for needed commercial services and that commercial activities should be subject to the forces of competition to ensure that the American people receive maximum value for their tax dollars. The circular notes that a public/private competition—which can involve a lengthy and costly process—is not required for contractor performance of a new requirement or private sector performance of a segregable expansion of an existing commercial activity. On the other hand, the circular states that before government personnel may perform a new requirement or an expansion of an existing commercial activity, a public/private competition is required to determine whether government personnel should perform the work. Third, DOD initiatives requiring that consideration be given to outsourcing certain work performed by uniformed and DOD civilian personnel have resulted in outsourcing decisions. For example, between fiscal years 1995 and 2005, DOD’s competitive sourcing, or A-76 public/private competition, program resulted in 570 decisions to contract out work that had been performed by over 39,000 uniformed and DOD civilian personnel. Also, in 1997, DOD decided that privatization of military installation utility systems was the preferred method for improving utility systems and services because privatization would allow installations to benefit from private sector financing and efficiencies. As of March 2006, DOD had awarded contracts to privatize 117 systems and had an additional 904 systems in various phases of the privatization evaluation and solicitation process.
Fourth, service officials noted that in some instances certain personnel issues tend to favor the use of contractor support. For example, service officials stated that because of limitations on headquarters personnel authorizations, the use of contractor support is often the only readily available option to accomplish new or expanded commercial work requirements at service headquarters. Service officials also noted that it is generally easier to terminate or not renew a contract than to lay off government employees in the event of reduced work requirements. For this reason, use of contractor support is often favored when there is uncertainty over the length of time that support services will be needed, which is the case for some work supporting GWOT and other contingencies.

Sufficient data are not available to determine whether increased services contracting has caused DOD’s costs to be higher than they would have been had the contracted activities been performed by uniformed or DOD civilian personnel. Although overall quantitative information was not available, our analysis of the military services’ reported information from the competitive sourcing program, commonly referred to as the A-76 public/private competition process, and case studies of O&M-related work contracted out at three installations showed that outsourcing decisions generally resulted in reducing the government’s costs for the work. However, compared to all O&M-related contracts, the number of A-76 public/private competition contracts is small, the results from this program may not be representative of the results from all services contracts for new or expanded O&M work, and certain limitations exist with the use of the A-76 data. Further, a recent DOD study found that the Army’s use of contract security guards at domestic installations cost more than the use of guards employed by the Army.

To determine whether increased services contracting has exacerbated the growth of O&M costs, information is needed that allows for a comparison of the contract costs with the costs of performing the same work in-house with uniformed or DOD civilian personnel. However, in most cases DOD does not know how much contracted services work would cost if the work were performed by government employees. DOD officials noted that existing policy generally does not require a public/private competition for private sector performance of a new or expanded commercial requirement and, as a result, in-house cost estimates have not been prepared for most of the work awarded to contractors as a result of increased O&M requirements from GWOT and other contingencies. In the absence of such quantitative data, information is not available to determine whether the government’s costs are higher than they would have been had the contracted services work been performed by uniformed or DOD civilian personnel. While overall information was not available to determine whether increased services contracting has exacerbated O&M cost growth, DOD does maintain data on its competitive sourcing program, otherwise known as the A-76 public/private competition process, which allows a comparison of in-house and contract costs for some O&M-related work.
Specifically, DOD’s A-76 program data include in-house and contract cost information on contracts for work formerly performed by uniformed or DOD civilian personnel that were awarded to the private sector as a result of a public/private cost competition or, under certain conditions prior to May 2003, direct conversion to contractor performance. As shown in table 2, of the 1,112 total A-76 public/private competition decisions that were made between fiscal years 1995 and 2005, the military services decided to outsource the work in 570, or 51 percent, of the cases. These decisions resulted in contracting out the work formerly performed by over 39,000 uniformed and DOD civilian personnel. In the remaining cases, based on the public/private cost comparison, the military services decided to continue performing the work in-house. At the time of our review, the Army, Navy, Marine Corps, and Air Force had reported detailed contract cost data on 538 of the 570 A-76 decisions to outsource work. Our analysis of these data showed that the public/private competition decisions generally resulted in reducing the government’s costs for the work. Specifically, according to data reported during the first contract performance period, the Army estimated savings of about $33 million from 96 contracts, the Navy and Marine Corps estimated savings of about $74 million from 88 contracts, and the Air Force estimated savings of about $115 million from 354 contracts. Figures 7, 8, and 9 show each service’s reported A-76 outsourcing information for contracts resulting from both public/private competitions and direct conversion from government to contractor performance between fiscal years 1995 and 2005.

Although the services’ A-76 data show that decisions to outsource work were cost-effective, several limitations are associated with the use of this information. First, DOD officials noted that when work performed by uniformed personnel is outsourced, the personnel generally are assigned to other duties. Thus, while the cost to perform the outsourced work may be less than when it was performed in-house, the overall cost to the government may actually increase because the uniformed personnel continue to be paid to perform different work and a contractor is now paid to do the work formerly performed by the uniformed personnel. In addition, outsourcing of work formerly performed by uniformed personnel may increase O&M costs because military personnel appropriations are used to compensate uniformed personnel and O&M appropriations are used to pay contractors for services work. Second, compared to all O&M-related contracts, the number of A-76 public/private competition contracts is small and the results from this program may not be representative of the results from all services contracts for new or expanded O&M work. For example, for the 538 A-76 outsourcing decisions for fiscal years 1995 through 2005 with reported cost data, the total contract value for the first performance period was about $1.2 billion. Yet, in fiscal year 2005 alone, the value of DOD’s O&M-related services contracts exceeded $95 billion. Third, the available A-76 public/private competition information compares the contract costs with the cost estimates for work using government employees during the first contract performance period. Our review of contract costs in subsequent performance periods showed that contractor costs frequently grew and in many cases exceeded the government employee cost estimate in subsequent periods.
However, according to DOD cost information, the cost growth was usually attributed to requirements being added to the contract and contract wage increases, which the government employee cost estimate did not reflect. As a result, information is normally not available to determine whether the outsourcing continued to be cost-effective for those contracts that subsequently cost more than the estimate using government employees. Fourth, the reliability of the services’ reported A-76 public/private competition contract costs and savings appears questionable. The DOD Inspector General reported in November 2005 that DOD had not effectively implemented a system to track and assess costs of performance under the A-76 program. The report stated that because system users did not always maintain supporting documentation for key data elements and entered inaccurate and unsupported costs, and the military services used different methodologies to calculate baseline costs, DOD’s A-76 database included inaccurate and unsupported costs, and as a result, reported costs and estimated savings may be either overstated or understated. DOD officials noted that, while the estimated savings may be either overstated or understated, there were still savings and that DOD was in the process of addressing the report’s recommendations for improving the tracking system. During our visits to Fort Hood, Naval Air Station Pensacola, and Langley Air Force Base, we reviewed the accuracy of reported cost information on contracts awarded as a result of A-76 public/private competitions. According to information provided by Fort Hood officials, we found that actual contract costs were greater than the costs reported in the DOD A-76 database for one contract. However, the difference was less than 1 percent. At Naval Air Station Pensacola, there were no differences in the costs reported in the A-76 database and the actual costs for eight contracts awarded as a result of A-76 competitions. At Langley Air Force Base, we found some differences in the reported and actual costs for four contracts awarded as a result of A-76 competitions. For the four contracts over 4 years, the actual contract costs, according to installation officials, were about $250,000, or 5 percent, more than reported in the database. However, even with the increased actual costs, the contracts still showed considerable savings over the estimated costs using government employees. During our visits to Fort Hood, Naval Air Station Pensacola, and Langley Air Force Base, we reviewed examples of O&M-related work that was contracted out, or slated to be contracted out, either as a result of an A-76 public/private competition or because the uniformed personnel who formerly performed the work were needed to support other missions. According to installation officials, the outsourcing of work formerly performed in-house had not resulted in any unexpected funding or other consequences. Officials at each installation stated that their outsourcing efforts had resulted in reduced costs for performing the work and that they were satisfied with contractor performance. The following examples illustrate the outsourcing results from specific cases of work formerly performed in-house at the three installations we visited and in general show that the outsourcing efforts appeared to be cost-effective. In June 2000, as a result of an A-76 public/private competition, Fort Hood contracted the operation and maintenance of the installation’s firing range. 
During the A-76 competition, the cost estimate to continue performing the work in-house was $37.1 million over the 4-year and 7-month total performance period. The estimate was based on using 118 civilian and 11 military personnel to do the work. The work was awarded to a contractor who bid $30.8 million to perform the work. Fort Hood officials stated that between the time of the contract solicitation and the time the contractor took over range operations, changes occurred in unit training events and range operating standards which caused the work requirement to increase far above the level included in the solicitation’s statement of work. As a result, the officials stated that the contract was modified to provide for the increased workload and actual contract costs totaled $38.2 million through the end of the total performance period in December 2004. Although the contract costs exceeded the in-house estimate by $1.1 million, or 3 percent, Fort Hood officials stated that they were confident that the outsourcing was cost-effective because the in-house cost estimate would have exceeded the actual contract costs if the in-house estimate had included the cost of the workload subsequently added to the contract. The officials also stated that they were satisfied with the contractor’s performance. In January 2003, Fort Hood contracted the installation’s ammunition supply work because the uniformed personnel who formerly performed the work at Fort Hood were needed to help support the GWOT. According to installation officials, the work, which included the receipt, storage, and issue of training ammunition, had historically been performed by approximately 180 uniformed personnel, who were also responsible for completing collateral military duties. The officials stated that the work was converted to contractor performance by modifying an existing Fort Hood support services contract to add the ammunition supply work for about $1.8 million annually. According to the officials, the contractor used between 45 and 56 people to do the work, and performance metrics, such as inventory accuracy, improved after the contractor took over the work. Although an analysis was not performed to compare the contract cost with the cost to perform the work with uniformed personnel, Fort Hood officials stated that they believe that the outsourcing was cost-effective because the contractor was performing the work with far fewer people compared to the number of uniformed personnel who formerly did the work. The officials stated that a new contract for the work was awarded in June 2006 at an annual cost of about $2.3 million. The officials attributed the increase in contract costs to new requirements that were added to the scope of the work. In January 2001, as a result of an A-76 public/private competition, Naval Air Station Pensacola contracted the installation’s receipt, storage, and distribution of petroleum, oil, and lubrication products. The work had previously been performed by 14 civilian personnel at an estimated annual cost of about $700,000. During the A-76 competition, the cost estimate to continue performing the work using government employees was $355,000 annually based on reducing the number of employees needed to do the work to seven. Naval Air Station Pensacola officials stated that the work was awarded to a contractor who bid $250,000 annually to do the work. 
This amount was about $450,000 less than the original cost of the work and about $105,000 less than the estimate to continue performing the work in-house. Primarily because of added work requirements, reported data showed that the actual contract costs increased to $315,000 by the fifth contract performance period. Nevertheless, Naval Air Station Pensacola officials noted that this outsourcing effort continued to cost less than the estimated cost to perform the work in-house. The officials also stated that they were satisfied with the contractor’s performance.

In March 2002, as a result of another A-76 public/private competition, Naval Air Station Pensacola contracted the management of household goods shipments for military personnel arriving and departing the installation. The work had previously been performed by 21 civilian personnel at an estimated cost of about $6.1 million over a 5-year period. During the A-76 competition, the cost estimate to continue performing the work in-house was $3.8 million over the 5-year total contract performance period, based on streamlining the work and reducing the number of employees needed to do the work. Naval Air Station Pensacola officials stated that the work was awarded to a contractor who bid $2.8 million to perform the work over the total performance period. This amount was about $1.1 million less than the in-house estimate. Through the first 3 years and 3 months of the contract, reported data showed that the actual contract costs were about 13 percent higher than the contractor’s bid amount but were still less than the estimated cost to perform the work in-house. Naval Air Station Pensacola officials stated that contract costs were higher because of wage rate increases. The officials also stated that they were satisfied with the contractor’s performance.

In June 2000, as a result of an A-76 public/private competition, Langley Air Force Base contracted transient aircraft services work. During the A-76 competition, the cost estimate to continue performing the work in-house was $1.1 million annually based on using 14 military and 7 civilian personnel to do the work. According to Langley Air Force Base officials, the work was awarded to a contractor who bid $365,000 to perform the work, and the actual contract cost to perform the work during the first performance period was about $374,000. This amount was about $726,000, or about 66 percent, less than the estimated cost to do the work in-house. Although reported data showed that contract costs increased by 8 percent by the third contract performance period primarily because of wage rate adjustments, the contract still cost less than the estimated in-house cost to perform the work. Langley Air Force Base officials stated that they were satisfied with the contractor’s performance and that the contract was recompeted in 2003 and awarded at approximately the same cost.

In October 2001, as a result of another A-76 public/private competition, Langley Air Force Base contracted certain records management services. During the A-76 competition, the cost estimate to continue performing the work in-house was $643,000 annually based on using 13 uniformed personnel to do the work. According to Langley Air Force Base officials, the work was awarded to a contractor who bid about $337,000 to perform the work during the first annual performance period. This amount was $306,000, or about 48 percent, less than the estimated cost to perform the work in-house.
According to the available data and Langley Air Force Base officials, the actual contract cost during the first performance period was the same as the bid amount. Although reported data showed that contract costs increased to about $394,000 by the fifth contract performance period primarily because of wage rate adjustments, the officials noted that the cost was still less than the in-house estimate for the work. The officials also stated that they were satisfied with the contractor’s performance.

A recent DOD report provides another comparison of costs for work performed by contractors and government personnel. In this case, DOD found that contract security guards at domestic installations cost more than the use of guards employed by the government. However, as with the reported results from A-76 contracts, because the data used in DOD’s report are from a relatively small number of contracts, the results may not be representative of the results of all O&M-related services contracts. The John Warner National Defense Authorization Act for Fiscal Year 2007 required the Secretary of Defense to submit a report including an explanation of the Army’s progress in responding to our April 2006 report that assessed the Army’s acquisition of security guards and an assessment of the cost-effectiveness and performance of contract security guards. Our report noted that in the aftermath of the September 11, 2001, attacks, DOD sent numerous active-duty, U.S.-based personnel to Afghanistan, Iraq, and other destinations to support the GWOT. These deployments depleted the pool of military security guards at a time when DOD was faced with increased security requirements at its U.S. installations. To ease this imbalance, the Congress authorized DOD to waive a prohibition against the use of contract security guards at domestic military installations. The Army, the first service to use the authority, had awarded contracts worth nearly $733 million for contract guards at 57 installations as of December 2005. Our report also noted that the Army had relied heavily on sole-source contracts to acquire contract security guards, despite the Army’s recognition early on that it was paying considerably more for its sole-source contracts than for those awarded competitively. Our report made recommendations to the Secretary of Defense to improve management and oversight of the contract security guard program. In early 2007, DOD issued its report, which stated that the Army concurred with our recommendations and was in the process of resoliciting security guard contracts to increase the use of competition. In regard to comparing the costs of government-employed and contract security guards, DOD reported that the contract security guards were more expensive than the use of government guards. However, the amount of the cost difference varied widely depending on whether the contract was awarded competitively. In cases where the contracts were awarded competitively, the contracts cost about 5 percent more than the use of government guards. However, in cases where the contracts were not awarded competitively, the contracts cost about 42 percent more than government guards. DOD’s report also noted its view that the security guard contracts provided greater flexibility in this instance to adjust the workforce level up or down when the threat level changes, and that a performance test showed no difference in effectiveness between government and contract security guards.
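The percentage comparisons used in the installation case studies above and in DOD’s security guard report reduce to the same calculation: the difference between the contract cost and the government cost, expressed as a share of the government cost. The sketch below is illustrative only; the Langley figures come from the transient aircraft services example above, while the security guard figures are hypothetical stand-ins chosen simply to mirror the approximate 5 percent and 42 percent differences DOD reported.

def percent_difference(contract_cost: float, government_cost: float) -> float:
    # Positive values mean the contract cost more than the government cost;
    # negative values mean the contract cost less.
    return (contract_cost - government_cost) / government_cost * 100

comparisons = [
    # (description, contract cost, government cost)
    ("Langley transient aircraft services, first period", 374_000, 1_100_000),
    ("Hypothetical competitively awarded guard contract", 1_050_000, 1_000_000),
    ("Hypothetical sole-source guard contract", 1_420_000, 1_000_000),
]

for description, contract, government in comparisons:
    diff = percent_difference(contract, government)
    direction = "more" if diff > 0 else "less"
    print(f"{description}: about {abs(diff):.0f} percent {direction} than the government cost")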
DOD officials cited several benefits associated with the increased use of contractors for support services in certain circumstances. On the other hand, concerns over increased contracting have also been cited by the Congress, the military services, and us. DOD officials noted that when expanded military missions, deployments, and other contingencies increase operational requirements, additional personnel are needed to perform the extra work. For mission support work, additional personnel might be obtained from several alternatives, such as increasing the size of the active military force, mobilizing reserve forces, hiring government employees, or contracting for services with the private sector. In certain circumstances, the officials stated that increased use of contractor support to help meet expanded mission support work has certain benefits. For example, the officials noted that the use of contractors can provide a force multiplier effect when contractor personnel perform military support missions to allow more uniformed personnel to be available for combat missions. Moreover, contractors can provide support capabilities that are in short supply in the active and reserve components, thus reducing the frequency and duration of deployments for certain uniformed personnel. The officials also stated that obtaining contractor support in some instances can be faster than hiring government workers and, when there is uncertainty over the length of time that support services will be needed, the use of contractor support instead of government employees can be advantageous because it is generally easier to terminate or not renew a contract than to lay off government employees when the operations return to normal. Further, the officials stated that they believed that contracts for new and expanded requirements can be cost-effective when the contracts are subjected to the forces of competition in the private sector. Recently cited concerns associated with increased use of contractor support have included (1) the need for DOD to consider performing more work using government employees, (2) controlling support services contract costs, (3) reduced operational flexibility resulting from some outsourcing contracts, (4) the difficulty in ensuring accurate contract statements of work and sufficient contract oversight, and (5) questions on the adequacy of DOD’s services acquisition process. The National Defense Authorization Act for Fiscal Year 2006 required DOD to prescribe guidelines and procedures for ensuring that consideration is given to performing more work using government employees. Section 343 of the Act requires the Secretary of Defense to prescribe guidelines and procedures for ensuring that consideration is given to using government employees for work that is currently performed or would otherwise be performed by contractors. The guidance is to provide for special consideration to be given to contracts that (1) have been previously performed by federal government employees at any time on or after October 1, 1980; (2) are associated with the performance of inherently governmental functions; (3) were not awarded on a competitive basis; or (4) have been determined to be poorly performed due to excessive costs or inferior quality. In February 2007, DOD officials stated that they had been working on developing the required guidelines and that they planned to issue the new guidance in the near future. 
The officials also stated that the use of government employees instead of contractors to meet O&M-related requirements in some circumstances might result in savings.

Each of the military services has expressed concerns over increasing contract costs for support services. Citing the need to control costs, the Secretaries of the Army and the Air Force have issued policy memorandums calling for review and reduction in services contracts. For example, the Secretary of the Army stated in a January 2007 memorandum that he expected to see significant reductions in the number of Army contracted services personnel during the remainder of fiscal year 2007. Also, in a March 2006 memorandum, the Secretary of the Air Force set targets for realizing estimated savings in Air Force contract support services costs. Navy officials stated that although they have not issued any new policy statements on contracted services, the issue is a concern. The officials stated that the Navy proactively reduced its planned contractor support budgets in both fiscal years 2007 and 2008.

During our installation visits, local officials noted some concerns with outsourcing of support services. For example, Fort Hood officials stated that outsourcing of work formerly performed in-house can result in reduced flexibility in being able to quickly respond to changing requirements. The officials noted that in some instances when a new or different work requirement develops, uniformed and DOD civilian personnel can be reassigned to perform the tasks on a temporary basis or as a collateral duty. However, before contractors perform new or different work requirements, contract changes normally have to be negotiated, which can result in delays before the new work is started. Installation officials also noted concern over the difficulty in preparing accurate contract statements of work in order to avoid contract changes. Naval Air Station Pensacola officials stated that in some cases numerous contract changes occurred when the original statement of work did not anticipate or accurately define certain work situations. Further, installation officials cited concerns over ensuring adequate contract oversight. Officials at Naval Air Station Pensacola noted that ensuring adequate oversight becomes increasingly difficult as the number of contracts increases.

In November 2006, we reported that DOD’s approach to managing service acquisitions has tended to be reactive and has not fully addressed the key factors for success at either the strategic or transactional level. As a result, the growth in service contracting over the past 10 years was, in large part, not a managed outcome. Further, DOD did not always take the necessary steps to ensure customer needs were translated into well-defined contract requirements or that postcontract award activities resulted in expected outcomes. As a result, DOD was potentially exposed to a variety of risks, including purchasing services that did not fully meet customer needs or that should have been provided in a different manner or with better results. Also, in January 2007 testimony before the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services, we noted that long-standing problems with contract management continue to adversely affect service acquisition outcomes even as DOD has become more reliant on contractors to provide services for DOD’s operations.
For example, the lack of sound business practices—poorly defined requirements, inadequate competition, and inadequate monitoring of contractor performance—exposes DOD to unnecessary risk and wastes resources. We have found cases in which the absence of well-defined requirements and clearly understood objectives complicates efforts to hold DOD and contractors accountable for poor service acquisition outcomes. Likewise, obtaining reasonable prices depends on the benefits of a competitive environment, but we have reported on cases in which DOD sacrificed competition for the sake of expediency. Monitoring contractor performance to ensure DOD receives and pays for required services is another control we have found lacking. In the testimony, we noted that DOD has taken some steps to improve its management of services acquisition, and it is developing an integrated assessment of how best to acquire services.

DOD’s O&M costs and reliance on contractors to perform O&M-related work have increased substantially since fiscal year 2001. However, sufficient data are not available to determine whether increased services contracting has caused DOD’s costs to be higher than they would have been had the contracted activities been performed by uniformed or DOD civilian personnel. While we believe that there may be some merit in DOD developing more information on the cost-effectiveness of its O&M services contracts that fall outside of the A-76 public/private competition process, at this time we are not recommending that DOD do this for several reasons. First, performing the analyses to determine the estimated in-house costs to perform work awarded to contractors can be expensive and time-consuming. Second, according to DOD officials, contracting with the private sector may be the only alternative to meet certain requirements in the short term, such as when uniformed personnel must be diverted from performing peacetime work to supporting operational missions. Third, as long as DOD uses competition in its contract solicitations for new and expanded requirements and provides adequate contract oversight, cost efficiencies could be achieved through normal market forces.

DOD made no comments on a draft of this report except for technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or by e-mail at leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff members who made key contributions to this report are listed in appendix II.

To identify the trends in operations and maintenance (O&M) costs and services contracts and the reasons for the trends, we reviewed and analyzed the Department of Defense’s (DOD) O&M appropriations, budget documentation, and services contract costs and identified the related trends for fiscal years 1995 through 2005.
We used costs as reflected by total obligation authority, which includes regular O&M appropriations, any supplemental O&M appropriations, and any funding from other appropriation accounts transferred or reprogrammed into the O&M account during budget execution. To account for inflation, we adjusted cost data to constant fiscal year 2007 dollars using DOD’s adjustment factors (an illustrative sketch of this conversion follows appendix II). We discussed with DOD and service headquarters officials the reasons for the trends in O&M costs and how outsourcing of O&M activities formerly performed in-house has affected the overall O&M budget. We shared the results of our analyses with DOD and service officials and incorporated their comments as appropriate.

To discuss whether increased services contracting has exacerbated the growth of O&M costs, we determined the availability of information related to services contracts, including whether in-house cost estimates were available for all contracts for new or expanded work awarded as a result of the global war on terrorism (GWOT) and other contingencies. We also reviewed and analyzed information from DOD’s competitive sourcing, or A-76, program. Further, we visited three installations—Fort Hood, Texas; Naval Air Station Pensacola, Florida; and Langley Air Force Base, Virginia—to develop case study examples of O&M-related work that was contracted out either as a result of A-76 public/private competitions or because the uniformed personnel who formerly performed the work were needed to support other missions. Fort Hood and Langley Air Force Base were selected based on discussions with our requesters, and Naval Air Station Pensacola was selected based on recommendations from Navy officials who stated that the installation had a cross section of contracts for O&M work that was formerly performed in-house. At each installation, we reviewed O&M budget information and discussed with local officials any adverse consequences associated with contracting out O&M-related work. For the case studies highlighting examples of work that was outsourced to private contractors, we identified cost estimates for the work if performed by government employees, the reasons that the work was contracted out, the actual contract costs, and the reasons for any contract cost growth. We relied on cost data provided by the installation officials and did not review any actual contracts. However, we did review the accuracy of reported information on selected contracts awarded as a result of A-76 public/private competitions.

To provide perspectives on the benefits and concerns associated with increased contracting for support services, we discussed this issue with DOD officials. We also examined DOD’s response to recent legislation requiring DOD to give consideration to performing more work using government employees. We also discussed with DOD and service headquarters officials the effect of increased contracting for support services and reviewed steps recently taken by the military services to control service contract costs. We also discussed with installation officials concerns associated with outsourcing O&M-related work that was formerly performed in-house. Additionally, we summarized recent GAO reports that identified concerns with DOD’s acquisition of services. We conducted our review from August 2006 through March 2007 in accordance with generally accepted government auditing standards.

In addition to the contact named above, Mark A.
Little, Assistant Director; Alissa Czyz; Kevin Keith; Harry Knobler; Gary Phillips; and Sharon Reid made key contributions to this report.
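As noted in the scope and methodology discussion, cost data were adjusted to constant fiscal year 2007 dollars using DOD’s adjustment factors, and baseline O&M growth was examined by removing supplemental appropriations and net transfers and reprogrammings from total obligation authority. The sketch below illustrates both calculations. It is a minimal, illustrative example: the deflator values and dollar amounts are hypothetical and do not reproduce DOD’s published adjustment factors or the services’ actual obligation data.

# Hypothetical deflators expressing each fiscal year's price level relative to fiscal year 2007.
deflators = {2000: 0.85, 2005: 0.96, 2007: 1.00}

def to_constant_fy2007(amount: float, fiscal_year: int) -> float:
    # Convert a then-year amount to constant fiscal year 2007 dollars.
    return amount / deflators[fiscal_year]

def baseline_oandm(total_obligation_authority: float, supplemental: float, net_transfers_in: float) -> float:
    # Baseline O&M: total obligation authority less supplemental appropriations
    # and net transfers and reprogrammings into the O&M account.
    return total_obligation_authority - supplemental - net_transfers_in

def percent_growth(start: float, end: float) -> float:
    return (end - start) / start * 100

# Hypothetical then-year figures for one service, in billions of dollars.
fy2000 = {"toa": 18.0, "supplemental": 0.5, "transfers": 0.2}
fy2005 = {"toa": 48.0, "supplemental": 22.0, "transfers": 2.0}

toa_2000 = to_constant_fy2007(fy2000["toa"], 2000)
toa_2005 = to_constant_fy2007(fy2005["toa"], 2005)
base_2000 = to_constant_fy2007(
    baseline_oandm(fy2000["toa"], fy2000["supplemental"], fy2000["transfers"]), 2000)
base_2005 = to_constant_fy2007(
    baseline_oandm(fy2005["toa"], fy2005["supplemental"], fy2005["transfers"]), 2005)

print(f"Total O&M growth, fiscal years 2000-2005: {percent_growth(toa_2000, toa_2005):.0f} percent")
print(f"Baseline O&M growth, fiscal years 2000-2005: {percent_growth(base_2000, base_2005):.0f} percent")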
The Department of Defense (DOD) spent about 40 percent of the total defense budget to operate and maintain the nation's military forces in fiscal year 2005. Operation and maintenance (O&M) funding is considered one of the major components of funding for readiness. O&M appropriations fund the training, supply, and equipment maintenance of military units as well as the infrastructure of military bases. Over the past several years, DOD has increasingly used contractors, rather than uniformed or DOD civilian personnel, to provide O&M services in areas such as logistics, base operations support, information technology services, and administrative support. The House Appropriations Committee directed GAO to examine growing O&M costs and support services contracting. This GAO report (1) identifies the trends in O&M costs and services contracts and the reasons for the trends, (2) discusses whether increased services contracting has exacerbated the growth of O&M costs, and (3) provides perspectives on the benefits and concerns associated with increased contracting for support services. GAO analyzed DOD's O&M appropriations, budgets, and services contract costs over a 10-year period and developed case studies of outsourced O&M-related work at three installations. GAO is not making any recommendations. DOD made only technical comments on a draft of this report. DOD's O&M and services contract costs increased substantially between fiscal years 1995 and 2005, with most growth occurring since fiscal year 2001. DOD's O&M costs were almost constant between fiscal years 1995 and 2000. However, between fiscal years 2000 and 2005, DOD's O&M costs increased from $133.4 billion to $209.5 billion--an increase of $76.1 billion, or 57 percent, in constant fiscal year 2007 dollars. This growth was primarily caused by increased military operations associated with the global war on terrorism and other contingencies. In addition to increased O&M costs, DOD has increasingly relied on contractors to perform O&M-related work. Between fiscal years 2000 and 2005, DOD's services contract costs in O&M-related areas increased by 73 percent. According to DOD and service officials, several factors have contributed to the increased use of contractors for support services: (1) increased O&M requirements from the global war on terrorism and other contingencies, which DOD has met without an increase in active duty and civilian personnel, (2) federal government policy, which is to rely on the private sector for needed commercial services that are not inherently governmental in nature, and (3) DOD initiatives, such as its competitive sourcing and utility privatization programs. Sufficient data are not available to determine whether increased services contracting has caused DOD's costs to be higher than they would have been had the contracted activities been performed by uniformed or DOD civilian personnel. Because existing policy generally does not require a public/private competition for contractor performance of a new or expanded commercial requirement, in-house cost estimates have not been prepared for most of the work awarded to contractors as a result of increased O&M requirements from expanded military operations. Without this information, an overall determination cannot be made of the effect of increased services contracting on O&M cost growth. DOD does maintain data from its competitive sourcing, or A-76, program. 
GAO's analysis of the military services' reported information on 538 A-76 decisions during fiscal years 1995 through 2005 to contract out work formerly performed by uniformed and DOD civilian personnel showed that the decisions generally resulted in reducing the government's costs for the work. However, the number of A-76 public/private competition contracts is relatively small and the results from this program may not be representative of the results from all services contracts for new or expanded O&M work. Although DOD officials have cited certain benefits from increased use of contractors for support services, such as allowing more uniformed personnel to be available for combat missions, concerns have also been cited. For example, Congress recently required DOD to prescribe guidelines giving consideration to performing more work using government employees and GAO has noted concerns over DOD's approach to services acquisition.